There's been a lot of hype this week over the first so-called AI Actress, Tilly Norwood. Created by Xicoia, the AI division of the production company Particle6, Norwood has appeared in the AI-generated and frankly abysmal comedy sketch "AI Commissioner".
To give it its due, the AI-rendered characters are quite realistic and mostly avoid the "uncanny valley" effect, where fake humans become unsettling to us because they are just close enough to reality and yet not real enough to be convincing. We only have Xicoia's word for it that these are indeed 100% AI-generated, though. As with all slick product demos and clinical trials, you only get to see the best results, with no indication of how many duff runs they had to throw away to get here.
In amongst all the debate and criticism of AI-generated actors, there is another question which I've not seen addressed yet: to what extent is anything you see on screen real? We're quite happy to sit through animated movies that have only ever existed as ones and zeros in a computer. Even relatively mundane stories in live-action movies will have had some kind of visual effects applied in post-production, even if it's just to make daytime shots look like night or to conceal a water bottle accidentally left in shot by one of the cast.
The difference between visual effects and AI generation is the amount of input from a human being. If we take Xicoia's claims at face value, then everything in their skit was generated by an autonomous computer system. This is clearly, to use a technical term, bollocks. Gen AI doesn't have ideas; it responds to prompts entered by a human being. Without seeing the prompts used and the process followed, we have no idea what the AI actually produced of its own accord. If it was given detailed shot-by-shot descriptions of the video, then it's little more than an expensive and inefficient rendering engine.
On the other hand, if the system was given a vaguer prompt, then what we have is a mere aggregation of the content that it was trained on in the first place. If it's asked for a scene showing a man walking through an office, it's going to render something that is some kind of average of all the scenes in its training set labelled "man walking through office". Because that's what Gen AI does. It doesn't have new ideas of its own.
The AI actor herself, Ms Norwood, might look more human than most animated characters, but that's all she really is. She can't do interviews. She can't do personal appearances. She can't get audience bums on seats when every other movie studio can spin up their own AI star-du-jour in less time than it takes to read through the script. And in due course, when computing power increases sufficiently, we won't even go to see a movie because we'll have our own movie-slop farm at home.
As with so much of AI, we're being told that the output is human-level just because it looks and sounds like a human. We see intelligence in a chatbot's output because we're used to seeing intelligence expressed in writing. We see creativity in an AI movie scene because we're used to seeing creativity in movies. But it's just an illusion, a parlour trick that amuses for a few minutes but nothing more.
Venture capitalists and Silicon Valley tech bros like to talk about "product-market fit", where your product is developed specifically to fit the needs of your target market. I'm struggling to understand what market Xicoia think they're targeting. An AI that just dumps out a facsimile of the content it was fed is never going to produce the original creativity that both directors and audiences want. You could argue that much of Hollywood's output is so derivative that it might as well be AI-generated, but that just leads to a race to the bottom. Ever cheaper, ever more generic movies that all merge into an amorphous blob of entertainment can never hit the kind of mass-market appeal that movie producers need to make money.
Hollywood studios are businesses, and like every business their operating policy is essentially "how much does this cost, how much can I sell it for, and how much profit can I make?". At the moment, AI slop is subsidised by venture capital, but when its true costs are borne by the studios it's going to have to pay its way. It needs to create content that is good enough that people will pay to see it, and pay enough that the studio can make money. And how much will people pay for AI slop? Not a lot, I suspect.
