Artificial intelligence means anyone can cast Hollywood stars in their own films

Free AI software is primed to strip away the control of studios and of the actors who appear in films

For years, the only way to create a blockbuster film featuring a Hollywood star and dazzling special effects was through a major studio. The Hollywood giants were the ones that could afford to pay celebrities millions of dollars and license sophisticated software to produce elaborate, special effects-laden films. That’s all about to change, and the public is getting a preview thanks to artificial intelligence (AI) tools like OpenAI’s DALL-E and Midjourney.

Both tools use images scraped from the internet and curated datasets like LAION to train their AI models to reconstruct similar yet wholly original imagery from text prompts. The AI images, which range from photographic realism to mimicry of famous artists’ styles, can be generated in as little as 20 to 30 seconds, often producing results that would take a human hours to create. Take, for example, the “detailed glowing synthwave diagram of Japanese mech robot” illustration below.
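For the technically curious, running one of these models takes only a few lines of code. The sketch below uses Hugging Face’s open-source diffusers library, one common way to run Stable Diffusion (the model ID and settings are illustrative, and a CUDA-capable GPU is assumed); it is not the internal pipeline of DALL-E or Midjourney.

    # Minimal text-to-image sketch using the open-source diffusers library.
    # Assumptions: `pip install diffusers transformers torch` and a CUDA GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load the publicly released Stable Diffusion weights (illustrative model ID).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # A short text prompt, like the one quoted above, drives the whole process.
    prompt = "detailed glowing synthwave diagram of Japanese mech robot"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("mech_robot.png")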

These paid tools generally prevent the direct appearance of celebrities in their outputs. But with the arrival of free, open-source AI image-generation software, users are finding workarounds. The most popular is Stable Diffusion, produced by UK-based startup Stability.AI. The open-source tool allows users to generate original imagery of any celebrity found on the internet. One particularly convincing example (below) shows Bryan Cranston (Breaking Bad) donning the costume of a Game of Thrones character, putting the iconic actor in a setting many fans could only dream of.

What began as AI image generation has quickly evolved into experimental animations that combine Stable Diffusion with free complementary tools like Deforum, Google Colab, and Ebsynth. In some cases the add-ons require a bit of code manipulation, but YouTube, Discord, and Reddit are filled with easy-to-follow tutorials that make them accessible to nearly anyone. These cobbled-together animations (the core technique is sketched below) are just the start. Full-fledged original AI videos derived from text prompts are already on the way.
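Under the hood, many of these animation hacks rely on one core trick: feeding each generated frame back into Stable Diffusion’s image-to-image mode so the picture evolves gradually over time. The loop below is a minimal sketch of that technique using the diffusers library (file names and settings are illustrative); tools like Deforum layer camera motion and many more controls on top of it.

    # Sketch of the frame-chaining trick behind many Stable Diffusion animations:
    # each output frame becomes the starting image for the next one.
    # Assumptions: `pip install diffusers transformers torch pillow` and a CUDA GPU.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "detailed glowing synthwave diagram of Japanese mech robot"
    frame = Image.open("seed_frame.png").convert("RGB")  # any 512x512 starting image

    for i in range(24):  # two dozen frames, roughly a second of video
        # Low strength keeps each frame close to the previous one for smoother motion.
        frame = pipe(prompt=prompt, image=frame, strength=0.4).images[0]
        frame.save(f"frame_{i:03d}.png")
    # The saved frames can then be stitched into a clip with a tool like ffmpeg.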

Earlier this year, Google’s Imagen Video and Meta’s Make-a-Video offered previews of AI tools that can generate photorealistic video from text prompts. Open-source versions of these kinds of AI tools are already being developed, a dynamic that will likely lead to the same kind of uncensored content produced with Stable Diffusion.

If the evolution of AI video tools mirrors the rapid improvement of 2D AI image generators, the public may see its first full-length, completely AI-generated film in the next one to two years.

The potential abuse of AI video-generation tools to violate the intellectual property rights of Hollywood studios, and the rights of the actors themselves, is among the many problems that will arise in the coming months. Soon, almost anyone using AI tools may be able to cast celebrities in unauthorized films featuring everything from violent sequences to sexually explicit scenes.

For example, the standard install of Stable Diffusion includes safety filters to prevent explicit imagery from being generated, but the open-source community rapidly came up with code to defeat those guardrails. While sexual content is not the most popular use case among the roughly 10 million users of Stable Diffusion, there’s at least one Reddit community with more than 6,400 members dedicated to exploring how to use Stable Diffusion to produce sexually suggestive and nude imagery.

In September, when publicly questioned about the possibility of people using Stable Diffusion in potentially problematic ways, such as in the creation of porn, the company’s CEO Emad Mostaque shrugged off responsibility.

“If people use technology to do illegal stuff then it is their fault,” Mostaque tweeted. “This is the basis for liberal society.” In the same thread, Mostaque wrote that “if people use this to copy artists styles or break copyright they are behaving unethical and it is literally traceable as outputs are deterministic.”

On Oct. 17, Stability.AI received a $101 million investment to continue developing its AI models, as well as DreamStudio, a proprietary, paid version of its AI model that doesn’t require the technical skills needed to install and operate Stable Diffusion.

But along with Stability.AI’s new billion-dollar valuation has come an apparent shift in the company’s approach to its business. Its public messaging changed with the latest release, Stable Diffusion 1.5, which, according to the startup, has attracted government interest in its free AI tool being used by millions.

“We’ve taken a step back at Stability AI and chose not to release version 1.5 as quickly as we released earlier checkpoints,” wrote Stability.AI’s chief information officer, Daniel Jeffries, on Oct. 20. “We’ve heard from regulators and the general public that we need to focus more strongly on security to ensure that we’re taking all the steps possible to make sure people don’t use Stable Diffusion for illegal purposes or hurting people.”

This extra caution from the Stable Diffusion team may be welcomed by the movie and music studios (there’s an AI music-generation tool in the works as well) that stand to be affected in the coming months and years.

Nevertheless, the next independent film starring one’s favorite actor may soon be produced in the bedroom of someone using an AI video generator to create unlikely Hollywood mashups and put them on the internet, legal consequences be damned. At this point, the question isn’t if this will happen, but when.
