Artificial Intelligence Archives - TheWrap https://www.thewrap.com/category/artificial-intelligence/ Your trusted source for breaking entertainment news, film reviews, TV updates and Hollywood insights. Stay informed with the latest entertainment headlines and analysis from TheWrap. Sat, 04 Oct 2025 06:59:34 +0000 en-US

OpenAI’s Sam Altman Changes Copyright Controls, Offers Monetization for Sora After Hollywood Raises Concerns https://www.thewrap.com/openai-sora-update-copyright-controls-monetization/ Sat, 04 Oct 2025 06:38:15 +0000 https://www.thewrap.com/?p=7858203 "We have been learning quickly from how people are using Sora and taking feedback," the CEO writes

The post OpenAI’s Sam Altman Changes Copyright Controls, Offers Monetization for Sora After Hollywood Raises Concerns appeared first on TheWrap.

OpenAI CEO Sam Altman issued a major update for the AI video app Sora in the aftermath of copyright concerns from Hollywood.

The tech power player released a statement Friday evening in which he confirmed two changes coming to the app — one of which will give more copyright control to rightsholders.

“We have been learning quickly from how people are using Sora and taking feedback from users, rightsholders and other interested groups,” Altman wrote in a new blog post. “We of course spent a lot of time discussing this before launch, but now that we have a product out we can do more than just theorize.”

As he continued, Altman noted the two changes were the first of “many more to come.”

“First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls,” he continued. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all).”

Sora’s initial opt-out policy required companies to explicitly say they didn’t want their IP to appear on the app; otherwise, copyrighted content could surface on Sora. Disney has reportedly already opted out.

This sparked concern in Hollywood, prompting a response from Altman, who theorized users will “try very different approaches and will figure out what works for them.”

He added, “We want to apply the same standard towards everyone, and let rightsholders decide how to proceed (our aim of course is to make it so compelling that many people want to). There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.”

Altman then noted Sora would be experimenting in ways to monetize the videos, too. In particular, the CEO explained they would be exploring revenue sharing with rightsholders when their characters are used.

“The exact model will take some trial and error to figure out, but we plan to start very soon,” Altman said. “Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we want both to be valuable.”

Before signing off his note, Altman warned that there would likely be both “good decisions and some missteps” as they implement the changes.

AI Actress Tilly Norwood Rejected by More Acting Unions as ‘Nothing but Lines of Code’ https://www.thewrap.com/ai-actress-tilly-norwood-actra-equity-union-rejection/ Thu, 02 Oct 2025 22:17:19 +0000 https://www.thewrap.com/?p=7856801 Canada's ACTRA union and the U.K.'s Equity join SAG-AFTRA in denouncing the suggestion that the "actress" could replace human performers

The post AI Actress Tilly Norwood Rejected by More Acting Unions as ‘Nothing but Lines of Code’ appeared first on TheWrap.

One day after SAG-AFTRA denounced the suggestion that an AI “actress” named Tilly Norwood could soon be signed to a Hollywood talent agency, more acting unions spoke out Thursday, saying the performer is “nothing but lines of code.”

Canada’s ACTRA union and the U.K.’s Equity joined SAG-AFTRA in criticizing the new tech development and argued that AI cannot replace human talent. Their rejection came days after rumors of the AI actress lit up Hollywood.

“Like any other art form, you can’t just copy and repackage it as your own,” the official X account for ACTRA posted Thursday. “The recent ‘synthetic performer’ case is a wake-up call for lawmakers — a clear reminder why moral rights matter. Unchecked AI must be regulated now.”

In a separate statement to the Hollywood Reporter, ACTRA national executive director and chief negotiator Marie Kelly added: “Tilly’s existence is nothing but lines of code, wrongfully based and programmed from actual human performance. There is no place in our industry, and no use in the humanity of art, for replacing performers with synthetics. ACTRA rejects any attempt to do so.”

Kelly continued: “Performers are concerned about their craft, their place in the world of entertainment and their livelihoods. They have always competed against thousands of other performers for work but are now faced with synthetic competition. Aside from the fact that the synthetic ‘performer’ doesn’t eat, consume goods, pay taxes or otherwise contribute to our society, they don’t engage audiences using human creativity.”

Equity echoed those statements in its own, calling for the “Wild West” of AI to end and saying “robust protections must be implemented to ensure artists’ work is not stolen.”

“Equity is supporting a member who believes her image and performance is included in the creation of the new AI actress without her permission,” the statement read. “The lack of transparency around this – and so many other – AI creations represent these problems. The industry desperately needs a system of transparency, consent and remuneration to ensure that performers’ rights are respected and upheld.”

The statement concluded: “Technological advancements must not come at the expense of those who bring art to life.”

These latest union statements came after Dutch actress, comedian and digital producer Eline Van der Velden suggested at a Zurich summit last week that her creation Tilly Norwood could get signed by an agency “in the coming months.”

On Tuesday, SAG-AFTRA also decried the idea of AI actors gaining representation and replacing human performers.

“SAG-AFTRA believes creativity is, and should remain, human-centered. The union is opposed to the replacement of human performers by synthetics,” the Screen Actors Guild shared in their statement.

“To be clear, ‘Tilly Norwood’ is not an actor, it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion and, from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience,” they continued. “It doesn’t solve any ‘problem’ — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.”

Actors like Melissa Barrera, Emily Blunt, Simu Liu, Lukas Gage, Mara Wilson and Nicholas Alexander Chavez all spoke out about artificial intelligence in the entertainment industry.

Beyond the Tilly Norwood Hype, Studio Execs Explain How AI Is Actually Being Used https://www.thewrap.com/tilly-norwood-ai-actor-hype-technology-capabilities/ Thu, 02 Oct 2025 13:00:00 +0000 https://www.thewrap.com/?p=7855682 TheGrill 2025: Execs reject the idea of a synthetic actor and say AI's arrival in Hollywood is happening in quieter ways

The post Beyond the Tilly Norwood Hype, Studio Execs Explain How AI Is Actually Being Used appeared first on TheWrap.

Days after the AI-generated “actress” Tilly Norwood sparked an uproar in Hollywood, much of the discussion at TheWrap’s annual business conference TheGrill revolved around how the new technology is actually being used on productions and in studio offices here and now. Put simply, is AI tech even capable of creating an AI “actor” right now?

Despite claims from creator Eline Van der Velden that she and her company, Xicoia, had received interest from talent agencies, no one at TheGrill believed that AI actors are seriously going to be a part of Hollywood anytime soon.

“We are in the human business. We have been in the human business. We’re going to continue to always be in the human business,” WME co-chair Richard Weitz said after saying the agency wasn’t interested in signing Norwood. “We’re not interested in taking the best of our actors and the actors in their community and being put in an AI model.”

AI actress Tilly Norwood does not have a future being signed at WME, co-chairman Richard Weitz tells TheWrap. (Getty Images)

Yves Bergquist, director of the USC Entertainment Technology Center’s “AI in Media” program, was even more blunt, dismissing it as a “gimmick.”

The speakers at TheGrill joined a chorus of individuals, such as actors Melissa Barrera and Simu Liu, and organizations like SAG-AFTRA in denouncing the idea that AI “actors” could receive the same kind of treatment as humans, raising the question of whether the noise around Tilly Norwood was all just a bid to get attention. After all, the idea of AI replacing humans is a universal fear and a big reason why it’s still considered a “dirty word” in Hollywood. Norwood directly strikes that nerve.

“It is the sort of virus that has been plaguing the discussion around AI that I have been talking about day in and day out,” Bergquist said on a panel at TheGrill. “AI music has been a possibility for years and years. You don’t have any major AI artists out there.”

“I think that this is all evolving, but it’s not clear that just synthetic actors are adding utility of itself, so why do that?” Jon Zepp, head of entertainment, content and platforms at Google, said on a separate panel on AI. And Google has gone all-in on the technology.

That’s because an AI-generated “actor” would stretch the limits of what the technology is capable of right now, with even stills or short videos of an AI character at times flirting with the uncanny valley. TheGrill conference took place the same day that OpenAI unveiled Sora 2, a new video generation model that promises to be a step up in capabilities over the original. But whether it’s something studios would want to use remains up in the air.

The rejection of Norwood, which represents just one facet of AI, doesn’t mean there aren’t broader applications of the technology, which executives at TheGrill explored in more depth. They touched on aspects like the ability to streamline production schedules, create shareable clips of content in a fraction of the time and even generate AI versions of notable personalities as part of a marketing stunt.

Beyond the hype

AI is already being put to use, even if the applications aren’t sexy.

Fox CTO Melody Hildebrandt and Universal VP of Creative Technologies Annie Chang, who spoke alongside Bergquist on the same panel, said that many of the immediate ways AI is being used in entertainment are invisible to the public.

At Universal, production execs are using AI to help break down scripts and organize them into efficient shooting schedules, enabling productions to start rolling cameras faster, Chang said, adding that the tools are useful to generate rough visual approximations of ideas and concepts that allow creatives to better communicate their vision to others.

Hildebrandt also noted that during a time when many TV viewers are watching clips of shows, particularly late night, in YouTube videos and TikTok snippets, AI can help studios scan their content libraries for the most shareable clips.

“We can actually be present in those platforms and make our content discoverable, make it more searchable,” she said.

That’s not to say AI’s impact is completely invisible. AI-generated video has been used by Fox Sports in video packages for its recent broadcasts, including a 20-second video recapping the career of four-time NFL MVP Aaron Rodgers that aired earlier this month.

Last month, at a special MLB game at Bristol Motor Speedway, Fox showed an AI clip of its pregame host Kevin Burkhardt in a NASCAR race against baseball greats and Fox analysts David Ortiz, Derek Jeter and Alex Rodriguez, with the four men watching the AI video live.

“It was a hilarious segment, just really good vibes and fun to watch, and it allowed us to kind of cross-promote NASCAR and MLB with new audiences,” Hildebrandt said. “That was a creative concept that you have a director of marketing come up with and then essentially execute the entire concept in a matter of days to take advantage of the window of opportunity.”

Bergquist says that while major studios are figuring out how to implement AI into immense, well-established production pipelines, AI will have a larger creative impact on an individual level as filmmakers who are just getting started will use the technology in ways that will allow them to get productions done much faster.

Feeling the squeeze

Of course, as that generational shift takes place, countless creative artists will get caught in the crossfire. Last year, members of the Art Directors Guild told TheWrap that they were voting against IATSE’s bargaining agreement because they felt the agreement did not provide members with enough protection against AI automation.

ADG-covered positions like concept artists are among the top positions facing immediate automation, and studio execs like Chang have said that AI’s ability to generate immediate concept art has become an increasingly common part of project pitches.

“A lot of artists have had and will continue to have their styles and artistic identities taken and absorbed into these systems, and the result is going to be very derivative output that is going to affect the quality of these productions,” industrial designer Matthew Cunningham told TheWrap last year.

Recently, independent tech journalist Brian Merchant shared stories of people who have lost jobs to automation, and earlier this month turned his attention to graphics and concept artists. One anonymous respondent said he built his career doing graphics work on B-roll footage for TV history documentaries, work that has since been replaced by AI.

“As much as I would like to say viewers will reject the AI style and demand a return to human-made art, I’m not convinced it will happen,” the artist wrote. “Even if it did, it might soon be too late to turn back. I know that there are studios with expert producers, writers and showrunners with decades of experience in this exact genre who are closing their doors.”

When asked about the impact of AI on human work, Chang said she did not foresee a future in which Universal completely removed humans from any part of the production process even as the studio seeks ways to increase efficiency.

AI would struggle with specific color needs of filmmakers, such as the iconic green tint of “The Matrix.” (Warner Bros.)

One example was color grading, a common part of VFX post-production that changes the color of footage such as the iconic green tint of “The Matrix.” When experimenting with AI, Chang and her team at Universal found that the output of automated color grading is not yet up to proper Hollywood quality.

“It kind of reaffirmed to us that even with AI, you still need the constant presence of humans to control the output,” she said. “If we compromise our creativity, we compromise our business model.”

Ultimately, that initial spark will have to come from a human being.

“There’s combinatory creativity, which takes parts of already existing things and creates something from that, which AI does well,” added Bergquist. “And then there’s change creativity, which imagines something entirely new, and that’s never going to be something AI can do.”


Meta Suggests Joseph Gordon-Levitt Is Against Its AI Because His Wife Worked for OpenAI https://www.thewrap.com/joseph-gordon-levitt-meta-ai-new-york-times/ Wed, 01 Oct 2025 21:48:05 +0000 https://www.thewrap.com/?p=7855735 The "Inception" actor called out the company and Mark Zuckerberg for developing AI "designed to prey on kids"

The post Meta Suggests Joseph Gordon-Levitt Is Against Its AI Because His Wife Worked for OpenAI appeared first on TheWrap.

Meta slammed Joseph Gordon-Levitt after the actor participated in a New York Times op-ed calling out Mark Zuckerberg over the company’s AI chatbots, which the actor said are “designed to prey on kids.”

Company spokesperson Andy Stone even suggested in a Wednesday post on X that Gordon-Levitt was against the tech giant because his wife, Tasha McCauley, formerly served on the board of Meta rival OpenAI.

“What qualifies an ‘actor and filmmaker’ to weigh in on AI issues (and to make a bunch of inaccurate claims)? Must be because, as @nytopinion buries in the last two seconds, his wife is a former OpenAI board member,” Stone wrote.

Gordon-Levitt’s video op-ed was published Tuesday by The New York Times. He said that as the father of an 8-year-old, the idea of chatbots engaging in “synthetic intimacy” with minors made him “livid.” The actor called Zuckerberg out for choosing “lots and lots of money” over implementing safety regulations for minors’ interactions with his company’s AI tools.

“It’s hard to describe how angry this makes me,” Gordon-Levitt said in the video. “It’s not known how many kids have already been exposed to this kind of synthetic intimacy.”

He added: “A bunch of big names in Silicon Valley, including Meta, launched two new Super PACs committing up to $200 million toward suppressing AI regulation. They’re worried that American voters – both Republicans and Democrats – mostly agree that there should be laws that protect our kids from these predatory companies and their algorithms.”

McCauley was once part of a four-person board coup at OpenAI that briefly removed Sam Altman as CEO. Altman returned to the position after only a few days amid a surge of internal support. McCauley and the others involved were eventually replaced at the company.

Representatives for the New York Times did not immediately respond to TheWrap’s request for comment, but in a statement to the New York Post, a spokesperson said, “This is a guest video created for the Opinion section, where perspectives from a variety of points of view and backgrounds are published every day … The indirect and non-pertinent connection to OpenAI is clearly disclosed and has no bearing on the piece.”

WME Won’t Sign AI Actress Tilly Norwood, Leaders Say: ‘We Represent Humans’ https://www.thewrap.com/wme-will-not-sign-ai-actress-tilly-norwood/ Tue, 30 Sep 2025 20:29:28 +0000 https://www.thewrap.com/?p=7854663 TheGrill 2025: “If she has a future, it won't be at WME,” co-chairman Richard Weitz tells TheWrap

The post WME Won’t Sign AI Actress Tilly Norwood, Leaders Say: ‘We Represent Humans’ appeared first on TheWrap.

Tilly Norwood, the “AI actress” who caused a Hollywood uproar over reports that she is being shopped for talent agency representation, does not have a future at WME Group.

Agency leadership — President Mark Shapiro and co-chairmen Christian Muirhead and Richard Weitz — said at TheWrap’s 2025 Grill conference on Tuesday that the company is not interested in representing the AI actress: “If she has a future, it won’t be at WME. We represent humans,” Weitz told TheWrap founder, CEO and editor-in-chief Sharon Waxman.

“We are in the human business. We have been in the human business. We’re going to continue to always be in the human business,” Weitz continued, speaking on a morning panel titled “WME: The Next Chapter.” “We’re not interested in taking the best of our actors and the actors in their community and being put in an AI model. So nope, we’re not going to represent her. We weren’t approached by her and I don’t think that that’s going to be the future for us.”

Muirhead also cited recent comments from SAG-AFTRA, which argued that “the audience is looking for a human connection.” “There is no human connection, there is no light in the eyes,” he said, “and I don’t think that’s the business we are interested in.”

Shapiro, who also serves as president and COO of TKO Group Holdings, which owns UFC and WWE, called the idea “ridiculous” but added: “There is going to be an AI actor, actress that’s coming at some point. That will happen. But that’s not the business WME is in right now, nor is it a place we think we want to go.”

Dutch actress, comedian and digital producer Eline Van der Velden, who created Norwood, made the bold claim over the weekend at a Zurich summit that the AI actress will be signed by an agency “in the coming months.”

WME wasn’t the only megawatt agency to speak out against Tilly this week. Gersh Agency president Leslie Siebert told Variety in an interview published Tuesday that Norwood was “frightening” and that Gersh will not sign her. “That said, it’s going to keep coming up, and we have to figure out how to deal with it in the proper way,” she said. “But it’s not a focus for us today.”

The quick and vocal reaction from critics across the industry puts a spotlight on the underlying concern that Hollywood — and most people — have about AI: that it’s coming for our jobs. The idea of an AI-generated character garnering interest from talent agencies reinforces the notion that no one is safe.

Van der Velden told Broadcast International she hopes Norwood will be “the next Scarlett Johansson or Natalie Portman.” She said audiences will ultimately determine whether AI talent succeeds. “Audiences care about the story — not whether the star has a pulse,” she wrote on LinkedIn.

Fox and Tubi on AI: 5 Insights From the Pros Who Regularly Use the New Tech https://www.thewrap.com/fox-tubi-ai-tech-uses-melody-hildebrandt-nicole-parlapiano/ Thu, 25 Sep 2025 19:52:21 +0000 https://www.thewrap.com/?p=7850787 From the best way to get their teams utilizing AI effectively to what the technology does to the talent pipeline, here are some insights from executives using it now

The post Fox and Tubi on AI: 5 Insights From the Pros Who Regularly Use the New Tech appeared first on TheWrap.

When it comes to artificial intelligence, Melody Hildebrandt prefers to get her hands dirty.

The chief technology officer of Fox keeps a finger on the pulse of what’s happening in technology by digging into AI models herself to see what they’re capable of.

“I have to spin up this app and write an app from scratch to actually be, like, ‘Wow this is incredible,’” she told TheWrap. “There is really no substitute for being hands on.”

Hildebrandt and Nicole Parlapiano, chief marketing officer of Tubi, joined me in a roundtable session called “AI in Hollywood: Recoding Content and Creativity.” The discussion touched on different facets of AI, from how to get employees to embrace the technology to the inevitable question about job displacement.

Hildebrandt and Parlapiano’s comments offer a glimpse into how major media companies are employing AI, as well as their strategies for ensuring it’s done in an effective and logical manner. It’s an acknowledgement that even as some in Hollywood regard AI as a “dirty word,” many are embracing the technology and the advancements it brings.

The following are five of the most insightful things I learned about their use of AI.

A bottom-up approach to AI

Too many C-suite executives are mandating the use of AI by their employees without giving proper guidance or even understanding what that means. It’s understandable that companies want their employees to be comfortable with the rise of AI, but blanket edicts aren’t effective. 

Besides, at least 800 million people each week use OpenAI’s ChatGPT, so they’re probably figuring this stuff out on their own. 

But when it comes to work, a more effective approach would be to figure out a problem you want to solve, and determine what AI tools can provide a solution, according to Parlapiano. 

“Using AI and having all these tools just to have them is not a really good place to start,” she said. “If you’re thinking about a problem or gap in the business, and you’re thinking about how (AI) can help you overcome it, I think it’s a lot cleaner.”

Hildebrandt said Fox is providing tools to employees based more on demand, with people coming to her central team asking about AI models they’ve heard about from other companies and asking for advice on how to utilize or even experiment with them. 

A contrarian view on jobs

The common belief/fear is that AI is coming for our jobs. Anthropic CEO Dario Amodei said he believes AI will wipe out half of all entry-level white-collar jobs over the next one to five years — which may be a bit of an aggressive estimate.

That entry-level aspect is worrisome, with AI potentially killing off talent pipelines.

But neither Hildebrandt nor Parlapiano believes that’s the case.

“We’re in entertainment. If we don’t have young people on our teams, we don’t have a forward-thinking business,” Parlapiano said. “If I don’t have young people coming in, then they’re not giving us a perspective on what people want to watch, what’s trending, what’s cool, what talent we should be excited about.”

Hildebrandt noted that one benefit of hiring younger employees is that some are already versed in AI. She mentioned an engineer that her team hired out of college who told her that gen AI coding tools made him 10 times the engineer he was before. 

“The bar has been raised where you expect more out of entry-level talent,” she said. “You’re not just farming out grunt-level work now to entry-level talent. We have high expectations now that people who are hitting the ground are able to move to the next level.”

Parlapiano added: “They have been living this, and I think as you see on a lot of Gen AI tools, they are adopting way faster than older cohorts.” 

Hildebrandt noted that middle managers may feel the squeeze, with entry-level employees leveling up and higher-level managers doing more hands-on work.

As for jobs in general, Parlapiano made the point that marketing departments already run lean despite high demands, and that AI will be able to remove some of the time-consuming tasks rather than outright replace anyone. 

“I don’t see that fear on my team and with the people I’m working with,” she said. “We’ve all been kind of playing a game of whack-a-mole, just trying to figure out, how do you prioritize multiple objectives?”

Embracing AI means making fewer trade-offs and scaling up a team’s capabilities more effectively, she added.

Repackaging content

Hildebrandt touched on more personalized experiences built around short-form video on Fox’s newly launched Fox One streaming service. She said the videos are built in an AI pipeline that repackages all of the company’s linear content into vertical short-form videos.

This system allows these shorts to more quickly tap into a viral moment in a match or a game-winning touchdown. Or they can be created based on conceptual ideas, like a rundown of the biggest comebacks over the weekend, rather than simply being organized by teams or players, she said.

Another example is if you come into a show or game late, AI will be able to quickly assemble a recap video to get you up to speed, she added.

The next phase, she added, was using AI to identify key moments in the game that aren’t as obvious as a touchdown or play, like a spontaneous moment between players that goes viral, and then quickly repackaging that as a short video to be sent out to audiences. 

“It’s not just how you merchandise existing content and bring the most relevant stuff to consumers, but how do you actually cut, repackage, reshape that content, and then how do you enrich it to bring entirely new consumer experiences?” she said. “That’s an exciting new frontier.”

A post-SEO world

We’re quickly moving away from a search engine-driven experience on the internet, where answers are now summarized by AI. Down the line, these queries will be handled by bots talking to other bots, with humans potentially having even less exposure to the search process.

“I think the top line principle is we recognize that the internet, increasingly, the vast majority of traffic is going to be bots, not humans,” Hildebrandt said. “So basically the internet is being rewritten right now.”

That potential future already has companies thinking about how to remain relevant. 

“We’ve been so old school search focused for so long, it’s such an efficient driver of conversion that it actually kind of flips it back to where you have to be more top of funnel, and you have to think about like, how do I get people talking about my brand?” Parlapiano said. 

Hildebrandt added that the shift also means thinking about how to optimize online content for bots instead of humans, and that media companies, creators and publishers will need to fight to ensure their content is appropriately represented in the new search reality.

Authenticity matters

In the age of AI, having an authentic voice makes even more of a difference. 

“We hear it all the time, that word, people want authentic connections,” Parlapiano said. “They want to hear authentic stories.”

When so many tasks and ideas are being generated by bots, she added, there’s a bigger need to trust where you’re getting your news and content from.

Neon Pulls ‘Together’ From Chinese Theaters After Unauthorized Censorship of Gay Wedding Scene https://www.thewrap.com/neon-pulls-together-chinese-theaters-gay-wedding-scene-censorship/ Wed, 24 Sep 2025 20:25:34 +0000 https://www.thewrap.com/?p=7850391 A key moment involving a videotape of a gay wedding was altered with AI by Chinese distributor Hishow

The post Neon Pulls ‘Together’ From Chinese Theaters After Unauthorized Censorship of Gay Wedding Scene appeared first on TheWrap.

Neon’s body horror film “Together” has been pulled from theaters in China after the indie studio discovered that local distributor Hishow made unauthorized alterations to a key scene involving a videotape of a gay wedding.

The film stars Dave Franco and Alison Brie as a couple who find themselves forced to fuse together into one person after moving to a town with a cult secret. The altered scene is a key one in the film’s plot, as Brie’s character discovers that a person they met in the town has undergone the fusion process with his husband.

But thanks to AI-generated alterations made by Hishow, the version seen in China makes the fused couple a heterosexual one, replacing one of the men in the videotape of the wedding and subsequent fusion with a woman.

Neon bought “Together” at the Sundance Film Festival for $15 million and sold Chinese distribution rights to Hishow as part of the film’s foreign sales process. Hishow pulled the film from theaters at Neon’s request.

“Neon does not approve of Hishow’s unauthorized edit of the film and have demanded they cease distributing this altered version,” Neon said in a statement.

This is not the first time American films have faced LGBTQ+ censorship in China. In the 2022 “Harry Potter” spinoff “Fantastic Beasts: The Secrets of Dumbledore,” Warner Bros. removed two lines of dialogue between Jude Law and Mads Mikkelsen that alluded to a past romantic relationship between their characters, Albus Dumbledore and Gellert Grindelwald.

Three minutes of scenes in the 2018 Freddie Mercury biopic “Bohemian Rhapsody” were also cut from the Chinese version to remove any allusions to the Queen vocalist’s coming out as a gay man and his eventual death from AIDS.


Google TV Adds Gemini AI To Make Your Television More Conversational https://www.thewrap.com/google-tv-adds-gemini-ai-to-make-your-television-more-conversational/ Mon, 22 Sep 2025 17:38:05 +0000 https://www.thewrap.com/?p=7847500 Google wants you treating your TV like you would talk to a smart speaker

The post Google TV Adds Gemini AI To Make Your Television More Conversational appeared first on TheWrap.

If you and your group of friends are having trouble figuring out what to watch on TV, Google’s upgraded Gemini AI-powered assistant might be able to help.

Rather than simply looking up what's playing now or calling out a specific title, the AI is smart enough to understand multifaceted and even vague requests and offer recommendations. You can ask it to find something for you and your friends, noting your different preferences, and the assistant figures out a choice you can (theoretically) agree on.

Or you can ask for something vague like “What’s that new hospital drama that everyone’s talking about?” and it will show a few recommendations like “The Pitt.”

The new version of the assistant rolled out on the TCL QM9K with Google TV on Monday, but the company plans to expand this to additional newer TVs, and next year start to add the capabilities to some older televisions.

The addition of Gemini to Google TV offers an early glimpse into how AI will change the television viewing experience, with the hope the interactions with your smart TV will be more conversational. It’s also just the latest way Google, which has invested billions of dollars into AI development, is bringing the technology to different facets of our lives.

“This is a natural stepping stone to provide a more natural interface to talk to your TV, not only to find out what to watch, but to explore things in different ways,” said Shalini Govil-Pai, vice president of TV at Google. “What we’ve launched with TCL is where we think the industry is going.”

The new TCL TV will have far-field microphones built in, as well as sensors that detect whether you’re around. Those allow you to talk directly to the TV without holding a remote control, mimicking the kind of experience people are used to when talking to a smart speaker.

Indeed, Govil-Pai doesn’t expect you to just ask your TV about what shows and movies to watch, but pose more general questions like the distance between the Earth and the moon. She said that the Gemini-powered TV would not only answer the question with a spoken answer, but also call up YouTube videos to help you dive further into the topic.

The idea of turning a smart TV into the next smart hub, similar to a speaker in the kitchen, has been around for a while. But Govil-Pai believes the addition of a generative AI-powered assistant opens the door to more types of conversations with your TV.

Beyond answering “knowledge inquiries” and recommending shows, Govil-Pai said she envisions the TV offering shopping and travel content and ways to make purchases tied to the programs you watch.

Before the end of the year, the Gemini capability will come to devices like Google’s own Google TV Streamer, Walmart onn televisions and more sets from TCL and Hisense.


What Happened to Lionsgate’s Splashy Plan to Make AI Movies With Runway? It’s Complicated | Exclusive https://www.thewrap.com/lionsgate-runway-ai-deal-ip-model-concerns/ Mon, 22 Sep 2025 13:00:00 +0000 https://www.thewrap.com/?p=7845141 The agreement with the AI startup serves as a cautionary tale of the pitfalls of embracing a technology too early

The post What Happened to Lionsgate’s Splashy Plan to Make AI Movies With Runway? It’s Complicated | Exclusive appeared first on TheWrap.

A year ago, Lionsgate and the artificial intelligence startup Runway unveiled a groundbreaking partnership to train an AI model on the studio's library of films, with the ultimate goal of creating shows and movies using AI.

But that partnership hit some early snags. It turns out utilizing AI is harder than it sounds.  

Over the last 12 months, the deal has encountered unforeseen complications, from the limited capabilities that come from using just Runway’s AI model to copyright concerns over Lionsgate’s own library and the potential ancillary rights of actors.

Those problems run counter to the big promises made by Lionsgate both at the time of the deal and in recent months. “Runway is a visionary, best-in-class partner who will help us utilize AI to develop cutting edge, capital efficient content creation opportunities,” Lionsgate Vice Chairman Michael Burns said in its announcement with Runway a year ago. Last month, he bragged to New York magazine’s Vulture that he could use AI to remake one of its action franchises (an allusion to “John Wick”) into a PG-13 anime. “Three hours later, I’ll have the movie.”

The reality is that a single custom model powered by the limited Lionsgate catalog isn't enough to create those kinds of large-scale projects, according to two people familiar with the situation. It's not that there was anything wrong with Runway's model; the data set simply wasn't sufficient for the ambitious projects they were shooting for.

“The Lionsgate catalog is too small to create a model,” said a person familiar with the situation. “In fact, the Disney catalog is too small to create a model.”

On paper, the deal made a lot of sense. Lionsgate would jump out of the gate with an AI partnership at a time when other media companies were still trying to figure out the technology. Runway, meanwhile, would get around the thorny IP licensing debate and potentially create a model for future studio clients. The partnership opened the door to the idea that a specifically tuned AI model could eventually create a fully formed trailer — or even scenes from a movie — based on nothing but the right code. 

The challenges facing both Lionsgate and Runway offer a cautionary tale of the risks that come from jumping on the AI hype train too early. It’s a story that’s playing out in a number of different industries, from McDonald’s backing away from an early test of a generative AI-based drive-thru order system to Swedish financial tech firm Klarna slashing its work force in favor of AI, only to backpedal and hire back some of those same employees (Klarna later clarified it hired two staffers back).

It’s also a lesson that Hollywood is learning as more studios quietly embrace AI, even if it’s in fits and starts. Netflix co-CEO Ted Sarandos in July revealed on an investor call that for the first time, his company used generative AI on the Argentinian sci-fi series “The Eternaut,” which was released in April. But when actress Natasha Lyonne said her directorial debut would be an animated film that embraced AI, she was bombarded with criticism on social media. 

Then there’s the thorny issue of copyright protections, both for talent involved with the films being used to train those AI models, and for the content being generated on the other end. The inherent legal ambiguity of AI work likely has studio lawyers urging caution as the boundaries of what can legally be done with the technology are still being established.

“In the movie and television industry, each production will have a variety of interested rights holders,” said Ray Seilie, attorney at Kinsella Holley Iser Kump Steinsapir LLP. “Now that there’s this tech where you can create an AI video of an actor saying something they did not say, that kind of right gets very thorny.” 

A Lionsgate spokesman said it's still pursuing AI initiatives on "several fronts as planned" and noted that its deal with Runway isn't exclusive. The studio also says it is planning to use both Runway's tools and those developed by other AI companies to streamline preproduction and postproduction processes for multiple film and TV projects, though it did not specify which projects or how the tools would be used.

A spokesman for Runway didn’t respond to a request for comment. 

Limitations of going solo

Under the agreement announced a year ago, Lionsgate would hand over its library to Runway, which would use all of that valuable IP to train its model. The key is the proprietary nature of this partnership; the custom model would be a variant of Runway's core video generation model trained on Lionsgate's assets, but would be accessible only to the studio itself.

In other words, another company couldn't tap into this specially trained model to create its own AI-generated video. 

But relying on just Lionsgate assets wasn’t enough to adequately train the model, according to a person familiar with the situation. Another AI expert with knowledge of its current use in film production also said that any bespoke model built around any single studio’s library will have limits as to what it can feasibly do to cut down a project’s timeline and costs.

“To use any generative AI models in all the thousands of potential outputs and versions and scenes and ways that a production might need, you need as much data as possible for it to understand context and then to render the right frames, human musculature, physics, lighting and other elements of any given shot,” the expert said.

But even models with access to vastly larger amounts of video and audio material than Lionsgate and Runway’s model are facing roadblocks. Take Veo 3, a generative AI model developed by Google that allows users to create eight-second clips with a simple prompt. That model has pulled, along with other pieces of media, the entire 20-year archive of YouTube into its data set, far greater than the 20,000+ film and TV titles in Lionsgate’s library.

“Google claims that data set is clean because of YouTube’s end-user license agreement. That’s a battle that’s going to be played out in the courts for a while,” the AI expert said. “But even with their vast data sets, they are struggling to render human physics like lip sync and musculature consistently.”

Nowadays, studios are learning that no single model is enough to meet the needs of filmmakers because each model has its own specific strengths and weaknesses. One might be good at generating realistic facial expressions, while another might be good at visual effects or creating convincing crowds.

“To create a full professional workflow, you need more than just one model; you need an ecosystem,” said Jonathan Yunger, CEO of Arcana Labs, which created the first AI-generated short film and whose platform works with many AI tools like Luma AI, Kling and, yes, Runway. Yunger didn’t comment on the Lionsgate-Runway deal, but talked generally about the practical benefits of working with different AI models. 

Likewise, there’s Adobe’s Firefly, another platform that’s catering to the entertainment industry. On Thursday, Adobe announced it would be the first to support Luma AI’s newest model, Ray3, an update that’s indicative of how quickly the industry is iterating. Like Arcana Labs, Firefly supports a host of models from the likes of Google and OpenAI.

While Lionsgate said the partnership isn't exclusive, offering its valuable film library to Runway alone effectively limits what the studio can do with other AI models, since those models don't get the benefit of its catalog. 

Screenshot of the short film “Echo Hunter,” starring Breckin Meyer and produced by Arcana Labs as a proof of concept.

Even Arcana Labs, which created the AI-generated short film "Echo Hunter" as a proof of concept using its multi-model platform, faced some limitations with what AI can currently do. Yunger noted that even if you're using models trained on people, you still lose a bit of the performance, and reiterated the importance of actors and other creatives for any project.

For now, Yunger said that using AI to do things like tweaking backgrounds or creating custom models of specific sets — smaller details that traditionally would take a lot of time and money to replicate physically — is the most effective way to apply the technology. But even in that process, he recommended working with a platform that can utilize multiple AI models rather than just one. 

Legally ambiguous

Generative AI and what exactly can be used to train a model occupies a gray legal zone, with small armies of lawyers duking it out in various courtrooms around the country. On Tuesday, Walt Disney, NBCUniversal and Warner Bros. Discovery sued Chinese AI firm MiniMax for copyright infringement, just the latest in a series of lawsuits filed by media companies against AI startups.

Then there was the court ruling that argued AI company Anthropic was able to train its model on books it purchased, providing a potential loophole that gets around the need to sign broader licensing deals with the original publishers — a case that could potentially be applied to other forms of media. 

“There will be a lot of litigation in the near future to decide whether the copyright alone is enough to give AI companies the right to use that content in their training model,” Seilie said.

Another gray area is whether Lionsgate even has full rights over its own films, and whether there may be ancillary rights that need to be settled with actors, writers or even directors for specific elements of those films, such as likeness or even specific facial features. 

Keanu Reeves might want to have a say on whether his face would be used to train an AI model despite Lionsgate’s ownership of the “John Wick” franchise. (Lionsgate)

Seilie said there’s likely a tug-of-war going on at various studios about how far they’re able to go, with lawyers erring on the side of caution and “seeking permission rather than forgiveness.”

Jacob Noti-Victor, professor at Cardozo Law School, said he was surprised by Burns’ comment in the Vulture article. 

The professor said that depending on the nature of such a film and how much human involvement is in its making, it might not be subject to copyright protection. The U.S. Copyright Office warned as much in a report published in February, saying that creators would have to prove that a substantial amount of human work was used to create a project outside of an AI prompt in order to qualify for copyright protection.

“I think the studios would be leaning on the fact that they would own the IP that the AI is adapting from, but the work itself wouldn’t have full copyright protection,” he said. “Just putting in a prompt like that executive said would lead to a Swiss cheese copyright.”


‘The Wizard of Oz’ Goes Fully Immersive in Vegas Thanks to AI and VFX https://www.thewrap.com/the-wizard-of-oz-at-sphere-vegas-artificial-intelligence-controversy-report/ Wed, 27 Aug 2025 13:00:00 +0000 https://www.thewrap.com/?p=7827027 "The Wizard of Oz at Sphere" gives the yellow brick road a technological upgrade. But is it still the same cinematic classic?

The post ‘The Wizard of Oz’ Goes Fully Immersive in Vegas Thanks to AI and VFX appeared first on TheWrap.

The Yellow Brick Road just got longer.

On Thursday, the Las Vegas Sphere will premiere a new version of Victor Fleming’s iconic 1939 Americana film “The Wizard of Oz” with some major adjustments. The story is shorter, the visual quality is upscaled — and digital effects and AI have been used to expand the movie for the Sphere’s unique wraparound screen.

“It’s an experiential film,” producer Jane Rosenthal told TheWrap. “But it’s ‘The Wizard of Oz,’ first and foremost.”

This new take on “The Wizard of Oz” finds itself at the center of much debate among moviegoers and industry figures. It arrives at a precarious time in Hollywood, one where any use of artificial intelligence, particularly as it concerns generating new images and performances, is closely examined with a concerned eye. This project itself already endured criticism after early footage aired on “CBS Sunday Morning” last month.

They are concerns not fully shared by Ben Grossmann, an Oscar-winning VFX artist and CEO of Magnopus, a tech company that uses digital tools to create immersive experiences such as those found at Sphere. Grossmann and Magnopus worked on the film, with Grossmann overseeing effects on the project.

“I see people ranting on the internet about, you know, AI is stealing artists’ work, and things like that,” Grossmann told TheWrap. “I don’t think you understand how artists work, because for 30 years, I’ve been working for famous Hollywood filmmakers who come to me with another artist’s work and say, ‘See this picture from this movie? We want to do something like that.’”

Several times, the creators of “The Wizard of Oz at Sphere” emphasized to TheWrap that you have to see the film to write about it, calling it a gargantuan experience unlike any other in cinema. Beyond the 16K-resolution expanded screen of 160,000 square feet, the presentation uses Sphere’s atmospheric capabilities, including surround sound and 4D environmental control, to give the film a transportive quality — not unlike early audiences being immersed in the Technicolor world of Oz.

TheWrap did not screen the film ahead of Thursday’s premiere.

A technological safe haven

The unveiling of “The Wizard of Oz at Sphere” was not met with broad approval online. After the aforementioned “CBS Sunday Morning” segment aired, one user took to X to mock the updated graphics that look “terrible in motion.” Film critic Courtney Howard pointed out that the show’s high price point (with tickets costing more than $100) would prevent most people from truly “discovering” the nearly century-old classic in this format.

“Very strange part of the ‘Wizard of Oz Sphere’ thing is that it’s roughly 95% headroom,” one user said on X. “The AI extended screen bit covers most everything, the action still seems confined to the bottom middle of the screen.” (When talking about the translation of close-ups from the original film to Sphere, Grossmann asked, “Do you want to look at a close-up of a 200-foot-tall face?”)

Grossmann insisted this is what the filmmakers of “Oz” would do today if given the capabilities.

“The original movie already exists. We don’t have to improve it,” he said. “We don’t need to fix it. There’s nothing broken about it. It still exists. You can go watch it at home, you can watch it in a movie theater.”

Grossmann said the process started with his crew attempting to recreate the look of “The Wizard of Oz” as closely as possible, adopting the matte painting aesthetic to fit the gargantuan space of Sphere. Eventually, they instead sought to find a middle ground between recreation and innovation.

“We’re trying to create something new that puts you inside the movie. So why are we trying to faithfully reproduce things that were not intentional creative choices? They were actually technological limitations. Why don’t we do what the filmmakers would’ve done if they were here today and see what that looks like?”

Grossmann said he wanted the “Wizard of Oz” experience at the Sphere to be a technological safe haven, one where artists and engineers were free to experiment with new instruments to create an experience solely suited for this venue. The VFX artist, who won an Oscar for Martin Scorsese’s “Hugo” and an Emmy for “The Triangle,” likened AI to a paintbrush, an evolution of the old ways.

“I spend more time creating and less time in the technicalities of shoving pixels around,” Grossmann said.

Judy Garland in “The Wizard of Oz” (Everett Collection)

Grossmann spoke of “The Wizard of Oz” with reverence. He and the Sphere team went through vast archives of matte paintings and production notes, trying to render the film for the gargantuan format with as much information as possible. Grossmann noted that the experience, with pixel resolution higher than the human eye can see, is meant to feel as if you are in Oz rather than simply watching it on a screen.

Big changes and big reactions

The expansion came with some changes. Listed on Sphere’s website at only 75 minutes, this version of “The Wizard of Oz” is nearly a half hour shorter than the original. Film grain has been digitally removed for smoother visuals (Grossmann called film grain an “obligation” of older movies, asking, “Are your eyes grainy? Because we’re creating an experience that’s more like the human eye than a film stock from 1939”).

In some newly expanded shots, characters who were once off-screen must now be visible to audience members, such as a moment where Dorothy is on camera and her uncle exits the frame. Sphere used AI modeling to reinsert these characters into the expanded frames, drawing on other scenes featuring their performances to fill in what they would be doing off-screen in the original cut.

“The Wizard of Oz at Sphere” utilizes artificial intelligence to create off-screen actions of Uncle Henry for the venue’s expansive screen.

Like a traditional blockbuster, it’s a project that took years, with thousands of collaborators. It also had the blessing of Warner Bros. Discovery CEO David Zaslav, who on his earnings call earlier this month praised the upcoming showing as “very innovative.” That’s not a big surprise; the Vegas presentation could drive traditional rental and purchase of the original.

One prominent cinematographer who spoke to TheWrap said the new tech-driven version may infuse some excitement into going to the movies.

“There is a real danger to the movie-going experience in terms of the modern cultural connection to that community experience that is really in a precarious place,” the cinematographer, who didn’t wish to be named, said. “The fact that people are going to have that community experience is a really important and good thing.”

But the new tech’s changes to “The Wizard of Oz” drew online ire when they were featured in a July “CBS Sunday Morning” segment previewing the event.

TCM host Ben Mankiewicz defended the film, saying it would bring “The Wizard of Oz” to new audiences. But the comments drew backlash, with social media users questioning how someone committed to the preservation of film could sign off on the project. Mankiewicz later took to X to say, “I have an advantage. I’ve seen it. It’s ‘The Wizard of Oz.’”

Rosenthal, an Academy Award-nominated producer and longtime collaborator with Robert De Niro, said she has her own concerns with making sure individuals own their images. However, she emphasized that Sphere is using “AI for good” through this production.

“AI has become such a buzzword in the industry … It’s really a unique way we’re using it in terms of the respect that we have for the film,” she said. “Any of the stuff that was written, it’s like, hater’s gonna hate. What am I going to do? You haven’t seen it.”

In preparation for Thursday’s premiere, Rosenthal said she watched the film at least six times in the span of 48 hours. She’s excited to see it anew with audiences for the first time.

“(‘The Wizard of Oz’) spans generations,” she said. “I could watch it with my daughters, and I could watch it with my mother. And now this weekend, my mother and my daughters and I will all sit in the Sphere and watch it together.”

