A version of this article first appeared in the “Reliable Sources” newsletter. You can sign up for free here.
If you work in the media industry or simply care about the quality of the information ecosystem we all inhabit, you need to pay attention to Sora 2.
“This feels to many of us like the ‘ChatGPT for creativity’ moment,” OpenAI CEO Sam Altman wrote in a blog post announcing the product, and many early testers agree. Sora 2 is an AI video generator, a TikTok challenger, and a social network all in one. And right now, at Altman’s urging, the feed is full of Altman “deepfakes.”
Now, Sora 2 might just be another online fad, a reality-deadening distraction that people will soon tire of. But more likely, it’s a new form of communication, turning users into the stars of AI-created mini-movies — copyright owners, professional actors and scam victims be damned.
When seeing stops believing
Sora 2, currently the No. 1 free app in Apple’s US App Store, is part of a fast-growing (and frighteningly destabilizing) phenomenon. It comes hot on the heels of Meta’s release of a new AI video feed called Vibes. And it adds to a season already full of anxiety about “AI slop.” Unreal videos are moving from a concerning sideshow to the centerpiece of our feeds.
For former WarnerMedia CEO Jason Kilar, the biggest takeaway about Sora and Vibes is that consumers are valuing AI-generated videos “like they do traditionally created good short form vids.”
For users, it’s not necessarily a question of “real” versus “fake,” but rather “fun to watch” versus “boring.” And with Sora’s “cameos,” which turn people into playable characters, your actual face is inside the artificial reality, so what’s “fake” anymore?
Stating the obvious: The implications for human-made news and info are massive. As Scott Rosenberg wrote for Axios, “Feeds, memes and slop are the building blocks of a new media world where verification vanishes, unreality dominates, everything blurs into everything else and nothing carries any informational or emotional weight.”
Making disinfo ‘extremely’ easy and real
For a brief moment in history, video was evidence of reality. Now it’s a tool for unreality.
Two teams at The New York Times tried out Sora, which is currently invite-only, and the resulting stories reflected the power and the possibility of the technology. Reporters Mike Isaac and Eli Tan led with the amazing, “jaw-dropping” creativity that Sora has unleashed before turning to the “disconcerting” aspects of the app. Tiffany Hsu, Stuart A. Thompson and Steven Lee Myers led with the app’s ability to make disinformation “extremely easy and extremely real.”
NPR’s Geoff Brumfiel noticed that OpenAI’s “guardrails” against unwelcome content “appeared to be somewhat loose around Sora. While many prompts were refused, it was possible to generate videos that support conspiracy theories. For example it was easy to create a video of what appeared to be President Richard Nixon giving a televised address telling America the moon landing was faked.”
That’s kind of the whole point — to be able to create almost anything. “Prioritize creation” is the way Altman put it. And the result, as Hayden Field wrote for The Verge, is that “it’s getting hard to tell what’s real.” Or rather, it’s getting even harder.
The era of infinite content
It’s natural to wonder what concepts like “high-quality,” “edited” and “fact-checked” will mean in a world of infinite and mostly AI-generated content. The war for attention is ferocious and yet it’s barely even started, considering what seems to be coming.
“Ever since the early days of social media and YouTube, there has been more content created than people could ever watch, so we allowed algorithms to sort through it all and tell us what we watch, and we saw what happened to the media landscape, to our attention economy,” Human Ventures co-founder Joe Marchese told Reliable Sources.

“Now we are entering an entirely new epoch where Generative AI will allow for the creation of infinite content, at ever lower costs, meaning the means of distribution, the platforms and their algorithms, are going to have to adjust again, and the business models we built, however shaky on top of the platforms and their algorithms, are about to be upended, again,” he continued.
“The future of the attention economy, that all us humans have to live in together, feels more chaotic than ever.”
Essential Sora reading
With that in mind, here are a few of this week’s best reads about Sora and the implications for all of us.
Platformer’s Casey Newton recapped “what everyone is saying about Sora.” In sum: “It’s cool. It’s scary. It’s a hit.”
Over at Business Insider, Katie Notopoulos tried to set aside all the “worries and fears” about Sora and wrote about how deliriously fun it is. It’s addictive “because it’s starring you.”
“It turns out that people may not mind AI slop as long as they can be part of it with their friends,” Alex Heath wrote in Sources. “That Meta, the most successful social media company in history, apparently did not understand this, while OpenAI did, is striking.”
“Should public figures be fair game in this game? The lawyers are going to have a field day in this brave new world,” Spyglass reporter M.G. Siegler wrote after making a video of John F. Kennedy saying that his favorite movie is the Care Bears.
“The difference with Sora 2, I think, is that OpenAI, like X’s Grok, has completely given up any pretense that this is anything other than a machine that is trained on other people’s work that it did not pay for, and that can easily recreate that work,” wrote 404 Media’s Jason Koebler.
And another big question: Will studios sue? “Talks are underway,” Winston Cho reported for The Hollywood Reporter.
Our hyper-individualized media future
Pondering the longer-term consequences of these tools can be overwhelming. Historically, people have primarily been consumers of media, not creators. What does it mean when we’re all creators first and foremost?
It’s good to be instinctively wary of AI hype artists, but some of these predictions from tech investor Greg Isenberg feel spot-on. “In 5-10 years,” he wrote, “people won’t ask ‘what’s your favorite show?’ they’ll ask ‘what’s your favorite generator?’”
We also have to ask, as the aforementioned Casey Newton does, “what happens when the majority of video we consume is not just synthetic but also highly personalized: tuned not just to our individual tastes and interests but also to our faces and voices.”
If you think today’s information bubbles divide us, wait until each of us lives in a bubble designed for one.