Editor’s note: This is AI Impact, Newsweek’s weekly newsletter exploring how business leaders are unlocking real value through artificial intelligence.
Inference Layer
By Gabriel Snyder
Lately, I've been exploring AI's future by learning how early automotive pioneers imagined their transformative technology would be used. At the dawn of the automobile, even prescient observers like Thomas Edison—who told the New York World in 1895, “the horse is doomed... It is only a question of a short time when carriages and trucks of every large city will be run by motors”—primarily envisioned fleets of automobiles as centrally owned and operated. They missed the sweeping societal changes, suburban sprawl among them, that individual car ownership would bring once technological advances enabled mass production.
Today’s hyperscalers—OpenAI, Google, Microsoft, Meta, and Anthropic—are building AI around a very specific structure: frontier models running on massive data centers accessed via the cloud. The capital expenditure required to realize this vision is staggering. OpenAI CEO Sam Altman projects “trillions of dollars over time” in infrastructure spending, and Morgan Stanley predicts global data center investment will approach $3 trillion between 2025 and 2029. As one analyst put it, “AI capex is eating the economy.”
But will this vision prevail? Consider the Electric Vehicle Company, which launched in New York City in 1897. At its peak, it operated more than 1,000 electric cabs—one of the largest automotive fleets of its time—from central hubs outfitted with lifts and pulleys that could swap out the cars’ 1,250-pound batteries in under three minutes.
Had they invented ridesharing a century before Uber? Not quite. They were simply mirroring the ownership model of the dominant technology of the time: the horse. While horsepower moved most people around, few city dwellers could afford to own one; instead, when they wanted to travel farther than their own legs would carry them, they relied on hired coaches or livery stables that rented horses.
One wonders if today’s hyperscalers are making a similar mistake. After all, serving billions of users from centralized data centers is the current model for web services, too. Yet, while cloud-centric AI dominates headlines, a quieter revolution may be unfolding in localized AI: small language models that offer economical, specialized, and personally operated alternatives to ChatGPT. It’s a possibility that echoes computing’s own evolution—from shared institutionally owned mainframes to personal desktops to smartphones.
The current AI boom is channeling trillions of dollars into a fast-evolving technology, deployed on infrastructure with 12–18-month depreciation cycles, while massive questions remain unanswered. The biggest of all may be: Will AI’s greatest impact come from frontier cloud models, or from “good enough” agents in our pockets?
Core Intelligence
A fox, a hedgehog and a large language model (LLM). It sounds like the setup to a joke, but it was the starting point for a wide-ranging conversation hosted by Marcus Weldon, Newsweek contributing editor for AI and President Emeritus of Bell Labs, with Philip Tetlock, the University of Pennsylvania psychology professor behind Superforecasting: The Art and Science of Prediction.
Tetlock has spent decades studying how people make predictions—whether they behave like hedgehogs, tied to one big idea, or like foxes, stitching together diverse perspectives. His early “dart-throwing chimpanzee” finding showed that many experts were no more accurate than chance, but his later work revealed an important distinction: Fox-like forecasters consistently outperformed their hedgehog counterparts. “The critical factor was how they thought,” Tetlock said, not their degrees, politics or access to information.
Now, Tetlock is exploring what happens when LLMs enter the forecasting mix. “It is absolutely crucial to integrate LLMs into almost all lines of inquiry,” he said, noting that models can broaden the range of scenarios considered and help apply superforecasting techniques to break complex problems into smaller, solvable parts. In tests, human forecasters who worked with LLM “advisors” improved their accuracy by as much as 41 percent.
“The clear conclusion was that just the activity of interacting with a state-of-the-art LLM was helpful to all human participants under all circumstances,” Tetlock explained. Looking ahead, he is “80 percent confident” that LLMs themselves could become superforecasters within the next five years.
You can read Weldon’s full analysis of the discussion here: Predicting the Future: the supergroup of AI, humans, hedgehogs and foxes.
Prompt Injection

Adam Bry | Chief Executive Officer, Skydio
One of the things that using AI has made me reflect on a lot more is the nature of intelligence and paying attention. At my smartest moments, what am I really doing? Our most brilliant insights aren’t that brilliant. It’s basically combining different pieces of data and applying some sequential logic to it. And I think it’s already within reach of these models. The barrier to getting to superintelligent behavior isn’t reasoning capability—that’s already there. It’s getting the data in there and building out the integrations and workflows.
Have your own lesson to share? Email us at ai.newsletter@newsweek.com
Run Log
By Adam Mills
When Hind Kraytem, a Palantir deployment strategist, needed to send a time-sensitive email to a senior executive in Japan, she faced a challenge. Drafting in English and translating often produces stilted, unnatural messages. So, she turned to a generative AI model and fed it short, tone-rich bullet points outlining what she wanted to communicate, framed around the conventions of Japanese executive emails rather than literal English phrasing.
The model generated the email directly in Japanese and included an English gloss beneath. Before sending, she ran a quick check with a Japanese colleague, who called it “perfect.” The AI not only saved her time but also ensured the message sounded natural and professional, despite the time pressure.
Kraytem’s experience highlights an important principle for working with AI: how you frame a prompt matters more than how literally you translate. By designing instructions around the target style and trusting the model to do the heavy lifting, you can produce high-quality communication quickly, then add a human check for nuance and accuracy. In other words, AI can handle the drafting, but judgment still belongs to the human.
This approach isn’t just about efficiency; it’s about respecting cultural context and tone while leveraging technology to make fast, polished communication possible.
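For readers who want to experiment with this pattern, here is a minimal sketch in Python using OpenAI’s chat completions client. Everything in it is illustrative: Kraytem’s actual tool, model and prompt wording weren’t disclosed, so the bullet points and model name below are stand-ins. What it demonstrates is the idea above—brief, tone-rich bullets plus instructions framed around the target audience’s conventions, rather than a literal translation request.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # Short, tone-rich bullet points capture intent; the prompt asks the model
    # to draft around Japanese business-email conventions, not to translate.
    bullets = (
        "- Apologize for the short notice\n"
        "- Request a 30-minute call next week to review the rollout plan\n"
        "- Express appreciation for their team's support so far"
    )

    prompt = (
        "Draft a formal business email in Japanese to a senior executive, "
        "following the conventions of Japanese executive correspondence "
        "(appropriate greeting, honorifics, indirect requests). Do not "
        f"translate literally; convey the intent of these points:\n{bullets}\n\n"
        "After the Japanese email, add an English gloss so the sender can "
        "verify the content."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

As in Kraytem’s workflow, the draft still goes to a human reviewer before it is sent; the model handles fluency, a colleague confirms nuance.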
Context Window
■ Many organizations struggle to move beyond pilot projects because AI isn’t the issue—culture, governance, and leadership alignment are. This piece explores why success depends on embedding AI into strategy, not just systems. [Newsweek]
■ A new report warns that automation is displacing early-career jobs as companies rely on AI for data analysis, scheduling and communications once handled by entry-level staff. [The Guardian]
■ A new state law requires chatbots and digital companions to clearly disclose when users are interacting with an AI system, an early step toward transparency in human-machine conversations. [The Verge]
■ Public sentiment toward generative AI is souring as creators push back against uncredited data use and consumers tire of synthetic content. Forty-three percent of Americans now say AI is more likely to harm than help. [Newsweek]
■ At the IMF and World Bank meetings, leaders warned that most countries lack the legal and ethical frameworks to manage AI safely, creating a widening global readiness gap. [Reuters]
Transfer Protocol
Chano Fernández, a former board member and executive at Eightfold.ai and Workday, is now Interim Executive Officer at Klaviyo.
Kevin Lingley, former vice president, content platform at Spotify, is now executive vice president of global AI at Fremantle.
Frank Chu, a key AI executive from Apple, has joined Meta’s Superintelligence Labs (MSL) team.
Christian Hammer, who recently joined VeraScore’s board of directors through the acquisition of Vala AI, will take on an advisory role as the company’s chief science officer for AI/ML.
Jeff Kirk, formerly vice president of AI and data at bswift, is now executive vice president of applied AI at Robots & Pencils.
Know someone on the move in AI? Send job change info to ai.newsletter@newsweek.com
Magic Moment

Ashis Barad | Chief Digital and Technology Officer at Hospital for Special Surgery
"I had family in town in New York. They just came from Texas. We were taking a train to Grand Central. I used ChatGPT’s agent mode: Nine of us, sushi, 2:30 p.m. by Chelsea Market. What’s open? What has high ratings above 4.2 on Google? Make me a reservation. Call. You know, I wanted to really try the agent mode of ChatGPT—not just write me the options, but actually see who has seats. And it did pretty good. Not perfect. It wasn’t perfect, because the restaurant I wanted to do, OpenTable had some issues. But it got me to the restaurant, got me OpenTable, got me the night, and then I had to go to OpenTable real quick and click it. And was the restaurant good? Perfect. 4.2. That was kind of cool to have an actual agent take action. Because I was in the midst of kids screaming, everybody’s yelling, everybody’s hungry. In that place, you don’t want to go to a [bad] place." 
Experience some AI magic? Tell us about it at ai.newsletter@newsweek.com