Penn Wharton Budget Model

Higher Education

Provides budgetary projections and economic analysis of major U.S. legislation without advocacy.

About us

PWBM provides fiscal budget projections and economic analysis of major U.S. legislation without advocacy.

Website
https://budgetmodel.wharton.upenn.edu/
Industry
Higher Education
Company size
11-50 employees
Headquarters
Philadelphia
Type
Educational
Founded
2017

Updates

  • The impact of AI on labor markets is still very much a work in progress.

    Kent Smetters

    The Boettner Chair Professor, Wharton | Applied Math and Computational Science Group, Penn | Faculty Director at Penn Wharton Budget Model | Fields: applied theory, public macroeconomics, computational economics

    What is the impact of AI on labor markets? At the Penn Wharton Budget Model, we've examined how AI can affect economic growth, future deficits, and the finances of a large program like Social Security. Based on past innovations, we model AI as more labor-augmenting than labor-replacing. A legitimate question is why we don't focus more on specific labor-market outcomes, including job losses by exposed sector and the impact on wages across the income distribution. The reason is simple: the current data do not yet support a clear labor-market conclusion on their own. We first need better theoretical modeling and more experience with AI.

    That assessment might surprise people lining up on both sides of this issue. On one hand, you might have seen a graph showing that job postings fell suddenly after ChatGPT was introduced, and layoffs at Amazon and other technology companies are making waves. But that evidence is not very useful: much of the change has nothing to do with AI. Many companies are laying off workers in response to macroeconomic conditions and earlier COVID-era hiring. Companies naturally want to project gains from AI to their investors, but other factors are more important.

    On the other hand, some recent studies appear to show little impact of AI on employment, often using variation in reported AI use by sector. Some tech companies love that message because it makes them seem less villainous. But much of that evidence already selects in favor of labor-augmenting change rather than labor-replacing change (survivor bias). For example, the call centers that remain are not using nearly as much AI as those that were already replaced by AI, and that is no accident.

    The truth is that data, even a lot of it, cannot be interpreted without serious theoretical modeling that helps us construct the right tests and interpret the results. In fact, the very fact that macro effects seem to be contaminating micro conclusions should give some pause: the analysis is likely poorly identified. The good news is that stronger theoretical work is now being done that lays the foundations for future empirical work.

    In case you want to read more:
    • Automation and Polarization: https://lnkd.in/evqcbyku
    • Artificial Intelligence in the Knowledge Economy: https://lnkd.in/ej8xHehh
    Other studies referenced above:
    • The Long-Term Outlook for Social Security: Baseline and Alternative Assumptions: https://lnkd.in/eC3F6PPM
    • The Projected Impact of Generative AI on Future Productivity Growth: https://lnkd.in/e_Aw2Yee
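    The distinction between labor-augmenting and labor-replacing technology can be made concrete with a standard production-function sketch. This is an illustrative textbook formulation of my own, not PWBM's published specification:

    ```latex
    % Labor-augmenting (Harrod-neutral) view: AI raises effective labor A_t L_t,
    % so output and wages per worker can rise while labor remains essential.
    \[ Y_t = F\!\left(K_t,\; A_t L_t\right) \]

    % Labor-replacing view: AI capital K^{AI}_t competes with labor for the same
    % tasks, e.g. a CES aggregator with elasticity of substitution \sigma > 1,
    % so cheaper AI capital shifts tasks away from workers.
    \[ Y_t = \left[ \alpha \left(K^{AI}_t\right)^{\frac{\sigma-1}{\sigma}}
             + (1-\alpha)\, L_t^{\frac{\sigma-1}{\sigma}} \right]^{\frac{\sigma}{\sigma-1}} \]
    ```

    Which specification the data favor matters: under the first, gains in A_t tend to raise wages along with output, while under the second a high enough elasticity of substitution lets AI capital crowd labor out.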

  • Some thoughts on managing context across large code bases.

    Kent Smetters

    Anthropic recently launched Skills. While they're not "MCP killers" as some claim, Skills are a key part of the context layer and can be used strategically to keep costs down. Here's what I found actually works (and what's overkill) when projects and teams get complex.

    MCP servers: Think of these as broad, unified context-management systems at the enterprise level that work across apps, AI models, and platforms. For example, if you want a single source of truth for JS/TS, Python, etc., then the Context7 MCP might be the way. At Penn Wharton Budget Model, we don't use a monorepo across the projects that form our integrated platform, so having a way to ensure consistency, including with external resources, across many active project repos became essential. We built an MCP server for our own use to keep these standards in sync.

    Skills (Claude Code and web): Skills work across different project repos but with a tighter focus. They automate business logic, formatting, reporting, and internal procedures, all inside Claude's ecosystem. They shine at standardizing tasks like reports or meeting analysis when multiple projects rely on the same AI setup. However, Skills remain internal to Claude: they're portable across projects, but not deeply integrated with external systems.

    Rules (Cursor, Claude Code, and similar): These enforce requirements at the project level. For example, I'm strict about unit testing: sometimes the rule is "no mocks, patches, skipTest, or wishful asserts; don't hide bugs." In other projects, limited mocking of expensive resources is fine if core systems get real tests. So I want testing rules to vary by project. Trying to pack too much into a universal testing ruleset (such as a Skill) dilutes quality and leaves too much discretion to the AI model.

    Rules can be combined with Skills and MCP; use them all at different context layers. And be savvy about cost: work backwards, hitting Rules and Skills first to minimize token usage, and keep those instructions lean to avoid overinvesting in broad MCP management that is overkill for narrow solutions. For example, I might run a full Context7 MCP session when upgrading libraries in a common requirements.txt file that affects many files, but I am not spending 20K tokens to update a few lines of code where Skills and Rules are often enough. https://lnkd.in/epknBpXg
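    As a concrete illustration of the MCP layer described above, here is a minimal sketch of a server that serves shared coding standards to any MCP-capable client. It uses the FastMCP helper from the official MCP Python SDK; the tool name and the standards file are hypothetical placeholders, and this is not PWBM's actual server:

    ```python
    # Minimal MCP server sketch (illustrative; not PWBM's implementation).
    # Requires the official MCP Python SDK:  pip install "mcp[cli]"
    import json
    from pathlib import Path

    from mcp.server.fastmcp import FastMCP

    # Hypothetical single source of truth, maintained once and read by every repo.
    STANDARDS_FILE = Path("standards/coding_standards.json")

    mcp = FastMCP("org-standards")  # server name shown to MCP clients


    @mcp.tool()
    def get_standard(topic: str) -> str:
        """Return the organization's standard for a topic, e.g. 'python-testing'."""
        standards = json.loads(STANDARDS_FILE.read_text())
        return standards.get(topic, f"No standard recorded for '{topic}'.")


    if __name__ == "__main__":
        # Default stdio transport lets clients such as Claude Code or Cursor
        # launch the server locally and call its tools.
        mcp.run()
    ```

    Clients then call get_standard instead of each repo carrying its own copy of the rules, which is the cross-repo consistency benefit described above.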

  • Economics of AI coding "last mile"

    Kent Smetters

    Cursor AI's big update shifts the economics of the AI coding "last mile". Cursor just launched Composer 1.0, a major update that allows its coding agent to operate without depending directly on Anthropic or OpenAI APIs, at least in its default autonomous mode. This is a significant strategic shift. Until now, most AI engineering copilots relied on large foundation models from the big players, effectively renting intelligence by the token. Cursor's move toward independence could mark the beginning of a new phase: one where product companies optimize, fine-tune, or even train their own runtime systems for tighter control and lower marginal costs.

    In economic terms, this changes the "last mile" of AI delivery: the interface between model intelligence and user productivity. Ownership of that layer could unlock new defensibility, better margins, and faster iteration. If Composer 1.0 performs competitively while reducing reliance on external APIs, it will pressure others to develop their own inference layers too. The next wave of AI might not just be about who builds the biggest models, but about who designs the smartest workflows around them. I fully expect Windsurf AI to follow. https://lnkd.in/egnFvGkB

  • U.S. federal debt is rising rapidly and is now roughly the same size as the country's GDP. What does that mean for families, the economy, and the Philadelphia region? Penn Wharton Budget Model (PWBM), The Concord Coalition, and Concord Action invite you to The Philadelphia Citizens Debt Forum, where experts will discuss America's fiscal outlook and how it will affect the Philadelphia region and the economy.

    📅 Monday, November 3
    🕕 6:00–7:30 PM
    📍 Montgomery Auditorium, Free Library of Philadelphia (1901 Vine Street)
    👉 Register here: https://lnkd.in/eM8UMv_B

    Speakers include:
    • Carolyn Bourdeaux – Executive Director, The Concord Coalition; Former Member of Congress
    • Patrick Harker – Rowan Distinguished Professor, Wharton; Former President & CEO, Federal Reserve Bank of Philadelphia
    • Kent Smetters – Faculty Director, Penn Wharton Budget Model

    Join to explore the policies that are shaping the nation's fiscal path and what they mean for our economic future.

  • Economic implications if the Supreme Court reverses the IEEPA tariffs.

    Kent Smetters

    What are the potential economic implications if the Supreme Court strikes down the IEEPA tariffs and orders a refund? Here are the data and details.

    As of yesterday (10/28/2025), the government had collected approximately $222 billion in tariff-related revenue in 2025. However, an estimated $85 billion of that would have been collected under the tariffs in place before the second Trump administration, and those previous tariffs are not central to the current litigation. If the Supreme Court mandated a refund, it would total around $137 billion as of yesterday. Looking ahead, over the next decade, the government would forgo approximately $2.8 trillion in additional revenue.

    However, removing the tariffs would be pro-growth even with the loss of revenue that would otherwise be used to pay down some government debt. The reason is that tariffs reduce economic activity more than other ways of raising the same amount of revenue. Unlike in the undergraduate endowment model of trade, many U.S. imports (40 percent) are "intermediate goods" that lower the cost of U.S. production. So tariffs raise those production costs and reduce capital inflows into the United States, including purchases of federal government debt over the long term. That is why the dollar has fallen in value, unlike in the undergraduate model, where the dollar should have appreciated after the new tariffs.

    Put differently, if the new tariffs are set aside by the Supreme Court, the government would have to float more debt in the future, but the pool of buyers would also be larger, allowing the government to sell that debt at a higher price and pay a lower return relative to the baseline with tariffs. The price effect easily dominates the level effect in our calculations.

    Sources:
    - Real-time Federal Budget Tracker: https://lnkd.in/eBMsXFQf
    - Tariff Revenue Simulator: https://lnkd.in/enmRJf_c
    - Debt, Tariffs, and Capital Markets in a Dynamic Setting: https://lnkd.in/ef68g46k
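    For reference, the refund figure above is just the difference between total 2025 collections to date and the portion attributable to pre-existing tariffs:

    ```latex
    \[
    \underbrace{\$222\text{B}}_{\text{collected in 2025}}
    \;-\; \underbrace{\$85\text{B}}_{\text{pre-2025 tariff baseline}}
    \;\approx\; \$137\text{B} \quad \text{(potential refund)}
    \]
    ```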

  • Using game theory to analyze a ban on AI superintelligence research.

    Kent Smetters

    I understand why many people are worried about the potential rise of AI superintelligence, including the 850 experts who signed a petition arguing for a pause in model development. The mechanisms behind these models remain poorly understood, and discussions about AI's future often become abstract or metaphysical. However, game theory offers a clear answer: do NOT pause.

    The nuclear arms race involved only a few players with massive industrial footprints, which offered some hope of detection and inspection. But AI development can be done secretly by many more actors. Even large computing facilities don't reveal which models they are running, whether beneficial ones like protein folding or potentially harmful ones. Pausing development would only hold back responsible researchers while creating opportunities for others to step in unchecked.

    Since verification is nearly impossible, the best strategy for responsible developers is to create superior models that outmatch those built by malicious actors. Build faster and build better: stay one step ahead, with good models designed to detect and counter the bad. It's the only rational move. https://lnkd.in/eU8DeuPd
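    The arms-race logic can be shown with a stylized two-player game. The payoff numbers below are illustrative assumptions of my own, not estimates from the post: a responsible lab and an unverifiable rival each choose to pause or to keep developing, and because compliance cannot be observed or enforced, developing strictly dominates pausing for both players.

    ```latex
    % Rows: responsible lab; columns: unverifiable rival.
    % Payoffs are (lab, rival); the numbers are illustrative assumptions only.
    \[
    \begin{array}{c|cc}
                        & \text{Rival pauses} & \text{Rival develops} \\ \hline
    \text{Lab pauses}   & (2,\,2)             & (-3,\,4)              \\
    \text{Lab develops} & (4,\,-1)            & (1,\,1)
    \end{array}
    \]
    ```

    The unique equilibrium is mutual development, even though mutual pausing yields higher joint payoffs; only credible verification and enforcement, which the post argues are absent for AI, could remove the off-diagonal temptations and sustain a pause.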

  • PWBM has produced up-to-date estimates of customs revenue and effective tariff rates through July 2025, based on updated trade and tariff data recently released by the USITC.

    Key Points:
    • New tariffs have raised $80.3 billion in revenue between January 2025 and July 2025, before accounting for income and payroll tax offsets.
    • The average effective tariff rate increased to 9.75 percent in July, up from 2.2 percent in January.
    • Among major trading partners, China faces the highest tariffs, with effective rates reaching 40 percent in July.
    • Steel and aluminum products are the most heavily tariffed product category at 41.2 percent, followed by automotive vehicles at 22.3 percent.

    See details here: https://lnkd.in/eSpgj38w
    For forward-looking analysis, including long-term revenue and effective tariff rate projections, see our tariff simulator: https://lnkd.in/eKMJJntx
    For real-time data on daily tariff revenue collections, see our Real-Time Federal Budget Tracker: https://lnkd.in/eWAmiEdr
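    For readers unfamiliar with the metric, an average effective tariff rate is conventionally computed as duties collected divided by the value of imports over the same period; PWBM's exact methodology is described in the linked brief, so treat this as the textbook definition rather than their precise formula:

    ```latex
    \[
    \text{average effective tariff rate}
    \;=\; \frac{\text{customs duties collected}}{\text{total value of imports}} \times 100\%
    \]
    ```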

  • Insights from PWBM on tariffs for the WSJ:

  • PWBM estimates that the COVID-era Employee Retention Credit (ERC) will have cost more than $300 billion when the IRS finishes processing claims later in 2025, nearly four times the initial projected cost. Most of the ERC was paid retroactively, well after pandemic-related economic disruptions had ended, limiting its effectiveness as a worker retention incentive.

    Key Points:
    • The Joint Committee on Taxation (JCT) initially projected that the ERC would cost $78 billion, but costs came in much higher because of an extended claiming period and a flood of questionable claims, often encouraged by third-party firms. PWBM estimates that, as of September 2023, the ERC was on pace to reach a final cumulative cost of $567 billion.
    • The IRS announced a moratorium on processing ERC claims in September 2023 and did not resume processing until August 2024. During the moratorium, the IRS examined and improved its procedures to identify improper claims.
    • With the IRS’ enhanced ERC enforcement efforts, PWBM now projects that the ERC will reach a final cumulative cost of $302 billion. That is nearly four times the original projected cost but only about half the pre-moratorium cost trajectory.

    Read the full analysis here: https://lnkd.in/er2b_FWN

  • We estimate that AI will increase productivity and GDP by 1.5% by 2035, nearly 3% by 2055, and 3.7% by 2075. AI's boost to annual productivity growth is strongest in the early 2030s but eventually fades, with a permanent effect of less than 0.04 percentage points due to sectoral shifts.

    Key Points:
    • We estimate that 40 percent of current GDP could be substantially affected by generative AI. Occupations around the 80th percentile of earnings are the most exposed, with around half of their work susceptible to automation by AI, on average. The highest-earning occupations are less exposed, and the lowest-earning occupations are the least exposed.
    • AI's boost to productivity growth is strongest in the early 2030s, with a peak annual contribution of 0.2 percentage points in 2032. After adoption saturates, growth reverts to trend. Because sectors that are more exposed to AI have faster trend TFP growth, sectoral shifts during the AI transition add a lasting 0.04 percentage point boost to aggregate growth.
    • Compounded, TFP and GDP levels are 1.5% higher by 2035, nearly 3% by 2055, and 3.7% by 2075, meaning that AI leads to a permanent increase in the level of economic activity.
    • Caution is required in interpreting these projections of AI's impact, which are based on limited data on AI's initial effects. Future data and developments in AI technology could lead to a significant change in these estimates.

    https://lnkd.in/g8WuPpMs
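    As a back-of-the-envelope check on how annual growth contributions map into level effects, small additions g_t to annual TFP growth compound into a level gain of roughly their sum. This arithmetic uses only the figures quoted above and is my illustration, not part of the PWBM brief:

    ```latex
    \[
    \frac{Y^{\text{AI}}_{2035}}{Y^{\text{baseline}}_{2035}}
      \;=\; \prod_{t \le 2035} \left(1 + g_t\right)
      \;\approx\; 1 + \sum_{t \le 2035} g_t ,
    \qquad \sum_{t \le 2035} g_t \approx 0.015
    \]
    ```

    A path of contributions that peaks at 0.2 percentage points in 2032 and averages roughly 0.15 percentage points per year over about a decade is consistent with the 1.5 percent level gain by 2035.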
