Economics Professor Gary Smith Warns of an AI Bubble: Hype, Trillion-Dollar Valuations, and What Comes Next

Is AI the new dot-com? Between Super Bowl buzz, a geopolitical race for dominance, and trillion-dollar stock charts that seem to only go up, it certainly feels like we’re living through a once-in-a-generation tech inflection point. But what if the foundations aren’t as solid as the stories?

On February 20, 2025, Pomona College economist Gary Smith sounded a sober alarm: the AI sector is showing classic bubble dynamics—frothy hype, staggering investment inflows, sky-high valuations, and a growing disconnect between promises and measurable progress. His warning isn’t anti-technology; it’s pro-reality. Smith argues that while narrow AI has delivered real gains, the sector’s grander promises remain out ahead of the evidence.

If you’re an investor, operator, or policymaker navigating AI’s velocity, this is your cue to slow down just enough to separate durable innovation from speculative fervor.

Read the original Pomona College article

Who Is Gary Smith—and Why This Warning Matters

Gary Smith is a seasoned economics professor at Pomona College, known for rigorous, data-first analysis and for his book “The AI Delusion,” which dissects how statistical pattern-matching is often mistaken for true intelligence. He’s not anti-AI; he’s anti-hype. When Smith says the numbers don’t support the narrative, he’s asking us to do the hard work: trace the returns, test assumptions, and look beyond the gloss.

Anatomy of an AI Bubble: How We Got Here

Most bubbles share a familiar arc. Economists often point to the stages of displacement, boom, euphoria, profit-taking, and panic—what Hyman Minsky called the canonical path of financial excess. If that sounds abstract, think dot-com circa 1999.

Smith’s argument is that AI today is deep in the euphoria phase:

  • Displacement: Breakthroughs in deep learning, transformer architectures, and massive datasets made machine learning mainstream.
  • Boom: Cloud vendors, chipmakers, and model labs pushed capabilities—and spending—into the stratosphere.
  • Euphoria: Trillion-dollar market caps, ubiquitous AI branding, and a rush to “AI-ify” everything from chatbots to toothbrushes.
  • Profit-taking/Panic: Not here yet—but Smith argues conditions are building.

What’s Fueling the Euphoria: Valuations, Ads, and an Arms Race

Let’s be honest: it’s intoxicating. Nvidia and Microsoft—two pillars of AI infrastructure and commercialization—have hit or exceeded trillion-dollar market caps, with Nvidia’s run fueled by insatiable demand for AI compute and Microsoft’s by its AI-infused cloud and productivity suite.

Add in:

  • Saturation marketing (think concurrent national ad campaigns and major-event placements promising AI miracles)
  • A geopolitical race for “AI supremacy” that turbocharges subsidies, talent grabs, and national strategies
  • Fear of missing out in boardrooms and VC portfolios

The result? A sector priced for perfection.

Where the Numbers Don’t Add Up: Smith’s Core Case

Smith’s critique isn’t philosophical; it’s empirical. Several trendlines undermine the everything-everywhere-all-at-once AI narrative.

1) Productivity and GDP: The Missing Macro Signal

If generative AI were already transforming the economy at scale, you’d expect to see a clear, sustained acceleration in national productivity and GDP growth. Instead, the data is mixed. Some micro-studies show meaningful productivity lift in specific tasks and occupations, but a broad-based macro surge is not yet visible.

The takeaway: compelling pilots don’t necessarily translate into economy-wide transformation—at least not yet.

2) Diminishing Returns to Scaling LLMs

Bigger models trained on ever-larger datasets and compute budgets don’t deliver proportionally bigger performance gains. We’ve known for years that there are scaling laws and compute-optimal tradeoffs; the leap from “impressive” to “reliable, general intelligence” is non-linear and still elusive. Costs are rising faster than the quality gains that enterprises actually monetize.

As Smith frames it: the frontier’s getting more expensive, but not necessarily more useful in the ways businesses need.
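The diminishing-returns point can be illustrated with a toy power-law scaling curve: loss falls as a power of compute, so each additional order of magnitude buys a smaller absolute improvement. The constants below are illustrative placeholders, not fitted to any published model; only the general shape is motivated by scaling-law research.

```python
# Toy scaling-law sketch: loss = a * compute^(-alpha). With any alpha < 1,
# each 100x jump in compute yields a smaller absolute loss reduction than
# the previous one. Constants are illustrative, not fitted to real models.

def loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Toy power law: smaller is better, gains shrink as compute grows."""
    return a * compute ** -alpha

prev = None
for exp in range(18, 27, 2):  # compute budgets from 1e18 to 1e26 (toy FLOPs)
    current = loss(10.0 ** exp)
    gain = "" if prev is None else f"  (gain vs previous: {prev - current:.3f})"
    print(f"1e{exp} FLOPs -> loss {current:.3f}{gain}")
    prev = current
```

Each step up in compute still helps, but the marginal improvement shrinks while the cost of each step grows, which is the mismatch Smith highlights.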

3) Venture Capital Surge, Profit Lag

When capital is cheap and narratives are hot, funding floods in. AI has been Exhibit A. In recent years, annual funding to AI startups has exceeded $50 billion, even as many business models remain unproven and profitability elusive.

This isn’t a moral judgment—it’s a cyclical one. The gap between spend and return can’t persist indefinitely.

4) Enterprise “Velocity Whiplash”

Smith points to a corporate phenomenon: leaders sprint toward AI adoption to avoid being left behind, but frontline teams struggle with security, data readiness, integration, and quality. Proofs of concept proliferate; production-grade results lag.

This execution gap turns into budget burn, disillusionment, and skepticism—key ingredients of a bubble’s “oh” moment.

5) History Rhymes: Autonomous Vehicles

Remember when fully autonomous taxis were “2–3 years away” for most of the 2010s? Reality was harder: complex edge cases, regulatory scrutiny, and safety setbacks pushed timelines right. That sector didn’t vanish, but expectations recalibrated.

Smith’s point: don’t mistake momentum for inevitability.

The Valuation Paradox: Why Markets May Be Getting Ahead of Reality

Markets price in future cash flows. For AI leaders, the future priced in is enormous:

  • Perpetual demand for high-margin compute and cloud services
  • Rapid AI adoption across industries
  • Step-function productivity gains that swell profits economy-wide

But if the real world delivers something more incremental—useful, even impressive, but not transformational on the promised timeline—valuations have to normalize. That doesn’t mean “AI goes to zero.” Bubble deflation is often about repricing, not repudiation.

What a Bubble Burst Could Look Like

No one rings a bell at the top, but here’s how a deflation could unfold:

  • Funding pullback: Series B and C rounds tighten. Bridge rounds proliferate. Some AI-first startups pivot to adjacent workflows or fade.
  • Compute demand whipsaw: From “buy all the GPUs” to “sweat the clusters,” as procurement shifts from land-grab to efficiency.
  • Big Tech resilience, but with rotation: Incumbents with diversified cash flows hold up better; multiples compress. “Picks and shovels” winners remain, but growth rates normalize.
  • Enterprise reset: From 100 pilots to a handful of production workloads with proven unit economics. Shadow AI projects get shuttered.
  • Regulatory and policy scrutiny: As promises and outcomes diverge, oversight around claims, safety, competition, and energy intensifies.

In a soft-landing scenario, valuations cool and discipline returns. In a hard-landing scenario, failures cascade and sentiment sours for years.

How to Avoid the Worst Outcomes: Tempered Expectations, Real ROI, and Diversification

Smith’s prescription is pragmatic: slow down the storytelling, speed up the math, and de-risk concentration.

Build a Real ROI Model (Before You Train or Buy)

AI ROI is not magic; it’s math across time.

  • Define the business objective: revenue lift, cost reduction, risk mitigation, or customer experience.
  • Total cost of ownership (TCO):
    • Model costs: API usage or training compute; fine-tuning; inference at expected volumes and latencies
    • Data costs: labeling, RAG pipelines, vector databases, governance and lineage
    • Engineering: integration, orchestration, evaluation harnesses, guardrails, monitoring
    • People: human-in-the-loop review, QA, and escalation
    • Risk: cost of errors/hallucinations, brand risk, compliance
  • Quantify value:
    • Time saved per task x task volume x wage rate
    • Uplift in conversion, AOV, or retention attributable to AI features
    • Error-rate reduction tied to financial outcomes (chargebacks, rework)
  • Stage-gate milestones:
    • Pilot targets: ≥20–30% time savings or measurable quality delta
    • Pre-production: sustained performance under load, ablation-tested
    • Production: unit economics green across realistic demand scenarios

Tip: model multiple scenarios (optimistic, base, conservative) and include sensitivity analyses on token prices, latency SLAs, and model drift.
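The value and TCO arithmetic above can be wired into a small scenario model. A minimal sketch follows; every figure and parameter name is a hypothetical placeholder to be replaced with your own estimates.

```python
# Minimal ROI scenario sketch for one AI workflow. All numbers are
# hypothetical placeholders -- substitute your own estimates.

def annual_value(minutes_saved_per_task: float, tasks_per_year: int,
                 wage_per_hour: float) -> float:
    """Value side: time saved per task x task volume x wage rate."""
    return minutes_saved_per_task / 60.0 * tasks_per_year * wage_per_hour

def annual_cost(inference: float, data: float, engineering: float,
                review: float, risk_reserve: float) -> float:
    """TCO side: model, data, engineering, human review, and risk costs."""
    return inference + data + engineering + review + risk_reserve

scenarios = {
    "optimistic":   dict(minutes=6.0, tasks=200_000, wage=45.0),
    "base":         dict(minutes=4.0, tasks=150_000, wage=45.0),
    "conservative": dict(minutes=2.0, tasks=100_000, wage=45.0),
}
tco = annual_cost(inference=120_000, data=60_000, engineering=150_000,
                  review=80_000, risk_reserve=40_000)  # 450,000 total

for name, s in scenarios.items():
    value = annual_value(s["minutes"], s["tasks"], s["wage"])
    print(f"{name:>12}: value ${value:,.0f}, TCO ${tco:,.0f}, "
          f"net ${value - tco:,.0f}")
```

Note how only the optimistic scenario clears the cost line here; that asymmetry is exactly why the conservative case, not the demo, should drive the go/no-go decision.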

Ruthless Use-Case Prioritization

Not all AI is created equal. Rank by:

  • Regulatory exposure: lower is easier (e.g., internal summarization vs. external financial advice)
  • Error tolerance: higher tolerance makes earlier wins more likely
  • Data readiness: high-quality, proprietary data beats generic prompts
  • Line-of-sight to P&L: “search to solve” use cases outrank “search for a problem”
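One way to operationalize this ranking is a weighted scorecard over the four criteria. The weights and scores below are starting-point assumptions, not a standard; calibrate them to your own portfolio.

```python
# Hypothetical weighted scorecard for ranking AI use cases on the four
# criteria above (1 = worst, 5 = best on each axis). Weights are an
# assumption to tune, not a standard.

WEIGHTS = {
    "regulatory_ease":   0.2,  # lower regulatory exposure scores higher
    "error_tolerance":   0.2,
    "data_readiness":    0.3,
    "pnl_line_of_sight": 0.3,
}

def score(use_case: dict) -> float:
    """Weighted sum across the four prioritization axes."""
    return sum(use_case[k] * w for k, w in WEIGHTS.items())

candidates = {
    "internal summarization":    dict(regulatory_ease=5, error_tolerance=4,
                                      data_readiness=4, pnl_line_of_sight=3),
    "external financial advice": dict(regulatory_ease=1, error_tolerance=1,
                                      data_readiness=3, pnl_line_of_sight=4),
}

ranked = sorted(candidates, key=lambda n: score(candidates[n]), reverse=True)
for name in ranked:
    print(f"{score(candidates[name]):.2f}  {name}")
```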

Design for Reliability, Not Demos

  • Retrieval-augmented generation (RAG) with strong retrieval evaluation
  • Structured outputs with schema validation
  • Guardrails: content filters, policy enforcement, deterministic checks
  • Human-in-the-loop review for high-stakes tasks
  • Offline evaluation suites plus online A/B testing
  • Observability: per-prompt metrics, drift detection, and feedback loops

Standards help: NIST AI Risk Management Framework
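As a minimal sketch of the “structured outputs with schema validation” item, the snippet below shape-checks a model response before it reaches downstream systems, using only the standard library. The field names and the guardrail rule are hypothetical.

```python
import json

# Minimal "structured outputs with schema validation" sketch: reject a
# model response unless it parses as JSON and matches the expected shape.
# Field names and the confidence guardrail are hypothetical examples.

EXPECTED = {"summary": str, "confidence": float, "sources": list}

def validate(raw: str) -> dict:
    """Parse and shape-check a model response; raise ValueError on mismatch."""
    data = json.loads(raw)  # raises on malformed JSON
    for field, ftype in EXPECTED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"bad type for field: {field}")
    if not 0.0 <= data["confidence"] <= 1.0:  # deterministic guardrail check
        raise ValueError("confidence out of range")
    return data

ok = validate('{"summary": "Q3 revenue rose", '
              '"confidence": 0.82, "sources": ["10-Q"]}')
print(ok["summary"])
```

The point is that anything failing validation is retried or escalated, never silently passed along, which is what separates a production workload from a demo.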

Optimize for Unit Economics

  • Token budgeting: prompt compression, instruction tuning, caching, adapters
  • Model right-sizing: use the smallest model that meets quality thresholds
  • Hybrid architectures: combine deterministic code with LLMs sparingly
  • Throughput and latency tuning: batch inference, streaming, hardware acceleration
  • Vendor diversification: avoid lock-in; benchmark periodically

Diversify Beyond an “AI Monoculture”

For investors and execs alike:

  • Balance AI exposure with non-AI cash flow
  • Consider “boring tech” that benefits from AI but isn’t priced like AI (storage efficiency, networking, MLOps, data quality)
  • Focus on must-have infrastructure vs. nice-to-have apps
  • Hedge geopolitical and supply chain risks (chips, energy, datacenter capacity)

Policy and Macro: Watching the Second-Order Effects

Policymakers aren’t ignoring AI’s macro ripple effects. They’re asking:

  • Will AI raise productivity enough to be disinflationary—or will compute/energy bottlenecks and talent shortages be inflationary?
  • How do we measure quality-adjusted output when AI changes the nature of work?
  • What are the systemic risks of concentrated AI infrastructure?

Useful context:

  • IEA: Data centres, AI and electricity demand
  • IMF on Generative AI’s labor impact

Expect more guidance on AI disclosures, model risk, energy reporting, and competition—especially if returns underwhelm and public scrutiny rises.

What Could Prove the Skeptics Wrong

A bubble warning isn’t a forecast of doom; it’s a call for discipline. Here are developments that could justify today’s sky-high expectations:

  • Breakthroughs in reasoning and planning that materially reduce hallucinations and error rates
  • Major algorithmic efficiency gains that flatten the cost-performance curve
  • Next-gen hardware and interconnects that deliver order-of-magnitude inference savings
  • Reliable agentic systems that automate multi-step workflows with verifiable outcomes
  • Tight integration of AI with robotics and edge compute, unlocking physical-world productivity
  • Standardized evaluation benchmarks tied to economic outcomes, not just leaderboards

Any combination of the above would move AI from “promising” to “definitive” for many industries.

Practical Playbook: 90-Day Plan for Leaders

  • Inventory and score use cases across value, feasibility, risk
  • Build two production-bound pilots with clear P&L linkage; kill the rest
  • Stand up an AI governance group spanning security, legal, data, and operations
  • Implement an evaluation harness with offline and online metrics
  • Negotiate vendor SLAs on latency, uptime, and pricing floors
  • Start an efficiency program: prompt optimization, model right-sizing, caching
  • Publish a quarterly AI scorecard to your board: cost, value, risks, roadmap

Metrics That Actually Matter

Track these instead of vanity demos:

  • Task-level throughput and cycle time
  • Quality: human-rated accuracy, error rates, and factuality checks
  • Containment rate: % of customer queries resolved without human handoff
  • Cost per successful outcome (including human review)
  • Uptime and latency percentiles against SLA
  • Model and data drift indicators
  • Energy per 1,000 tokens or per task (if material to margins)
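Two of these metrics, containment rate and cost per successful outcome, fall straight out of an interaction log. The log entries, field names, and review cost below are illustrative.

```python
# Sketch computing containment rate and cost per successful outcome
# (including human review) from a hypothetical interaction log. All
# field names and dollar figures are illustrative.

interactions = [
    {"resolved": True,  "handed_off": False, "model_cost": 0.01},
    {"resolved": True,  "handed_off": True,  "model_cost": 0.01},
    {"resolved": False, "handed_off": True,  "model_cost": 0.02},
    {"resolved": True,  "handed_off": False, "model_cost": 0.01},
]
HUMAN_REVIEW_COST = 4.00  # hypothetical $ per human handoff

# Containment: share of queries resolved without a human handoff.
contained = sum(1 for i in interactions if not i["handed_off"])
containment_rate = contained / len(interactions)

# Cost per success: model spend plus human review, over resolved queries.
handoffs = sum(1 for i in interactions if i["handed_off"])
total_cost = (sum(i["model_cost"] for i in interactions)
              + HUMAN_REVIEW_COST * handoffs)
successes = sum(1 for i in interactions if i["resolved"])
cost_per_success = total_cost / successes

print(f"containment: {containment_rate:.0%}, "
      f"cost per success: ${cost_per_success:.2f}")
```

Notice how a cheap model call stops looking cheap once handoff labor is folded in; that is the “including human review” caveat doing real work.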

The Bottom Line on Smith’s Warning

Gary Smith isn’t betting against AI. He’s betting against magical thinking. His case is straightforward:

  • Narrow AI is real and valuable.
  • The leap to economy-changing general AI is far from guaranteed—and certainly not on a straight line.
  • Valuations, funding, and narratives have sprinted past the evidence.

If you ground your strategy in unit economics, disciplined execution, and portfolio balance, you’ll be positioned for both outcomes: a soft landing if the bubble deflates—and upside participation if breakthroughs arrive.

FAQs

Q: What does “AI bubble” mean in practical terms?
A: It’s when investment, valuations, and expectations in AI rise much faster than demonstrated, monetizable performance. It doesn’t mean AI is fake—only that the price and promise can get ahead of the proof.

Q: Are there real AI productivity gains today?
A: Yes, in targeted workflows: coding assistance, content drafting, summarization, search, and classification can yield 20–50% time savings in controlled studies. The open question is translating those gains into durable, company- and economy-wide results.

Q: What is “velocity whiplash” in enterprises?
A: It’s the gap between executive urgency to “do AI now” and real-world implementation friction—security reviews, data prep, integration complexity, quality control, and change management—leading to too many pilots and too few production wins.

Q: How do I know if an AI use case is bubble-proof?
A: Look for: clear P&L impact within 12 months, high data readiness, manageable risk, measurable quality targets, and a path to unit-economic breakeven after all costs (compute, people, governance).

Q: Are LLMs hitting diminishing returns?
A: At today’s frontier, costs are rising faster than step-change quality improvements for many business tasks. Research shows compute-optimal tradeoffs exist; simply scaling parameters/data doesn’t guarantee proportional gains.

Q: What happened with autonomous vehicles, and why is it relevant?
A: AVs faced harder edge cases and slower regulatory acceptance than expected, pushing timelines right. The analogy: complex, real-world AI often takes longer and costs more than optimistic roadmaps suggest.

Q: Will AI cause inflation or deflation?
A: It could do either, depending on the balance between productivity gains (disinflationary) and resource constraints like energy, compute, and scarce talent (inflationary). Policymakers are studying these dynamics closely.

Q: How should investors position around AI?
A: Balance exposure. Own durable cash generators that benefit from AI, select “picks and shovels” with clear moats, and avoid concentration in unproven AI-first models. Demand real revenue traction and pathway to profitability.

Q: What metrics should my board see in AI updates?
A: Cost per outcome, quality/error rates, containment rate, latency SLOs, model drift, security incidents, and realized dollar impact. Avoid vanity metrics like “number of prompts” or “POCs launched.”

Q: What could invalidate the bubble thesis?
A: Breakthroughs in reasoning reliability, drastic cost-efficiency improvements, and verifiable agentic automation that materially reshape P&L across industries in short order.

Clear Takeaway

AI is powerful—and overpromised. Gary Smith’s warning isn’t to slam the brakes; it’s to steer with headlights on. Temper narratives with numbers. Prioritize use cases with measurable ROI. Diversify beyond a single tech story. If the froth fades, you’ll preserve capital and credibility. If the breakthroughs come, you’ll be ready to scale what actually works.

Discover more at InnoVirtuoso.com

I would love some feedback on my writing, so if you have any, please don’t hesitate to leave a comment here or on any platform that’s convenient for you.

For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring! 


Thank you all—wishing you an amazing day ahead!
