
Recently, three heavyweight figures from different fields sat down at the same table: contrarian investor Michael Burry, who foresaw the 2008 financial crisis; Anthropic co-founder Jack Clark, who works with cutting-edge AI models; and host Dwarkesh Patel, who has interviewed nearly every major figure in Silicon Valley.
They put the question bluntly: as the AI revolution surges forward, can the global economy pass safely through this dramatic transformation? Is this wave of AI investment a necessary path to realizing the future, or a historic capital misallocation unfolding right before our eyes?
Key Takeaways
The “cognitive correction” from 2017–2025:
What truly ignited AI was large-scale pretraining, not training agents from scratch. The industry once bet on "whiteboard agents," believing that general intelligence could be forced out through task environments alone. The result was superhuman performance only on narrow, in-distribution tasks. What actually changed the world was the Transformer and scaling laws, and today's return of agents is built on top of massive pretrained models. The consensus: what we see now is the floor of capability, not the ceiling.
Chatbots triggered a trillion-dollar infrastructure race:
The investment logic is highly unusual. ChatGPT initially looked like a writing, search, and homework tool, yet it unexpectedly ignited a global trillion-dollar hardware and infrastructure boom. Application-layer revenue has not yet materialized, but capital expenditure has already exploded. Traditional software companies are being forced to transform into capital-intensive hardware firms. This “build infrastructure first, wait for demand later” model is extremely rare in investment history.
“Who is winning” is not a simple question:
In AI, leading advantages are not durable. Unlike winner-takes-all platform markets, AI is more like a high-competition arena. Leadership constantly shifts among Google, OpenAI, Anthropic, and others. Talent mobility, ecosystem diffusion, and reverse engineering rapidly erode moats. The current pattern looks like “the top three take turns on stage,” with leadership always at risk of reversal.
Does AI really boost productivity?
The key issue is not how powerful the tools are, but the lack of reliable, quantifiable metrics. The existing data conflict: METR's research suggests AI coding tools can actually reduce developer efficiency, while Anthropic's user surveys claim roughly a 50% productivity boost. Both sides agree the real problem is the absence of a fine-grained "process dashboard": feeling faster does not equal real efficiency gains. The industry urgently needs a trustworthy productivity measurement system.
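The measurement gap described above can be made concrete. A minimal sketch, using entirely hypothetical task logs, of comparing median task-completion times with and without AI assistance instead of relying on self-reported speedups:

```python
from statistics import median

# Hypothetical task logs: minutes to complete comparable tickets,
# split by whether the developer used an AI coding assistant.
with_ai = [42, 55, 38, 61, 47, 50]
without_ai = [40, 44, 39, 52, 41, 46]

def speedup(baseline, treatment):
    """Relative change in median task time (positive = faster with AI)."""
    b, t = median(baseline), median(treatment)
    return (b - t) / b

change = speedup(without_ai, with_ai)
print(f"Median task-time change with AI: {change:+.1%}")
```

With these made-up numbers the metric comes out negative, i.e. tasks took longer with the assistant, which is exactly the kind of result a process dashboard could surface while users still report feeling faster.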
Why hasn’t AI replaced white-collar jobs at scale?
In theory, today’s models far exceed 2017 expectations. In practice, error rates, weak self-correction, and complex accountability chains prevent seamless integration into real workflows. Only “naturally closed-loop” scenarios like software development scale quickly. Most industries need verification and automation loops before true productivity gains can be unlocked.
Burry’s core concern:
Not whether AI is useful, but whether returns on capital can hold up. He focuses on ROIC, depreciation cycles, and stranded-asset risk. Data centers and chips are replaced extremely fast, and many of these assets should not be depreciated on traditional long-term schedules, or reported profits will be overstated. If end-user AI revenue grows far more slowly than infrastructure investment, construction-in-progress assets could pile up on balance sheets and even trigger a chain of private-credit risk.
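The depreciation point is arithmetic. A minimal sketch with entirely hypothetical figures, showing how the choice of useful life under straight-line depreciation changes reported profit for the same underlying business:

```python
def annual_profit(revenue, opex, capex, useful_life_years):
    """Reported annual profit under straight-line depreciation.

    All figures are hypothetical; this only illustrates the accounting
    sensitivity Burry is pointing at, not any real company's numbers.
    """
    depreciation = capex / useful_life_years
    return revenue - opex - depreciation

# Hypothetical data-center economics, in $ billions.
revenue, opex, capex = 30.0, 10.0, 60.0

long_life = annual_profit(revenue, opex, capex, 6)   # optimistic 6-year schedule
short_life = annual_profit(revenue, opex, capex, 3)  # schedule closer to GPU refresh cycles

print(f"Profit on a 6-year schedule: ${long_life:.0f}B")
print(f"Profit on a 3-year schedule: ${short_life:.0f}B")
```

Halving the assumed useful life here wipes out the entire reported profit, which is why the depreciation schedule, not the headline revenue, is where this argument gets decided.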
What could change their minds in 2026:
Turn opinions into testable bets. The guests proposed indicators they are willing to be “proven wrong” by: whether AI application revenue exceeds $500 billion, whether frontier labs surpass $100 billion in revenue, whether chip lifespans extend, whether continuous learning is solved, and whether scaling hits a wall. Over the next year, the industry’s health will be judged across four dimensions: revenue, capability, efficiency, and return on capital.
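The two revenue thresholds above are explicit enough to score mechanically. A minimal sketch of such a scorecard, where the thresholds come from the discussion but the 2026 "actuals" are entirely hypothetical placeholders:

```python
# Falsifiable 2026 thresholds from the discussion, in $ billions.
thresholds = {
    "ai_app_revenue": 500,      # AI application revenue exceeds $500B
    "frontier_lab_revenue": 100,  # a frontier lab surpasses $100B in revenue
}

def scorecard(actuals, thresholds):
    """Mark each bet True if the (hypothetical) actual clears its threshold."""
    return {name: actuals.get(name, 0) > bar for name, bar in thresholds.items()}

# Made-up 2026 outcomes, purely to show the mechanism.
hypothetical_2026 = {"ai_app_revenue": 320, "frontier_lab_revenue": 110}
results = scorecard(hypothetical_2026, thresholds)
print(results)
```

The qualitative bets (chip lifespans, continuous learning, scaling walls) resist this treatment, which is itself part of the guests' point: only some of the disagreement is cleanly falsifiable.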
Consensus: the ultimate bottleneck is energy, not algorithms.
No matter how models evolve, compute demand keeps rising, and power supply becomes the hard constraint. Small modular nuclear reactors, independent grids, and energy infrastructure will determine whether AI can enter the real economy at scale. The real limit is not models but where the electricity comes from. The AI revolution may be written into power grids, not code.
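The scale of the power constraint is easy to estimate with back-of-the-envelope arithmetic. A minimal sketch, where the fleet size, per-accelerator power draw, server overhead, and PUE are all illustrative assumptions rather than measured figures:

```python
def campus_power_mw(num_gpus, watts_per_gpu, overhead_fraction, pue):
    """Rough facility power draw in megawatts.

    overhead_fraction covers CPUs, networking, and storage per accelerator;
    pue (power usage effectiveness) covers cooling and distribution losses.
    All inputs here are assumptions for illustration only.
    """
    it_load_watts = num_gpus * watts_per_gpu * (1 + overhead_fraction)
    return it_load_watts * pue / 1e6

# Hypothetical campus: 100,000 accelerators at 1,000 W each,
# 20% server overhead, and a PUE of 1.3.
demand = campus_power_mw(100_000, 1_000, 0.20, 1.3)
print(f"Estimated campus demand: {demand:.0f} MW")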
How to judge whether the AI boom has gone off track:
Watch five variables. The real value of this debate is that it offers hard indicators for healthy development. Technology will keep improving, but that does not mean commercial loops have closed; productivity gains need real data, not vibes; depreciation and maturity-mismatch risks in the capital cycle will surface; job disruption stays muted because real workflows are far more complex than demos; and energy and infrastructure are the ultimate ceiling.
To see whether AI is veering off course, track five lines: capability, efficiency, return on capital, industrial closed loops, and energy supply.