The Unseen Cracks in the AI Economy’s Foundation
I’ve been covering tech long enough to know when the air starts to get a little thin at the top of a hype cycle. And let me tell you, at this year’s Milken Global Conference, in a session on the challenges facing the AI economy, the oxygen masks felt a little closer than usual. We had a fascinating lineup on stage: the chip architect, the cloud giant, the physical AI pioneer, the search disruptor, and even a quantum physicist challenging the very premise of it all. What I found truly compelling, however, wasn’t the usual bravado, but a stark, almost uncomfortable acknowledgment of the very real, very physical limits we’re rapidly approaching.
Remember the dot-com boom? Everyone was building fiber optic networks, assuming infinite demand, only to find themselves with dark fiber and empty data centers when the bubble burst. This feels different, certainly, but the echoes of unbridled optimism running headfirst into physical reality are unmistakable. We’re talking about fundamental constraints here, not just market corrections. We’re talking about silicon, electrons, and the very ground beneath our feet. It’s a sobering thought, isn’t it?
The Hard Physical Limits of Our Digital Dreams
Let’s be honest about this: the AI boom is running into hard, physical limits, and they begin much further down the stack than most venture capitalists or casual observers realize. Christophe Fouquet, the CEO of ASML – the Dutch behemoth holding a near-monopoly on the extreme ultraviolet lithography machines that make modern chips possible – laid it out plain. Despite a “huge acceleration of chips manufacturing,” he’s convinced that for the next three to five years, the market will be supply limited. Full stop. The hyperscalers, the Googles and Microsofts of the world, simply won’t get all the chips they’re paying for. It’s a bottleneck that could slow everything down.
Francis deSouza from Google Cloud underscored this with some eye-popping numbers. Google Cloud’s revenue crossed $20 billion last quarter, growing 63%. But here’s the kicker: their backlog—the revenue committed but not yet delivered—nearly doubled in a single quarter, from $250 billion to an astonishing $460 billion. “The demand is real,” he said, with a calm that belied the sheer scale of the problem. What this tells me, what it should tell all of us, is that the infrastructure isn’t just catching up; it’s falling behind. And fast.
But chips aren’t the only choke point. Qasar Younis, co-founder of Applied Intuition, pointed to another, equally fundamental constraint for physical AI: data. Not just any data, but the kind you can only gather by deploying machines in the real world. You can simulate all you want, but the messy, unpredictable reality of a car on a road or a drone in the sky provides a truth no synthetic environment can fully replicate. This is a critical distinction that many AI evangelists often gloss over, assuming perfect simulation is just around the corner. It isn’t.
The Looming Energy Crisis and Geopolitical Chessboard
If chips are the first bottleneck, energy is the one looming behind it, a shadow growing larger with every new model. DeSouza’s confession that Google is seriously exploring orbital data centers to tap into “more abundant energy” wasn’t just a fascinating aside; it was a flashing red light. Think about that for a second. We’re talking about building data centers in space because we’re running out of viable, sustainable ways to power and cool them on Earth. (And yes, the physics of shedding heat in a vacuum are, as he noted, a nightmare.)
His argument for Google’s integrated stack — custom TPUs, models, agents — paying dividends in flops per watt is compelling, but it also highlights a growing divide. Those with the resources to build their entire stack can achieve efficiencies others can only dream of. For everyone else, energy is simply another barrier, and one that doesn’t come free; as Fouquet dryly observed, “nothing can be priceless.” A 2024 International Energy Agency report estimated that electricity demand from data centers, driven in part by AI, could more than double by 2026, reaching the rough equivalent of Japan’s entire electricity consumption. That’s not a hidden cost; that’s a looming crisis.
Then there’s the elephant in the room: geopolitics. Younis delivered what might have been the panel’s most potent observation: physical AI and national sovereignty are inextricably linked in ways purely digital AI never was. Autonomous vehicles, defense drones, mining equipment—these aren’t just lines of code. They operate within borders, collect sensitive data, and raise profound questions about control. “Almost consistently, every country is saying: we don’t want this intelligence in a physical form in our borders, controlled by another country,” he explained. This feels a lot like the early days of GPS, where military control dictated civilian access, but with far greater implications for everyday life.
Fouquet backed this up with a blunt assessment of China’s AI progress. While impressive at the software layer, it’s fundamentally constrained by a lack of access to advanced EUV lithography. Without those machines, Chinese chipmakers can’t produce the most cutting-edge semiconductors, placing them at a compounding disadvantage. It’s a stark reminder that in this new AI arms race, the physical tools of production are as critical as the digital models themselves. This isn’t just about economic competition; it’s about national security and technological self-determination.
A Different Kind of Intelligence: Beyond the LLM Hype
While the titans debate scale and efficiency within the prevailing large language model (LLM) paradigm, Eve Bodnia, a quantum physicist turned startup founder, is building something fundamentally different. Her company, Logical Intelligence, is betting on energy-based models (EBMs). Instead of predicting the next token, EBMs aim to understand the underlying rules of data, a process she argues is closer to how the human brain actually works. “Language is a user interface between my brain and yours,” she said. “The reasoning itself is not attached to any language.”
Her largest model? A mere 200 million parameters, compared to the hundreds of billions in leading LLMs. She claims it runs thousands of times faster and, crucially, updates its knowledge dynamically without requiring a full retraining from scratch. For domains like chip design or robotics, where understanding physical rules trumps linguistic patterns, EBMs might be a more natural fit. It’s an intriguing counter-narrative, one that forces us to ask: Are we blindly scaling the wrong architecture? (I’ve watched companies chase scale at the expense of fundamental innovation before, and it rarely ends well.)
Dmitry Shevelenko from Perplexity also offered a glimpse beyond the current LLM obsession, describing their evolution from a search product to a “digital worker.” Perplexity Computer isn’t just a tool; it’s designed as a staff that a knowledge worker directs. “Every day you wake up and you have a hundred staff on your team,” he mused. The pitch is compelling, but it immediately raises red flags about control and autonomy. Who’s really in charge here? Shevelenko’s answer, “granularity” in the form of enterprise controls and approval workflows, is important. But the friction he admits some users find annoying is, in my view, absolutely essential. The lessons from previous automation cycles are clear: unchecked agency leads to unintended consequences. Trust, especially in an enterprise context, is built on transparency and explicit control, not blind delegation.
The Human Element: Jobs, Creativity, and Critical Thought
Near the end of our panel, someone asked the question that’s on everyone’s mind, the one that usually gets a carefully worded, optimistic dodge: Is all of this going to impact the next generation’s capacity for critical thinking?
The answers, predictably, leaned towards the positive. DeSouza spoke of AI unleashing humanity to solve grand challenges like neurological diseases and climate change. Shevelenko highlighted the unprecedented accessibility for independent creators with tools like Perplexity Computer. Younis, however, offered the most pragmatic and, perhaps, reassuring perspective: physical AI isn’t just displacing workers; it’s filling voids in sectors like agriculture, mining, and long-haul trucking where labor shortages are chronic because people simply don’t want those jobs. The average American farmer is 58 years old, he reminded us. In these areas, AI isn’t a threat to willing workers, but a necessary solution to a deepening problem.
But let’s not get ahead of ourselves. While the potential is undeniable, the challenges are equally profound. From the physical constraints of silicon and energy to the geopolitical implications of sovereignty and the philosophical questions about the nature of intelligence itself, the road ahead for the AI economy is anything but smooth. We’re in a moment of immense technological power, yes, but also one of deep uncertainty. The architects of this new world are acutely aware of where the wheels could come off. The real question is: are we, the users, the investors, and the policymakers, listening closely enough?
Image Source: techcrunch.com