
The $650 billion question: Can Big Tech justify the AI spending spree?

Microsoft, Alphabet, Meta, and Amazon are on track to spend $650 billion on AI infrastructure by the end of 2026. Investors are starting to ask where the return is.

Inside a data centre under construction, where the next wave of AI infrastructure is taking shape
By Zara Kincaid · Mar 26, 2026 · 5 min read

The scale of corporate investment in artificial intelligence has moved beyond 'significant' into the realm of 'historically unprecedented.' According to current guidance and infrastructure buildout rates, the four largest technology companies—Microsoft, Alphabet, Meta, and Amazon—are on track to have committed $650 billion to AI-related capital expenditure by the end of 2026.

KEY TAKEAWAYS

01 Projected Big Tech AI capex to hit $650 billion by year-end 2026.
02 Hardware lead times for NVIDIA B200s remain the primary constraint on deployment.
03 Cloud AI margins are facing pressure as hardware depreciation costs rise.
04 Investors are demanding more granular disclosure on AI-driven revenue.
05 Energy constraints in Tier 1 data centre markets are forcing expansion into secondary regions.

To put that in perspective: it is more than the total annual GDP of Sweden. It is an infrastructure bet larger than the Apollo program, adjusted for inflation. And it is happening in the space of just 36 months. The fundamental question for the market in 2026 is no longer what the technology can do, but whether it can generate a return on this level of capital.

The Blackwell-Rubin cycle

Much of this spending is being swallowed by hardware procurement. The cycle from NVIDIA's Blackwell (B200) architecture to the newly announced Vera Rubin generation has created a 'perpetual upgrade' environment. Hyperscalers cannot afford to be even six months behind on hardware performance, as that would mean losing the low-latency inference market to competitors.

Lead times for high-end GPUs still stretch to 24 weeks for the largest buyers, effectively forcing companies to place orders for infrastructure that won't come online for another two fiscal quarters. This 'pre-ordering' of growth has created a disconnect between current balance sheet pain and future revenue potential. Capex is recognised now; revenue arrives much later.

Investor fatigue and margin pressure

For the first two years of the AI boom, stock prices rose in tandem with capex guidance. Investors treated spending as a proxy for ambition. That phase ended in late 2025. In 2026, the market is punishing companies that increase spending without showing proportional gains in free cash flow.

Meta, in particular, has seen its margins compressed by the sheer cost of running its Llama 4 and Llama 5 inference clusters. While Mark Zuckerberg has argued that the cost of under-investing is higher than the cost of over-investing, depreciation charges are now a significant headwind to earnings per share. Cloud providers, for their part, are hiking prices for premium AI instances to compensate, but competition is keeping a lid on pricing power.
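A back-of-envelope sketch shows why depreciation bites. Under straight-line accounting, a steady annual hardware spend eventually produces a matching annual depreciation charge. All figures below are hypothetical assumptions for illustration, not reported company data:

```python
# Back-of-envelope sketch: straight-line depreciation of a GPU fleet.
# All figures are hypothetical assumptions, not company data.

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line depreciation expense per year for one capex vintage."""
    return capex / useful_life_years

yearly_capex = 40e9   # assumed: $40B of AI hardware bought each year
life_years = 5        # assumed: 5-year useful life

# Each year's purchase adds capex/life to the annual depreciation charge.
# Once `life_years` vintages overlap, the charge reaches steady state:
per_vintage = annual_depreciation(yearly_capex, life_years)  # $8B per vintage
steady_state = per_vintage * life_years                      # $40B per year

print(f"Per-vintage charge: ${per_vintage / 1e9:.0f}B/yr")
print(f"Steady-state depreciation: ${steady_state / 1e9:.0f}B/yr")
```

In other words, once the fleet matures, the annual depreciation charge converges on annual capex, and it lands on earnings regardless of when AI revenue shows up.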

The energy wall

Beyond the cost of the chips, Big Tech is now paying a premium for power. Data centre capacity in traditional hubs like Northern Virginia and Dublin is effectively sold out. Securing 100MW+ of grid connection now requires direct investment in energy generation—further inflating the capex totals.
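The scale of a 100MW grid connection is easy to underestimate. A quick sketch of the annual energy bill at full draw, using an assumed wholesale price (all inputs hypothetical):

```python
# Hypothetical illustration of what a 100MW grid connection costs per year.
# The wholesale price is an assumption, not a reported figure.

connection_mw = 100                            # grid connection size
hours_per_year = 24 * 365                      # 8,760 hours
mwh_per_year = connection_mw * hours_per_year  # 876,000 MWh at full draw
assumed_price_per_mwh = 70                     # assumed wholesale price, USD/MWh

annual_energy_cost = mwh_per_year * assumed_price_per_mwh
print(f"Annual energy cost: ${annual_energy_cost / 1e6:.0f}M")
```

At those assumed inputs the electricity bill alone runs past $60 million a year, before cooling overhead or the capital cost of the generation deals themselves.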

Microsoft's recent decision to restart retired nuclear reactors is the most visible sign of this shift. Companies are no longer just software and service providers; they are becoming heavy infrastructure operators. The operational complexity of managing $650 billion in physical assets is a different challenge than managing code.

For the next 18 months, expect continued capex guidance increases, continued earnings pressure from margin compression, and continued hand-waving about "AI-driven revenue opportunities yet to be realised." For investors who bought these stocks for capital discipline and shareholder returns, that's a frustrating conversation. For investors who bought them for growth option value, it's expected.

The real test arrives in late 2026 or 2027 when the market starts asking harder questions about utilisation rates, effective returns on deployed capital, and whether the $650 billion was a strategic investment or an expensive hedge against obsolescence. That's when the actual balance sheet pressure becomes real, and that's when capital discipline decisions actually constrain acquisition and buyback flexibility.

For now, the spending is justified. Whether it remains justified once the bills come due is the question no one wants to ask.

TLDR

Combined capital expenditure from Microsoft, Alphabet, Meta, and Amazon is projected to hit $650 billion by the end of 2026, driven almost entirely by AI infrastructure. While revenue from cloud AI services is growing at 30-40% annually, the scale of investment is now large enough to impact total corporate margins. Wall Street is shifting from excitement over AI capability to scrutiny of capital efficiency. The core tension for 2026 is whether these 'hyperscalers' are building a productive foundation for the next decade of growth, or simply overbuilding in a desperate attempt to avoid being left behind by the Blackwell-Rubin hardware cycle.

FREQUENTLY ASKED QUESTIONS

Which companies are spending the most?
Microsoft and Amazon (AWS) lead the spending, followed closely by Alphabet and Meta. Combined, these four 'hyperscalers' account for the vast majority of global AI infrastructure investment.
Is the spending sustainable?
Analysts are divided. Some argue that the productivity gains from AI will eventually justify the cost. Others fear a 'capex bubble' where companies are overbuilding capacity that will not find enough paying customers to achieve a reasonable return on investment.
What is the Blackwell-Rubin cycle?
It refers to the rapid transition from NVIDIA's Blackwell GPU architecture (released in 2024/25) to the next-generation Vera Rubin architecture (announced 2026). This rapid hardware obsolescence forces Big Tech to keep spending to maintain competitive performance levels.
