Oracle and OpenAI have walked away from plans to expand a flagship artificial intelligence data centre in Texas. The site, located in Abilene, was meant to be a centrepiece of the Stargate project, the $500 billion AI infrastructure initiative announced at the White House in January 2025 with then-President Donald Trump standing alongside tech executives.
TLDR
Oracle and OpenAI have abandoned plans to expand a flagship AI data centre in Texas, despite OpenAI's recent $110 billion funding round and preparations for a 2026 IPO that could value the company above $840 billion. The collapse, first reported by Bloomberg News, came after negotiations stalled over financing structures and OpenAI's evolving compute requirements. The deal was part of Stargate, the US AI infrastructure initiative that promised to transform American compute capacity.
KEY TAKEAWAYS
Bloomberg News reported on 6 March 2026 that negotiations between the two companies dragged on for months before collapsing. The sticking points were financing structures and OpenAI's shifting requirements for compute capacity. Neither company has commented publicly on the breakdown. The Abilene site already has eight buildings operational, and a separate 4.5 gigawatt partnership between Oracle and OpenAI remains on track according to Reuters, but the expansion is dead.
The Stargate Promise
When Stargate was announced, it carried the weight of a national project. The initiative promised $500 billion in AI infrastructure investment across the United States by 2029, targeting 10 gigawatts of compute capacity across data centres in Texas, Oklahoma, and other states. OpenAI, SoftBank, Oracle, and UAE-backed investment firm MGX incorporated the venture as Stargate LLC in Delaware.
Oracle was the infrastructure partner. OpenAI was the anchor tenant. The arrangement made sense on paper: Oracle had data centre expertise and real estate relationships across the US; OpenAI had apparently insatiable demand for compute and access to capital that most companies can only dream of. Together, they were meant to build the physical layer supporting the next generation of AI development.
By February 2026, the consortium had announced five additional Stargate sites across the United States, bringing planned capacity to nearly 7 gigawatts with over $400 billion in investment committed over three years. OpenAI had said in a public statement that this progress put it on track to secure the full $500 billion, 10 gigawatt commitment by the end of 2025, ahead of schedule. That confidence makes the Texas collapse all the more striking.
According to sources cited by Bloomberg, Oracle and OpenAI disagreed over how the flagship expansion should be financed. Oracle reportedly wanted long-term capacity commitments that OpenAI was reluctant to make. Meanwhile, OpenAI's compute requirements kept evolving as its models grew larger and its product roadmap shifted toward agentic applications requiring different infrastructure configurations.
A Financing Problem in a Cash-Rich Industry
The collapse looks particularly odd given the amount of capital flowing into AI. OpenAI raised $110 billion in its most recent funding round, valuing the company at figures that dwarf most publicly traded technology firms. The company is preparing for an initial public offering in 2026 that could value it above $840 billion, which would make it one of the largest IPOs in history.
Money is clearly not the constraint. But the structure of that money creates its own complications. Venture and growth equity investors want returns measured in years, not decades. They want OpenAI to build products, capture markets, and generate cash flows. They do not want real estate exposure. Data centres require billions in upfront capital with returns spread over 20 or 30 years. Bridging these two capital structures is harder than either party may have anticipated.
AI companies are raising capital for growth, not for infrastructure. They want the compute, they don't want to own the buildings.
— Infrastructure analyst, US investment bank
This structural mismatch helps explain why Stargate's partners have struggled to convert announcements into operational facilities. Hyperscalers like Microsoft, Google, and Amazon own their data centres because they have decades of infrastructure experience and balance sheets built for long-duration capital deployment. OpenAI, despite its eye-watering valuation, does not have that luxury. Its investors want high-velocity growth, not patient infrastructure returns.
Oracle occupies an awkward middle ground. It wants to be a cloud infrastructure player competing with AWS and Azure, but it does not have the scale or the balance sheet flexibility of its larger rivals. The Texas deal required Oracle to bear risks that may have exceeded its appetite, particularly given uncertainty about whether OpenAI would actually consume the capacity once built.
The Energy Constraint Tightens
Beneath the financing disagreements lies a more fundamental constraint: power. AI data centres are extraordinarily energy-intensive. A single large language model training run can consume more electricity than a small town uses in a year. Scaling this to dozens of simultaneous training jobs requires power infrastructure that many American regions simply do not have.
Texas was selected in part because of its relatively deregulated energy market and available transmission capacity. But even Texas has limits, and those limits are approaching fast. ERCOT, the state's grid operator, reported in December 2025 that its large load interconnection queue had swelled to more than 230 gigawatts, nearly quadruple the level from a year earlier. Texas peak demand currently sits around 87 gigawatts and is projected to rise to 138 gigawatts by 2030, an increase of nearly 60 per cent. By 2031, ERCOT projects demand could reach 218 gigawatts.
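As a quick sanity check, using only the figures quoted above, the projected growth and the scale of the queue work out as follows (a rough sketch, not official ERCOT methodology):

```python
# Rough arithmetic check on the ERCOT figures quoted above (all in gigawatts).
peak_today = 87    # current Texas peak demand
peak_2030 = 138    # projected peak demand by 2030
queue = 230        # large load interconnection queue, December 2025

growth_pct = (peak_2030 - peak_today) / peak_today * 100
queue_multiple = queue / peak_today

print(f"Projected growth to 2030: {growth_pct:.0f}%")   # ~59%, "nearly 60 per cent"
print(f"Queue vs current peak: {queue_multiple:.1f}x")  # ~2.6x today's peak demand
```

In other words, the interconnection queue alone now represents roughly two and a half times the entire grid's current peak load.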
The state government has responded with new regulations. Senate Bill 6, taking effect in 2026, requires new facilities drawing more than 75 megawatts from the grid to curtail operations during grid emergencies. For data centres running AI training workloads, mandatory curtailment introduces operational risks that complicate the economics of expansion.
These constraints are not unique to Texas. Across the United States, data centre developers are competing for access to reliable power. Virginia, which hosts the largest concentration of data centres globally, has seen utilities struggle to keep pace with demand. Some projects have been delayed by years waiting for utility connections. Others have been cancelled entirely when projected energy costs made the economics unworkable.
The Economics of AI Compute
The Texas setback exposes a tension at the heart of the AI industry's growth trajectory. Training large models requires massive, concentrated compute capacity, but operating that capacity requires power infrastructure that takes years to build. The AI labs are moving faster than the utilities and data centre developers can follow.
Consider the scale involved. A single Nvidia H100 GPU, the current workhorse of AI training, draws around 700 watts under load. An eight-GPU server, once its CPUs, networking, and fans are counted, draws around 10 kilowatts. A training cluster of 25,000 GPUs, roughly what OpenAI reportedly used for GPT-4, requires approximately 30 megawatts of continuous power. Next-generation models are expected to require clusters two to four times larger.
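The back-of-envelope maths behind those figures looks like this (a sketch using the approximate numbers above; real deployments add cooling and distribution overhead on top):

```python
# Back-of-envelope check of the cluster power figures quoted above.
# Assumptions: ~700 W per H100 under load, ~10 kW per eight-GPU server
# once CPUs, networking, and fans are included; cooling overhead excluded.
GPU_WATTS = 700
SYSTEM_KW = 10
CLUSTER_GPUS = 25_000

gpu_only_kw = 8 * GPU_WATTS / 1_000        # 5.6 kW of GPUs alone per server
systems = CLUSTER_GPUS // 8                # 3,125 eight-GPU servers
cluster_mw = systems * SYSTEM_KW / 1_000   # ~31 MW of continuous draw

print(f"{systems} servers, ~{cluster_mw:.0f} MW continuous")
```

Doubling or quadrupling the cluster, as next-generation models are expected to require, pushes a single training site toward 60 to 125 megawatts before cooling is counted.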
At those scales, finding power becomes as difficult as finding capital. Microsoft and Google have signed deals with nuclear operators to secure carbon-free power for future data centres. Amazon has acquired data centre campuses adjacent to nuclear plants. Smaller players, including Oracle, do not have the same options.
What This Means for the AI Buildout
The Oracle-OpenAI collapse does not mean Stargate is dead. The eight operational buildings at Abilene continue to run, other facilities associated with the project continue to develop, and new partnerships may emerge to fill the gap. But the Texas setback underscores the distance between AI industry ambitions and the practical realities of building physical infrastructure.
For the broader AI industry, execution risk is becoming unavoidable. Announcements are easy; data centres are hard. The companies that thrive will be those that can solve the capital structure problem, bringing together technology growth capital with infrastructure-grade financing, while securing reliable energy at sustainable costs.
OpenAI has options. The company's capital position gives it leverage to negotiate with other data centre operators. Microsoft, its largest investor, operates one of the world's largest cloud infrastructures and has publicly committed to providing OpenAI with compute capacity. Third-party colocation providers, including Vantage Data Centers, are competing aggressively for AI workloads.
What OpenAI cannot easily replicate is time. Every month of delayed infrastructure is a month of constrained model training capacity. For a company racing to maintain its lead over Anthropic, Google DeepMind, and an increasingly capable open-source ecosystem, these delays carry competitive costs that no funding round can offset. The next Stargate partner announcement, if it comes, will face considerably more scrutiny. ERCOT's large load interconnection queue alone now contains more requested capacity than the entire Texas grid supplies at peak.
SOURCES & CITATIONS
- Bloomberg News — Oracle and OpenAI abandon Texas data centre expansion, 6 March 2026
- Reuters — Stargate project financing breakdown
- OpenAI — Stargate advances with 4.5 GW partnership with Oracle
- ERCOT — Large load interconnection queue exceeds 225 GW, December 2025
- Data Center Dynamics — ERCOT electricity demand set to reach 218 GW by 2031
- Financial Times — OpenAI funding round and IPO preparation coverage