When Jensen Huang took the stage at GTC 2026 last week, he devoted considerable time to a project that Nvidia did not build, does not control, and cannot monetise directly. OpenClaw, an open-source AI agent framework created by Austrian developer Peter Steinberger, has, in Huang's words, become 'the most popular open-source project in the history of humanity' and is 'definitely the next ChatGPT.'
TLDR
OpenClaw, an open-source AI agent framework built by Austrian developer Peter Steinberger, has become what Jensen Huang called 'the most popular open-source project in the history of humanity.' The framework's rapid adoption suggests a structural shift in AI markets: foundation models from OpenAI, Anthropic, and Google may be commoditising, with value migrating to the orchestration layer. This has significant implications for competition policy, enterprise governance, and the trillion-dollar valuations currently assigned to model providers.
KEY TAKEAWAYS
- Huang's claim warrants scrutiny: he is not given to hyperbole, and Nvidia's fortunes depend on accurate assessments of AI infrastructure trends.
- If he is correct, the implications extend well beyond technical architecture.
- The rise of agent frameworks like OpenClaw suggests that foundation models themselves may be commoditising, with value migrating from the companies that train massive neural networks to the orchestration layers that make those networks useful.
OpenClaw is the most popular open-source project in the history of humanity. It is definitely the next ChatGPT.
— Jensen Huang, CEO of Nvidia, GTC 2026
The market evidence
The commoditisation thesis finds support in several market data points. The Ramp AI Index, which tracks enterprise AI purchasing patterns, shows that businesses are now 70 per cent more likely to choose Anthropic over OpenAI when making their first AI purchase. This represents a reversal from 2024, when OpenAI dominated enterprise adoption by a similar margin.
The shift suggests that enterprise buyers increasingly view foundation models as interchangeable inputs rather than differentiated products. If Claude, GPT, and Gemini produce sufficiently similar outputs for most business applications, the competitive advantage moves to whatever layer sits above them: the agent frameworks that orchestrate model calls, manage context, handle tool use, and integrate with enterprise systems.
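The orchestration pattern described above can be sketched in a few lines. The sketch below is illustrative only: the class, method names, and the `TOOL:` reply convention are invented for this example and are not OpenClaw's actual API. The point it demonstrates is structural: when models are reduced to an interchangeable callable interface, the agent layer owns context, tool dispatch, and the customer-facing behaviour.

```python
# A toy orchestration layer, assuming hypothetical names throughout.
# It routes prompts to interchangeable model backends, keeps a running
# transcript (context management), and dispatches registered tools.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    backends: dict[str, Callable[[str], str]]
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    history: list[str] = field(default_factory=list)

    def register_tool(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def run(self, prompt: str, backend: str) -> str:
        self.history.append(prompt)               # context: record the turn
        reply = self.backends[backend](prompt)    # model call, provider-agnostic
        if reply.startswith("TOOL:"):             # toy tool-use convention
            _, name, arg = reply.split(":", 2)
            reply = self.tools[name](arg)
        self.history.append(reply)
        return reply

# Stub backends stand in for Claude, GPT, or Gemini; because the agent
# depends only on the callable interface, swapping providers is one line.
stub_claude = lambda p: f"claude says: {p}"
stub_gpt = lambda p: "TOOL:upper:" + p            # asks the agent to run a tool

agent = Agent(backends={"claude": stub_claude, "gpt": stub_gpt})
agent.register_tool("upper", str.upper)

print(agent.run("hello", backend="claude"))  # claude says: hello
print(agent.run("hello", backend="gpt"))     # HELLO
```

Note that the stub models are trivially interchangeable from the caller's point of view, which is precisely the commoditisation dynamic the market data suggests.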
OpenAI's response to OpenClaw's rise has been instructive for understanding how the company perceives the competitive threat. In February 2026, Sam Altman announced that Steinberger would join the company, with OpenClaw moving to an independent foundation structure. The hiring pattern suggests OpenAI recognises that controlling the orchestration layer may matter as much as improving the underlying model.
The company is also scaling aggressively, with plans to double its workforce from 4,500 to 8,000 employees by year end. OpenAI has specifically hired specialists for 'technical ambassadorship' to help businesses implement AI, a function that agent frameworks like OpenClaw are designed to simplify. The investments indicate a defensive posture against the abstraction of foundation model capabilities.
The security governance gap
Not all enterprise adopters are comfortable with the current state of OpenClaw's security architecture. Gavriel Cohen, a developer who evaluated the framework for a client engagement, created a variant called NanoClaw after concluding that the mainline project did not meet enterprise security requirements.
Cohen's assessment was direct: 'It's not responsible to connect customer data to this framework without additional safeguards.' He has since partnered with Docker to provide a containerised version with enhanced isolation, suggesting that market demand exists for enterprise-grade variants of open-source agent frameworks.
Nvidia is attempting to address this gap directly. The company announced NemoClaw, a free security service designed to help enterprises deploy OpenClaw with appropriate governance controls. The offering represents an unusual move for Nvidia, which typically sells hardware rather than providing free software services, and indicates how seriously the company views OpenClaw's infrastructure role.
The China vector
Chinese technology companies have moved quickly to integrate OpenClaw into their products. On Sunday, Tencent announced that WeChat would incorporate OpenClaw functionality, joining Alibaba, Baidu, and ByteDance in building on the Austrian developer's framework. The speed of adoption in China raises questions about technology transfer that US policymakers have historically viewed with concern.
The dynamic differs from previous technology transfer concerns because OpenClaw is open source by design. Unlike restricted chip architectures or proprietary model weights, agent framework code is freely available to any developer worldwide. The strategic question is whether the value in AI systems ultimately resides in the models themselves or in the infrastructure that coordinates their use.
If foundation models are indeed commoditising, US export restrictions on AI may be addressing the wrong layer of the technology stack. The most valuable AI capability may not be training a frontier model, which requires billions in compute and data, but rather orchestrating multiple models effectively, which requires sophisticated software engineering but is fundamentally replicable.
The competition policy question
For regulators assessing AI market concentration, OpenClaw's rise complicates the analysis. The foundation model layer appears increasingly competitive, with OpenAI, Anthropic, Google, Meta, and various Chinese and European entrants all producing capable large language models. If these models are substitutes for most applications, concerns about monopoly power at the model layer may be overstated.
The agent framework layer presents a different picture. OpenClaw has achieved network effects that may prove difficult to displace. Developers who learn the OpenClaw API, build skills on the platform, and integrate it into their workflows create switching costs even for an open-source project. If a single framework becomes the default orchestration layer for enterprise AI, competition concerns may simply shift rather than disappear.
David Bader, a computer science professor at the New Jersey Institute of Technology, offered a useful analogy: 'The models become the engine; the agent framework becomes the car.' The analogy suggests that while engines are important, the car manufacturer typically captures more value and owns more of the customer relationship than the engine supplier.
Jay Goldberg at Seaport Research has been the lone analyst with a sell rating on Nvidia, but even he now sees OpenClaw as potentially validating the company's AI infrastructure thesis. If agent frameworks become the primary interface for enterprise AI, demand for the GPU compute that powers underlying model inference could accelerate rather than moderate.
The valuation question
OpenAI and Anthropic carry combined valuations exceeding one trillion dollars, based largely on the premise that foundation models represent durable competitive advantages. If models are commoditising while frameworks capture the orchestration value, these valuations may require reassessment.
The contrast is stark: OpenClaw was built by a single developer working independently, while OpenAI has consumed over $10 billion in compute costs to train its models. If an agent framework built with modest resources can become 'the most popular open-source project in the history of humanity,' the capital intensity of foundation model development may not translate into proportionate value capture.
None of this implies that foundation models are worthless or that OpenAI and Anthropic have no moat: frontier model capabilities continue to improve, and the companies maintain leads in specific domains. But the assumption that value would accrue primarily to model providers may be giving way to a more distributed reality, where frameworks, tools, and applications capture increasing share of the AI value stack.
For enterprises evaluating AI investments, the implications are practical. Building deep dependencies on any single foundation model provider may prove unwise if models are becoming interchangeable. Investing in agent framework expertise, conversely, may offer more durable returns as orchestration capabilities become the differentiating factor in enterprise AI deployment.