AI

Russia Moves to Ban ChatGPT, Claude and Gemini Under New AI Laws

Moscow's digital ministry publishes sweeping regulations that would restrict foreign AI tools transferring Russian user data abroad. Western companies face a familiar ultimatum.

Russia's Ministry for Digital Development published the proposed AI regulations this week.
By Nadia Petrova · 2026-03-22

The document runs to forty-three pages of dense legal Russian, published without fanfare on the Ministry for Digital Development's website on a Friday afternoon. Within its clauses lies the framework for what would amount to the effective expulsion of ChatGPT, Claude, and Gemini from the Russian internet, another front in Moscow's campaign to construct what officials call a 'sovereign digital space' free from foreign influence.

TLDR

Russia's Ministry for Digital Development has published draft regulations that would give Moscow sweeping powers to restrict foreign AI tools including ChatGPT, Claude and Gemini. The rules, expected to enter force in 2027, would require AI platforms with more than 500,000 daily Russian users to store data locally for three years. Western technology companies have historically refused such demands. The regulations cite protecting citizens from 'covert manipulation' but the primary driver is data sovereignty, part of Russia's broader push for a 'sovereign internet' insulated from foreign influence. Domestic AI developers like Sberbank and Yandex would benefit from reduced foreign competition.

KEY TAKEAWAYS

01. Russia's digital ministry published draft regulations to restrict foreign AI tools
02. ChatGPT, Claude and Gemini could face restrictions or bans under the new rules
03. AI platforms with 500,000+ daily users must store Russian user data locally for three years
04. Regulations expected to enter force in 2027 after further review
05. Western companies have historically refused Russian data localisation demands

The draft regulations, which the ministry says will undergo further review before entering force in 2027, would require any artificial intelligence platform serving more than 500,000 daily users in Russia to store all Russian user data on servers physically located within the country, maintain those records for at least three years, and submit to government audits of their algorithms. The stated purpose is protecting Russian citizens from 'covert manipulation and discriminatory algorithms,' but the practical effect would be to force Western AI companies into an impossible choice: comply with demands that would compromise their global data architecture, or withdraw from the Russian market entirely.

Western technology companies have faced this ultimatum before: LinkedIn was blocked in Russia in 2016 after refusing to relocate Russian user data to local servers. Google, Apple, and Meta have all navigated various standoffs with Russian regulators over data localisation, content moderation, and access to encrypted communications. But the AI regulations represent something new: an attempt to control not just what data flows across borders, but the nature of the intelligence that processes it.

The data sovereignty calculation

When Russia began implementing its 'sovereign internet' law in 2019, officials in Moscow spoke openly about their strategic objectives. The concern was never really about protecting citizens from manipulation, though that framing plays well domestically. The concern was about dependence: the fear that critical infrastructure, communication systems, and now artificial intelligence had come to rely on platforms controlled by companies that answered to Washington, not Moscow.

The AI regulations extend this logic into a domain where the stakes are arguably higher. Unlike social media posts or search queries, interactions with large language models involve users revealing their thinking processes, their questions, their uncertainties. A Russian intelligence officer using ChatGPT to draft analysis creates a record on servers in the United States, while a Russian executive asking Claude to summarise confidential documents sends that information through infrastructure controlled by an American company. From Moscow's perspective, this represents an unacceptable vulnerability.

The regulations also require that AI systems respect 'traditional Russian spiritual and moral values,' a phrase that has appeared with increasing frequency in Russian legislation over the past decade. In practice, this clause provides regulators with broad discretion to block any AI system whose outputs they deem inconsistent with state ideology, regardless of data localisation compliance.

Foreign artificial intelligence systems that transfer Russian citizens' personal data abroad create risks of covert manipulation and algorithmic discrimination that are incompatible with the protection of our national interests.

— Russian Ministry for Digital Development, draft regulations preamble

The China model beckons

Russia is not pioneering this approach so much as adapting it. China's AI ecosystem operates almost entirely independently of Western platforms, with domestic champions like Baidu, Alibaba, and ByteDance developing large language models that run on Chinese infrastructure under Chinese government oversight. The emergence of capable open-source models from Chinese developers, including DeepSeek and Qwen, offers Russia a potential pathway to AI capability without Western dependence.

Unlike proprietary models from OpenAI, Anthropic, and Google, open-source AI systems can be downloaded, modified, and run entirely on local infrastructure. Russian technology companies and government agencies could theoretically deploy these models on servers within Russia, train them on Russian data, and fine-tune them to produce outputs aligned with state objectives, ensuring that no user data ever crosses the border.
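The local-deployment pathway described above can be illustrated with a minimal configuration sketch for self-hosting an open-weight model behind an OpenAI-compatible API. This is an illustrative assumption, not a description of any actual Russian deployment; the model name, image tag, and paths are placeholders chosen for the example:

```yaml
# Illustrative sketch: serving an open-weight model entirely on local
# infrastructure using vLLM's OpenAI-compatible server via Docker Compose.
# All inference and all user data stay on the host; nothing crosses a border.
services:
  llm:
    image: vllm/vllm-openai:latest          # assumption: public vLLM server image
    command: ["--model", "Qwen/Qwen2.5-7B-Instruct"]  # illustrative open-weight model
    ports:
      - "8000:8000"                          # API exposed only on local infrastructure
    volumes:
      - ./models:/root/.cache/huggingface    # model weights cached on local disk
```

Once the weights are downloaded and cached, a setup along these lines can run with no outbound connection at all, which is precisely what makes open-weight models attractive to governments pursuing data sovereignty.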

Domestic AI developers stand to benefit directly from reduced foreign competition. Sberbank, Russia's largest bank, has invested heavily in AI development and operates its own large language model. Yandex, often described as Russia's Google, has AI capabilities that would become more attractive to Russian businesses and government agencies if Western alternatives were unavailable. Whatever their stated rationale, the regulations create market conditions favourable to these domestic players.

The fragmenting internet accelerates

The proposed Russian regulations arrive as the global internet continues to fragment along geopolitical lines. China maintains its 'Great Firewall' blocking access to most Western platforms, while Iran, North Korea, and several other states operate their own filtered internet environments. Russia has been building its infrastructure for internet isolation since at least 2019, though implementation has been slower and less comprehensive than China's.

Artificial intelligence may accelerate this fragmentation because the technology is simultaneously too important and too sensitive for many governments to tolerate foreign control. The United States has imposed export controls preventing advanced AI chips from reaching China. The European Union is implementing its AI Act with extraterritorial provisions. Now Russia is moving to exclude American AI platforms entirely.

The result may be an AI 'splinternet' in which users in different jurisdictions have access to fundamentally different artificial intelligence systems, trained on different data, optimised for different values, and subject to different forms of oversight. A question asked of ChatGPT in New York would receive a different answer than the same question asked of a Russian-hosted model in Moscow, not because of technical limitations but because of deliberate policy divergence.

We are witnessing the early stages of what may become a permanent division of the global AI landscape along geopolitical lines. The assumption that AI would be a unifying technology accessible across borders is giving way to a reality of competing, incompatible systems.

— Carnegie Endowment for International Peace, Digital Sovereignty Report, February 2026

Implications for Russian users

For Russian technology workers, researchers, and businesses, the regulations create immediate practical problems. Many have integrated ChatGPT and similar tools into their workflows over the past three years: software developers rely on AI assistants for coding, researchers use them for literature review and analysis, and small businesses use them for customer service and content generation. A ban would force rapid adaptation to domestic alternatives that may be less capable, at least initially.

Virtual private networks already provide a workaround for many Russian users accessing blocked Western platforms. The regulations include provisions for blocking VPN services that facilitate access to restricted AI tools, but Russia's track record on VPN enforcement has been inconsistent. Determined users will likely find ways to access foreign AI platforms, even as the official Russian internet becomes increasingly isolated.

The 2027 implementation timeline gives all parties time to prepare, and to negotiate. Western AI companies could theoretically explore data localisation arrangements, though this would require significant architectural changes and raise questions about whether segregated Russian instances would receive the same model updates as global versions. More likely, the companies will simply accept exclusion from a market that, while large, represents a small fraction of their global user base.

What to watch

The draft regulations remain subject to revision, and the 2027 implementation date is not fixed. The key variables over the coming months include whether Western AI companies engage with Russian regulators on possible compliance pathways, whether Russia accelerates development of domestic AI alternatives, and whether other states adopt similar frameworks. India, Brazil, and Indonesia have all expressed interest in data sovereignty measures that could affect AI platforms.

The broader trajectory points toward continued fragmentation, with artificial intelligence becoming another arena of great power competition and technology access increasingly shaped by geopolitical alignment rather than market forces. Russia's proposed regulations are an escalation, but they follow a logic that is spreading. The era of globally accessible AI platforms may prove shorter than the technology's most optimistic proponents imagined.

FREQUENTLY ASKED QUESTIONS

Which AI tools would be affected by Russia's regulations?
The regulations would affect foreign AI platforms including ChatGPT (OpenAI), Claude (Anthropic), and Gemini (Google) that serve Russian users.
When would the Russian AI regulations take effect?
The regulations are expected to enter force in 2027 after further review, according to the draft published by the digital ministry.
What do the regulations require of AI companies?
AI platforms with more than 500,000 daily Russian users would need to store user data on servers in Russia for at least three years and submit to algorithm audits.
Will VPNs allow access to banned AI tools?
The regulations include provisions for blocking VPNs that facilitate access to restricted AI tools, though Russia's enforcement of VPN bans has historically been inconsistent.
Editor

The Bushletter editorial team. Independent business journalism covering markets, technology, policy, and culture.
