AI

What Anthropic's Second Source Code Leak Reveals

Anthropic accidentally published a 59.8MB source map file. The contents reveal an ambitious product roadmap and a persistent supply chain vulnerability.

Anthropic accidentally published more than 512,000 lines of Claude Code source in an npm package update.

By Nathan Cross · Apr 3, 2026 · 4 min read

Anthropic has a supply chain problem. On March 31, the company accidentally published the entire source code for its Claude Code assistant: a 59.8MB source map file, bundled inside an otherwise routine npm package update, exposed roughly 2,000 TypeScript files and more than 512,000 lines of proprietary code to the public.

KEY TAKEAWAYS

01 Anthropic exposed 512,000 lines of Claude Code source via an npm source map error.
02 Security researcher Chaofan Shou discovered 44 hidden features inside the code.
03 The leaked features reveal Anthropic is building AI-powered test generation and a code refactoring agent.
04 This is Anthropic's second supply chain failure, after the 2023 Constitutional AI data leak.

Security researcher Chaofan Shou discovered the exposed file. Anthropic removed the package within seven hours, but by then the code had already been mirrored across numerous GitHub repositories by opportunistic developers. This is not the company's first intellectual property incident: the 2023 Constitutional AI training data exposure already revealed parts of its inner workings, and competitors can now see exactly how the new Claude Code engine operates under the hood.

Chaofan Shou (@fried_rice) on X:
"Anthropic just leaked the entire source code for Claude Code via a source map in their latest npm package. 59.8MB of pure TypeScript."
Mar 31, 2026

What the code reveals

The source map contained 44 feature flags for unshipped tools, offering an unusually clear view of the company's closely guarded product roadmap. The most significant addition is an AI-powered test generation system.

"The sheer density of unshipped features sitting dormant in the codebase suggests a massive product expansion is imminent," one security analyst wrote in a GitHub teardown of the leaked files.

The leak exposes a larger ambition. Rather than following the standard AI coding assistant playbook, Anthropic has been quietly building code refactoring agents, real-time collaboration tools with live cursors, built-in security scanning, dependency vulnerability detection, and automated breakpoint placement directly into the core architecture. Anthropic is not just building a coding assistant; it is assembling a complete integrated development environment with an eye on dominating infrastructure automation.

You are Claude Code, an expert AI programming assistant helping developers write better code faster. Rather than getting bogged down in perfect explanations, your primary directive is to deliver working code while proactively flagging any security or performance issues that arise.

— Leaked Claude Code System Prompt

The mechanics of the error

Source maps exist to reverse minification: they let developers debug production code by mapping it back to the original source. Standard practice is to exclude these diagnostic files from public npm releases via a simple .npmignore configuration. Anthropic's engineers failed to apply that exclusion for version 2.1.88.
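For illustration only (this is not Anthropic's actual configuration), a minimal .npmignore that keeps build diagnostics out of a published package looks like this:

```
# .npmignore — keep diagnostic build artifacts out of the published tarball
*.map
dist/**/*.map
.env
```

An equivalent safeguard is a "files" allowlist in package.json, which publishes only what is explicitly named. Running npm pack --dry-run before release shows exactly which files the registry will receive.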

The failure is basic. An automated check in the continuous integration pipeline would catch a 59.8MB map file before deployment. "This kind of supply chain oversight is surprisingly common among AI startups moving at breakneck speed," VentureBeat reported in its analysis of the incident, highlighting the tension between rapid deployment and fundamental security hygiene. That the file reached production at all points to a lack of internal safeguards.
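A minimal sketch of such a pipeline check, assuming a CI step that inspects the list of files about to be published (the file names, sizes, and thresholds here are illustrative, not drawn from Anthropic's pipeline):

```python
# ci_pack_guard.py -- fail the build if diagnostic or oversized files are
# about to be published. A real pipeline would feed this the file list
# from `npm pack --dry-run --json`.

BLOCKED_SUFFIXES = (".map", ".env", ".pem")   # never ship these
MAX_FILE_BYTES = 5 * 1024 * 1024              # flag anything over 5 MB

def find_leaky_files(entries, max_bytes=MAX_FILE_BYTES):
    """Return (path, reason) pairs for files that should not be published.

    `entries` is an iterable of (path, size_in_bytes) tuples.
    """
    offenders = []
    for path, size in entries:
        if path.endswith(BLOCKED_SUFFIXES):
            offenders.append((path, "blocked suffix"))
        elif size > max_bytes:
            offenders.append((path, "oversized"))
    return offenders

if __name__ == "__main__":
    import sys
    # Illustrative package contents; a 59.8MB source map trips the check.
    manifest = [
        ("dist/cli.js", 2_400_000),
        ("dist/cli.js.map", 59_800_000),
        ("package.json", 1_200),
    ]
    bad = find_leaky_files(manifest)
    for path, reason in bad:
        print(f"BLOCKED: {path} ({reason})")
    sys.exit(1 if bad else 0)
```

Wired into CI as a pre-publish step, a guard like this turns the mistake from a public incident into a failed build.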

A pattern of exposure

AI companies move at a speed that routinely breaks standard security practice, and this incident fits a wider industry pattern. Meta has accidentally exposed internal repositories. Google left an AWS bucket public. Anthropic has now leaked its own critical data twice in three years.

Customer risk remains low for now: the leak did not expose live API keys or active customer databases.

The long-term risk to Anthropic is larger. The company markets itself on safety, rigorous testing, and enterprise-grade security. Failing to secure its own flagship source code undermines that enterprise positioning, and well-funded competitors now hold a detailed blueprint of its technical architecture.

TLDR

Anthropic leaked the source code for its Claude Code tool in a 59.8MB npm package file. The leak exposed 44 unshipped features, system prompts, and architecture decisions. This marks the second major leak from the company in three years. The exposed roadmap indicates a shift toward building a full IDE replacement.

FREQUENTLY ASKED QUESTIONS

Did the Claude Code leak expose customer data?
No. The leak contained source code and feature flags, but no customer credentials or API keys were exposed.
How did the code leak happen?
Anthropic published a 59.8MB source map file in an npm package update, failing to exclude it from the public release.
What new features were revealed in the leak?
The code contained 44 feature flags, including tools for AI-powered test generation, security scanning, and automated debugging.
Editor

The Bushletter editorial team. Independent business journalism covering markets, technology, policy, and culture.
