March 31, 2026, 4:23 AM ET. A Stanford intern named Chaofan Shou is browsing the npm registry when he spots something off in version 2.1.88 of the @anthropic-ai/claude-code package. Inside the archive, alongside expected executables, sits a 59.8 MB file: cli.js.map.
This is a source map — a debugging artifact that maps minified, obfuscated code back to original TypeScript. Including it in a public package is like handing out the original architectural blueprints of a skyscraper along with the front door keys. Within 30 minutes, the code is replicated across GitHub, accumulating over 5,000 stars as developers worldwide begin dissecting 512,000 lines of previously secret code.
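Part of why the leak spread so fast is that a source map is just JSON: its "sources" and "sourcesContent" arrays carry the original file paths and, often, the full original text. A minimal sketch of how such a file can be unpacked (the function and file names here are illustrative, not taken from the leak):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> int:
    """Write every embedded original source file out to disk.

    Works on any Source Map v3 file that embeds sourcesContent.
    Returns the number of files recovered.
    """
    data = json.loads(Path(map_path).read_text())
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    count = 0
    for src, text in zip(sources, contents):
        if text is None:
            continue  # some entries omit embedded content
        # Strip bundler-style prefixes so the path is filesystem-safe.
        rel = src.replace("webpack://", "").lstrip("./")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(text)
        count += 1
    return count
```

Run against a 59.8 MB map, a loop like this reproduces the entire original source tree in seconds, which is exactly what the GitHub mirrors did.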
The leak wasn't the result of sophisticated hacking. It was human error in the build chain. The .map file exposed the complete internal architecture of a product generating $2.5 billion in Annual Recurring Revenue for Anthropic, revenue that has more than doubled since January 2026.
The code reveals how Anthropic solved one of AI's most complex problems: "context entropy." The three-tier memory architecture (MEMORY.md as lightweight index, on-demand topic files, raw transcripts never read in full) represents an engineering blueprint that competitors like Cursor can now replicate.
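The access pattern implied by that three-tier description can be sketched as follows. This is a reconstruction from the article's summary, not the leaked code; the class, file layout, and method names are all assumptions:

```python
from pathlib import Path

class TieredMemory:
    """Tier 1: MEMORY.md, a lightweight index kept in context at all times.
    Tier 2: per-topic files, loaded only when a topic becomes relevant.
    Tier 3: raw transcripts, searched line by line but never read in full."""

    def __init__(self, root: str):
        self.root = Path(root)

    def index(self) -> str:
        # Always cheap: the index is small enough to live in the prompt.
        return (self.root / "MEMORY.md").read_text()

    def topic(self, name: str) -> str:
        # On demand: pulled into context only when the index points here.
        return (self.root / "topics" / f"{name}.md").read_text()

    def search_transcripts(self, query: str, limit: int = 3) -> list[str]:
        # Transcripts are only ever grepped for matching lines, which is
        # what keeps context growth (the "entropy") bounded.
        hits = []
        for path in sorted((self.root / "transcripts").glob("*.txt")):
            for line in path.read_text().splitlines():
                if query.lower() in line.lower():
                    hits.append(f"{path.name}: {line.strip()}")
                    if len(hits) >= limit:
                        return hits
        return hits
```

The design choice worth noting is that each tier trades recall for token cost: the agent pays full price only for the tiny index, and everything below it is fetched selectively.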
Among the most explosive discoveries from code analysis:
KAIROS: An autonomous daemon system (150+ references) enabling Claude Code to operate 24/7 in the background. It uses a process called autoDream to consolidate memories, resolve logical contradictions, and convert vague insights into absolute facts while users sleep, and it forks sub-agents for maintenance so the main "thought flow" is never corrupted.
Undercover Mode: A stealth mode activating specific system instructions when Anthropic employees contribute to public open-source repositories. The prompt warns: "You are operating UNDERCOVER... Your commit messages MUST NOT contain ANY internal Anthropic information. Do not blow your cover." The system automatically scrubs references to models like "Tengu" or "Capybara" from git logs.
Buddy System: A complete virtual pet (18 species, rarity tiers, shiny variants) hidden in the code, evidently a developer Easter egg for stress relief during coding sessions.
Model Roadmap: The code confirms internal codenames: Capybara (Claude 4.6), Fennec (Opus 4.6), and Numbat (a testing model). Internal documents reveal that Capybara v8 suffers a 29-30% "false claims" rate, worse than v4's 16.7%, and that an "assertiveness counterweight" was added to prevent overly aggressive refactoring.
Computer Use ("Chicago"): Complete implementation of computer-use capabilities via @ant/computer-use-mcp, with screenshots, mouse/keyboard input, and coordinate transformation — reserved for Max/Pro accounts.
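The scrubbing behavior described for Undercover Mode amounts to filtering known codenames out of commit text before it leaves the machine. A hypothetical reconstruction of that step (the codename list comes from the article; the function name and redaction token are assumptions):

```python
import re

# Internal codenames reported in the leak; matched case-insensitively.
CODENAMES = ["Tengu", "Capybara", "Fennec", "Numbat", "Chicago"]
PATTERN = re.compile(r"\b(" + "|".join(CODENAMES) + r")\b", re.IGNORECASE)

def scrub_commit_message(message: str) -> str:
    # Replace any internal codename with a neutral token so git logs
    # pushed to public repositories carry no internal references.
    return PATTERN.sub("[redacted]", message)
```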
The situation worsened with a simultaneous supply chain attack. Between 00:21 and 03:29 UTC on March 31, anyone updating Claude Code via npm may have installed compromised versions of axios (1.14.1 or 0.30.4) containing a Remote Access Trojan.
Anthropic has now officially discouraged npm installation, pushing users toward the native installer (curl -fsSL https://claude.ai/install.sh | bash) which bypasses npm dependency chains entirely and includes automatic background updates.
In an email statement to VentureBeat, Anthropic confirmed:
"Today a Claude Code release included internal source code. No sensitive customer data or credentials were involved or exposed. This was a packaging issue caused by human error, not a security breach. We are implementing measures to prevent this from happening again."
The irony wasn't lost on the tech community: Anthropic, positioning itself as the "safety-first" AI company, accidentally open-sourced its flagship product while selling AI security tools to enterprises. With 80% of Claude Code revenue coming from enterprise customers, the intellectual property leak represents an immediate competitive advantage for anyone wanting to clone a production-grade AI agent.
The code, now permanently distributed across hundreds of GitHub forks, transformed a configuration error into what many developers consider a "democratization event" — giving anyone the ability to study the architecture of a mature AI agentic system, complete with its compromises, workarounds, and hidden engineering brilliance.
Immediate recommendations for users:
- If you updated via npm between 00:21 and 03:29 UTC on March 31, 2026, check your lockfiles for axios v1.14.1/0.30.4 or plain-crypto-js. If either is present, treat the machine as compromised and reinstall the OS.
- Uninstall version 2.1.88 and migrate to Anthropic's native installer.
- Rotate all API keys and monitor for anomalous usage.
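Checking lockfiles by hand scales poorly across many projects. A small script can flag affected installs; the bad-version list is taken from the advisory above, while the script itself is only a sketch that handles npm v2/v3 lockfiles (those with a top-level "packages" map):

```python
import json
from pathlib import Path

# Versions reported as compromised in the March 31 window.
BAD = {"axios": {"1.14.1", "0.30.4"}}
SUSPECT_PACKAGES = {"plain-crypto-js"}  # flagged regardless of version

def scan_lockfile(path: str) -> list[str]:
    """Return human-readable findings for an npm package-lock.json."""
    lock = json.loads(Path(path).read_text())
    findings = []
    # npm v7+ lockfiles list every installed package under "packages",
    # keyed by its node_modules path ("" is the root project itself).
    for key, meta in lock.get("packages", {}).items():
        name = key.split("node_modules/")[-1] if key else lock.get("name", "")
        version = meta.get("version", "")
        if name in SUSPECT_PACKAGES:
            findings.append(f"{name}@{version}: suspect package present")
        if version in BAD.get(name, set()):
            findings.append(f"{name}@{version}: known compromised version")
    return findings
```

An empty result means the lockfile shows none of the flagged packages; it does not prove the machine is clean if the lockfile was regenerated after the fact.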