At midnight on March 31, an Anthropic engineer pushed a routine update to npm.
They included one extra file. A source map. 59.8 megabytes. The kind of file that maps minified, obfuscated production code back to its readable original source.
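For readers unfamiliar with the format: a source map is just a JSON file, and when generated with default settings it carries the original source files verbatim in a `sourcesContent` array. A minimal illustration of why holding the .map file means holding the source (the map below is invented, not the leaked one):

```typescript
// A source map embeds the original files in "sourcesContent".
// Anyone holding the .map file can simply read them back out.
const map = JSON.parse(`{
  "version": 3,
  "sources": ["src/cli.ts"],
  "sourcesContent": ["export function greet(name: string) { return 'hi ' + name; }"],
  "mappings": "AAAA"
}`);

for (let i = 0; i < map.sources.length; i++) {
  console.log("--- " + map.sources[i] + " ---");
  console.log(map.sourcesContent[i]); // the original TypeScript, recovered
}
```

No deobfuscation, no reverse engineering. Reading the leak back out is a for loop.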
Within three hours, researcher Chaofan Shou spotted it and posted on X. Within six hours, someone had backed up all 512,000 lines of TypeScript to GitHub. Within twelve, that repository had been forked 41,500 times.
His post: "ENTIRE CLAUDE CODE SOURCE CODE LEAKED."
Anthropic pulled the package. By then it did not matter. The copies were already everywhere.
This is the second time in five days that Anthropic has accidentally published something they did not mean to publish. Last week it was internal documents revealing their most powerful unreleased AI model. This week it is the complete source code for Claude Code, the AI coding assistant they have been selling to developers.
The irony writes itself. The product designed to help you write better, more secure code just leaked its own source code to the internet.
What They Were Hiding
The most interesting thing in the leak is not what Anthropic has shipped. It is what they have already built and are choosing not to release yet.
The source contains 44 feature flags. These are switches that turn functionality on or off when the software ships. In the public version, certain flags are set to false. In the internal version, those same flags are set to true. The code behind them is not experimental. It is complete, compiled, and running. It just is not available to you.
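The mechanism itself is mundane. Here is a hedged sketch of what flag gating typically looks like in TypeScript; the flag names and structure below are invented for illustration, not lifted from the leak:

```typescript
// Hypothetical flag table: in the public build these ship as false,
// but the code paths behind them are compiled in and fully functional.
const FLAGS: Record<string, boolean> = {
  backgroundAgents: false,
  coordinatorMode: false,
  voiceMode: false,
};

function isEnabled(flag: string): boolean {
  return FLAGS[flag] === true;
}

function startSession(): string {
  if (isEnabled("backgroundAgents")) {
    return "launching background agent"; // reachable only once the flag flips
  }
  return "background agents disabled";
}

console.log(startSession());
```

Flipping a single boolean exposes the finished feature, which is why the leaked flags read less like a to-do list and more like a release schedule.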
Here is what is already built:
Background agents. A feature codenamed Kairos allows Claude to run autonomously without you present. It monitors your GitHub repositories, tracks pull requests, and sends push notifications to your phone. You can query it from anywhere. The agent keeps working while you sleep and tells you what it found when you wake up.
Multi-agent coordination. A system called Coordinator Mode lets one Claude orchestrate a team of worker Claudes. Each worker has a restricted set of tools and its own scratch space. The orchestrator assigns tasks, workers execute them, results come back up the chain. This is not one AI helping you code. It is a small team of AIs working together.
Scheduled tasks. Agent Triggers gives Claude a calendar. You can create cron jobs, schedule recurring tasks, set up external webhooks. The same way a server runs scripts at midnight, Claude can run tasks at midnight.
Voice mode. A full push-to-talk voice interface using Deepgram Nova 3 for speech recognition. It has its own CLI entrypoint. The internal codename is tengu_cobalt_frost, with a kill switch called tengu_amber_quartz. They could not use their own domain for it, for reasons the code does not explain.
Real browser control. Not web scraping. Not fetching URLs. A full Playwright integration that opens an actual browser and controls it the way a human would. Already built. Not shipped.
Persistent memory. Memory that survives across sessions without external storage. You will not have to re-explain your project every time.
Self-resuming agents. Agents that can pause themselves and wake back up without any user prompt. The foundation for genuinely autonomous long-running work.
There are also 18 hidden slash commands sitting as disabled stubs in the code: /bughunter, /teleport, /autofix-pr, and fifteen more. The commands exist. The functionality is coming.
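A disabled stub is even simpler than a flag: the command is registered, but its handler refuses to run. A sketch of the shape, with only the command names taken from the leak; the dispatch code and placeholder handlers are invented:

```typescript
interface SlashCommand {
  name: string;
  enabled: boolean;
  run: () => string;
}

// Three of the 18 stubbed commands; handlers here are placeholders.
const commands: SlashCommand[] = [
  { name: "/bughunter", enabled: false, run: () => "scanning for bugs" },
  { name: "/teleport", enabled: false, run: () => "teleporting" },
  { name: "/autofix-pr", enabled: false, run: () => "fixing PR" },
];

function dispatch(input: string): string {
  const cmd = commands.find((c) => c.name === input);
  if (!cmd) return "unknown command";
  if (!cmd.enabled) return cmd.name + " is not available yet";
  return cmd.run();
}

console.log(dispatch("/bughunter"));
```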
The pattern is clear. Anthropic is not slowly building these features. They have already built them. They are releasing one thing every two weeks because everything is ready and they are choosing the pace.
The Part Nobody Expected
Buried inside the codebase, between all the production code and hidden features, is a complete virtual pet system.
Type /buddy and Claude Code hatches a unique ASCII companion based on your user ID. There are 18 species: duck, capybara, dragon, ghost, axolotl, and something called "chonk." The system has a full gacha rarity structure ranging from common to legendary, with a one percent chance of getting a legendary drop. There are shiny variants. There are hats: crown, wizard, propeller, and tinyduck. Your pet has stats. The stats are DEBUGGING, CHAOS, and SNARK. It sits beside your input box and reacts while you code.
This drops April 1st. The salt used to generate pets is the string "friend-2026-401."
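Hatching a pet deterministically from a user ID and a salt is a standard trick, and a rough sketch of it looks like this. To be clear about what is real here: the salt string, the one percent legendary odds, and the species names come from the leak; the hash function and table lookup are my assumptions, not Anthropic's code:

```typescript
// Six of the 18 species, per the leak.
const SPECIES = ["duck", "capybara", "dragon", "ghost", "axolotl", "chonk"];
const SALT = "friend-2026-401"; // the salt string found in the source

// Tiny non-cryptographic hash (FNV-1a), a stand-in for whatever
// Claude Code actually uses.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

function hatch(userId: string): { species: string; legendary: boolean } {
  const h = fnv1a(userId + SALT);
  return {
    species: SPECIES[h % SPECIES.length],
    legendary: h % 100 === 0, // one percent odds
  };
}
```

The salt is what makes every user's pet stable across sessions: same ID in, same companion out.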
Here is the part that made the Reddit thread go viral: one of the 18 species names collides with an internal Anthropic model codename. Their build scanner would have flagged it. Their solution was to encode all 18 species names in hexadecimal.
export const duck = String.fromCharCode(0x64,0x75,0x63,0x6b)
That is the word duck. They hex-encoded duck. To hide a virtual pet from their own security tooling.
The Other Thing the Leak Reveals
The source code is also an unusually honest look at what it is like to build production software at a company valued at $380 billion.
One file is 803,924 bytes. Almost one megabyte of TypeScript, single file, 4,683 lines. Their print utility is 5,594 lines. The message handler is 5,512 lines. Six files exceed 4,000 lines each.
There are 460 comments that say eslint-disable, which is the code equivalent of posting a "no rules" sign over your workspace. There are more than 50 functions with DEPRECATED in their name that are still actively called in production. The function that saves your login credentials to disk is called writeFileSyncAndFlushDEPRECATED().
The comments left in the codebase are worth reading on their own:
"TODO: figure out why." This is in the error handler.
"Not sure how this became a string." Followed immediately by "TODO: Fix upstream." The upstream is their own code.
"This fails an e2e test if the ?. is not present. This is likely a bug in the e2e test." They kept the fix anyway.
An engineer named Ollie left this in production: "TODO (ollie): The memoization here increases complexity by a lot, and im not sure it really improves performance."
There are nine empty catch blocks in config.ts, which is the file responsible for managing your authentication. When something goes wrong with your login, there are nine places where the code catches the error and does nothing.
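To be concrete about what an empty catch block does, here is the antipattern next to a version that at least leaves a trace. Both functions are illustrative, not code from the leak:

```typescript
// The antipattern: the error is swallowed and nothing records it.
function readConfigSilently(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch {} // failure vanishes here
  return {};
}

// The same logic with the minimum acceptable handling.
function readConfigLoudly(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch (err) {
    console.error("config parse failed:", err);
    return {};
  }
}
```

When the swallowed error sits in an authentication path, the user sees nothing except that they are mysteriously logged out.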
The authentication file also contains a function called wouldLoseAuthState(). This was added after a confirmed bug, GitHub issue 3117, where saving config settings wiped your login credentials.
None of this is unusual. Every large codebase looks like this up close. The reason it resonates is that we do not usually get to see inside. The leak pulled back the curtain on a $380 billion company's production code and found exactly what developers already suspected: the same chaos they deal with every day, just at larger scale.
What This Means
Anthropic confirmed the leak was real. No customer data was exposed. No model weights, no API credentials, no proprietary training secrets. Just the CLI tool's frontend code.
But the roadmap is now public. Competitors have the same document Anthropic's own team uses. Every AI company building coding tools now knows exactly what is coming next from the market leader.
The more interesting question is what Anthropic does with the acceleration. The features are built. The reason they were hidden was pacing, not capability. Now that the pacing strategy has been exposed, the argument for holding features back is weaker. The community knows what exists. The pressure to ship it is higher.
This is the second time in five days that Anthropic has accidentally shown the world something they were not ready to show. The first was their most powerful AI model. The second was their product roadmap.
Whether this is negligence, a pattern, or something else entirely, the result is the same: we know more about what Anthropic is building than they intended us to know. And what they are building is more interesting than what they have released.
Signing off,
Wes “this is all unfortunately very real” Roth

