On March 31, 2026, the AI world received an unexpected “April Fools’ Eve gift”: the full source code of Anthropic’s star product, Claude Code, accidentally made its way onto the internet because of a packaging mistake.

This was not a hack. It was not an insider leak. Someone simply forgot to add *.map to .npmignore.

Just like that, 510,000 lines of TypeScript, 44 hidden feature flags, and a mysterious background autonomous agent called KAIROS were exposed to everyone within hours.

What Happened: An Avalanche Started by a .map File

How Did the Leak Happen?

On March 31, 2026, Anthropic published version 2.1.88 of @anthropic-ai/claude-code on npm. The update was supposed to be routine maintenance, but it shipped with a huge extra: a 59.8 MB JavaScript source map (.map) file.

Source maps are debugging tools that map compiled or minified code back to the original TypeScript source. This internal debugging file was accidentally bundled into the public npm package.

More importantly, the .map file pointed to a ZIP archive hosted on Anthropic’s own cloud storage, containing the complete source repository. Anyone who followed the clue could download the full codebase.

Root cause: the *.map rule was missing from .npmignore, so the source map was published along with the package.
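This failure mode is easy to catch before publishing. A minimal sketch of the check and the fix (the commands are generic npm tooling, not taken from Anthropic's actual release pipeline):

```shell
# Preview exactly which files `npm publish` would ship, without
# publishing anything; a stray .map file shows up in this listing.
npm pack --dry-run 2>/dev/null || echo "run inside a package directory"

# The missing denylist rule: one line in .npmignore excludes source maps.
echo '*.map' >> .npmignore

# Safer still is an allowlist: the "files" field in package.json ships
# only what is explicitly named, so forgotten build artifacts stay out.
```

An allowlist fails closed: an artifact type nobody thought to exclude is simply never published.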

How Fast Did It Spread?

Within hours of publication, the developer community noticed the code and backed it up on GitHub. According to Layer5, related repositories quickly exceeded 41,500 forks, at one point becoming one of the fastest-growing repositories in GitHub history.

Anthropic confirmed the incident soon afterward, but said it was only a packaging mistake and that no user data or credentials had been leaked.

What Was Inside: Secrets Hidden in 510,000 Lines of Code

The source contained 1,906 TypeScript files. Developers and researchers across the AI world began digging through it like archaeologists, looking for things Anthropic had never publicly disclosed.

Secret One: KAIROS, an Always-On Autonomous AI Agent

This was one of the most discussed discoveries. A feature flag named KAIROS appeared more than 150 times in the code.

KAIROS comes from ancient Greek and means “the right moment.” Judging from the code logic, it represents a major shift in Claude Code’s product direction: from passive response to an active background autonomous daemon.

More specifically, KAIROS mode includes a sub-mechanism called autoDream. When the user is idle, Claude can automatically perform “memory consolidation” in the background, merging scattered observations, resolving logical conflicts, and turning vague impressions into concrete knowledge.

In essence, this gives the AI a form of continuous learning and self-optimization. It does not operate only when you actively use it; it runs all the time.

Secret Two: BUDDY, a Cyber Tamagotchi

Yes, you read that correctly. Hidden in the code was a complete digital pet system called BUDDY.

It included:

  • 18 species to choose from;
  • rarity tiers: common (60%) → rare → epic → legendary (1%);
  • shiny variants, similar to shiny Pokémon;
  • unique stats, including DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK.

According to code comments, BUDDY was originally planned as an April Fools’ Easter egg, with a quiet teaser on April 1 and a formal release in May. Instead, the March 31 leak spoiled it early. The timing was almost too ironic.

Secret Three: Stealth Mode, a Mask for Anthropic Employees

The code also revealed a feature called “Stealth Mode,” designed to hide Anthropic employees’ contributions to open-source projects.

In simple terms, when Anthropic engineers used Claude Code to contribute to open-source projects, this mode concealed their identity as Anthropic employees from the outside world. The discovery sparked discussion and controversy in the open-source community.

Secret Four: Anti-Distillation, “Poison” for Competitors

The ANTI_DISTILLATION_CC feature flag revealed a more aggressive strategy: injecting fake tool definitions into API requests.

The goal was to pollute the training data of competitors that monitor API traffic and try to learn or copy Claude’s capabilities through knowledge distillation. The code also summarized the AI’s reasoning process and attached encrypted signatures, so eavesdroppers could obtain only the summary rather than the full chain-of-thought output.

Security Warning: Attackers Moving in During the Chaos

The accidental leak itself had limited direct harm, but the security risks that followed were serious.

Axios npm Poisoning: A Precise Strike by North Korean Hackers

On the same day, March 31, attackers compromised the npm account of the popular HTTP library Axios and published two malicious versions, 1.14.1 and 0.30.4. These versions introduced a cross-platform remote access trojan (RAT) through a hidden dependency named plain-crypto-js.

The malicious versions remained live for about two to three hours before npm removed them.

Google’s threat intelligence team attributed the attack to UNC1069, a financially motivated threat actor with North Korean links. The malware used was WAVESHAPER.V2.

High-risk time window: if you installed or updated Claude Code through npm between 00:21 UTC and 03:29 UTC on March 31, your machine may have been infected with malicious code.

Fake Repositories on GitHub

In addition to npm poisoning, threat actors also spread malicious repositories on GitHub disguised as “Claude Code source mirrors.” These repositories tricked users into running a Rust-based dropper, which then deployed Vidar Stealer and GhostSocks proxy malware.

Emergency Self-Check Steps

If you are a developer, run these checks immediately:

  1. Check your project’s lockfile (package-lock.json or yarn.lock) for Axios versions 1.14.1 or 0.30.4.
  2. Check whether the plain-crypto-js dependency is present.
  3. If either is found, treat the host as fully compromised: immediately rotate all keys and credentials, and consider reinstalling the operating system.
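The lockfile checks above can be wrapped in a small POSIX shell helper. A sketch (the function name scan_locks and the grep patterns are my own; a bare version-string match can false-positive on an unrelated package pinned at the same number, so treat a hit as a signal to inspect, not as proof):

```shell
# scan_locks: grep the lockfiles in the current directory for the two
# malicious axios releases and the plain-crypto-js trojan dependency.
# Prints a WARNING per hit; returns 1 if anything suspicious was found.
scan_locks() {
  hit=0
  for lock in package-lock.json yarn.lock; do
    [ -f "$lock" ] || continue
    if grep -q 'plain-crypto-js' "$lock"; then
      echo "WARNING: plain-crypto-js found in $lock"
      hit=1
    fi
    if grep -q 'axios' "$lock" && grep -qE '1\.14\.1|0\.30\.4' "$lock"; then
      echo "WARNING: a malicious axios version may be pinned in $lock"
      hit=1
    fi
  done
  return "$hit"
}
```

Run it from the project root; a non-zero exit status means at least one warning fired.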

Impact and Reflection

Impact on Anthropic

The direct loss for Anthropic was competitive intelligence exposure. Unreleased strategic features such as KAIROS and the anti-distillation mechanism were seen by competitors and researchers ahead of time.

From another angle, however, Anthropic’s response—acknowledging the issue quickly and explaining the cause plainly—helped preserve some degree of public trust.

A Warning for Supply Chain Security

This incident is a classic case study in software supply chain security:

  • On the development side: one forgotten packaging rule (.npmignore) can cause a serious information leak.
  • On the attacker side: hackers can exploit a high-profile event with astonishing speed, even on the same day.
  • On the user side: during a major incident, any download outside official channels is extremely dangerous.

It reminds every team maintaining an open-source project that release process audits and automated checks are just as important as code quality itself.
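As one example of such an automated check, a pre-publish CI gate can inspect the packed tarball and refuse to release if any source map is inside. A sketch (check_tarball is a hypothetical helper name, not an existing npm feature):

```shell
# check_tarball: refuse to publish if a packed npm tarball contains any
# .map file. Intended as a CI gate between `npm pack` and `npm publish`.
check_tarball() {
  if tar -tzf "$1" | grep -q '\.map$'; then
    echo "refusing to publish: source map found inside $1" >&2
    return 1
  fi
  echo "ok: no source maps in $1"
}
```

In a pipeline this would run as `check_tarball "$(npm pack | tail -n 1)"` immediately before the publish step.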

Conclusion

The Claude Code source leak was one of the most dramatic AI incidents of 2026. In an unexpected way, it gave the public a glimpse into the engineering logic and product ambitions behind a top-tier AI tool, from KAIROS, which points toward a new human-machine interaction paradigm, to BUDDY, a delightfully geeky digital pet.

Yet alongside this accidental transparency came the very real threat of supply chain attacks. The incident once again reminds us that in an era where software depends heavily on the open-source ecosystem, security is never only about code. It is also about release processes, dependency management, and emergency response.


Further reading: