
Anthropic's Claude Code Source Leak Reveals Unreleased 'Mythos' Model

March 27, 2026 · 12 min read

More than 512,000 lines of Anthropic's Claude Code CLI source code were exposed through an unsecured map file, revealing details of an unreleased model called 'Mythos', a Tamagotchi-style digital pet feature, and an always-on agent mode.

Executive Summary

  • Massive code exposure: 512,000+ lines of Claude Code CLI source code were publicly accessible through an exposed source map file.
  • Unreleased model revealed: The leak disclosed a next-generation model codenamed "Mythos" that Anthropic has not officially announced.
  • Experimental features: Code references include a Tamagotchi-style digital pet, an always-on background agent, and other unreleased capabilities.
  • Security implications: The incident raises questions about Anthropic's internal security practices despite the company's focus on AI safety.

What was exposed

The leak occurred through an unsecured source map file that allowed anyone to reconstruct the full source code of Anthropic's Claude Code command-line tool. Source maps are normally used for debugging and should not be publicly accessible in production deployments.
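Source maps are JSON files that can embed the original source verbatim in a `sourcesContent` field, which is why a single exposed `.map` file can amount to a full code leak. A minimal sketch of the recovery step (the file names and contents below are hypothetical, not from the actual leak):

```python
import json

# A minimal, hypothetical source map of the kind a JavaScript bundler emits.
# Real maps for a large CLI can embed hundreds of thousands of lines.
source_map = {
    "version": 3,
    "file": "cli.min.js",
    "sources": ["src/agent.ts", "src/pet.ts"],
    "sourcesContent": [
        "export function runAgent() { /* full original source */ }",
        "export function feedPet() { /* full original source */ }",
    ],
    "mappings": "AAAA",
}

def recover_sources(map_data: dict) -> dict:
    """Pair each original file path with its embedded source text."""
    return dict(zip(map_data["sources"], map_data.get("sourcesContent") or []))

recovered = recover_sources(source_map)
for path, code in recovered.items():
    print(f"{path}: {len(code)} chars recovered")
```

No deobfuscation is needed: anyone who can fetch the map file can read the original files directly, complete with their real paths and comments.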

Beyond the code itself, the exposed data included references to an invite-only CEO event, internal planning documents, and details about the upcoming "Mythos" model. Fortune first reported on the unsecured data store.

The 'Mythos' model

The most significant revelation is the existence of a model codenamed "Mythos." While no technical specifications appear in the leaked code, the references suggest it is Anthropic's next major model release after the current Claude family, with the name and context pointing toward more autonomous, extended reasoning capabilities.

Unreleased features

Among the more unexpected discoveries: references to a Tamagotchi-style digital pet that lives inside the coding environment, and an always-on agent mode that would let Claude Code run continuously in the background rather than just responding to individual prompts. These suggest Anthropic is exploring more persistent, ambient forms of AI interaction.

The security irony

Anthropic has built much of its reputation on AI safety research and responsible development. An unsecured production data store that exposes internal secrets is an uncomfortable contradiction. The company has not yet publicly commented on the full extent of the exposure.

What this means for the AI landscape

For developers and competitors, the leaked code provides a rare inside look at how a leading AI company structures its tools. For everyday users, the key takeaway is that the next generation of AI assistants may be designed to run continuously and integrate more deeply into workflows, not just respond to one-off questions.

What good incident response should look like

In incidents like this, stakeholders usually evaluate four elements: speed of containment, transparency of scope, user-facing remediation, and long-term control changes. Containment answers whether access is closed. Scope clarifies what was exposed and for how long. Remediation addresses potential downstream abuse. Control changes address how build and release processes are hardened so the same failure cannot recur.

For AI companies handling sensitive enterprise workflows, communication quality is almost as important as technical cleanup. Customers need concrete timelines, artifact-level detail, and clear statements about whether prompts, metadata, or credentials were potentially affected.

Why source maps are a recurring risk

Source maps are a common developer convenience and a common operational footgun. They simplify debugging but can unintentionally expose implementation details when published broadly. Mature deployment pipelines typically gate source map exposure by environment and require explicit production allowlists.
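One common gate, sketched here as a hypothetical pre-deploy check rather than any specific vendor's tooling, is to refuse a production release while source maps remain in the build output:

```python
import os
import sys
from pathlib import Path

def find_source_maps(build_dir: str) -> list:
    """Return every .map file under the build output directory."""
    return sorted(Path(build_dir).rglob("*.map"))

def check_release(build_dir: str, environment: str) -> int:
    """Return non-zero if a production build would ship source maps."""
    maps = find_source_maps(build_dir)
    if environment == "production" and maps:
        for m in maps:
            print(f"refusing to deploy: source map present: {m}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    # DEPLOY_ENV and the "dist" directory are illustrative defaults.
    env = os.environ.get("DEPLOY_ENV", "production")
    sys.exit(check_release("dist", env))
```

Running a check like this in CI makes source map exposure an explicit, reviewable decision instead of a silent default.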

This incident reinforces a broader lesson: as AI tools move faster, security hygiene in build and release systems becomes a competitive advantage, not just a compliance checkbox.

Competitive consequences of the leak

Even if no direct user data exposure occurred, strategic leakage can still shift market dynamics. Competitors gain signal on roadmap direction, user experience bets, and possible model positioning. That can accelerate imitation, influence pricing strategy, and compress time-to-market for rival offerings.

For users and teams choosing AI tooling, this is a reminder to evaluate provider security posture alongside model quality. Reliability, governance, and incident maturity increasingly determine whether an AI platform is safe for core workflows.