
Bottom line up front: Anthropic's next-generation AI model, internally codenamed Mythos, was accidentally leaked via a misconfigured CMS on March 26, 2026. Anthropic confirmed it's real and calls it a "step change." Their own documents warn it's "far ahead of any other AI model in cyber capabilities." Here's what that actually means for defenders, security teams, and regular users, and what you should do right now.
Anthropic didn't announce Claude Mythos. A misconfigured content management system did.
On March 26, 2026, Fortune reporter Beatrice Nolan identified nearly 3,000 unpublished Anthropic assets sitting in a publicly searchable data cache. Roy Paz, a principal security researcher at LayerX Security, and Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge, independently verified the documents at Fortune's request. Inside: a draft blog post for a model Anthropic calls "by far the most powerful AI model we've ever developed."
Anthropic confirmed it was real. They called it "human error in the configuration of its content management system." Then they quietly restricted access to the cache.
The model is called Claude Mythos. The tier it sits in is called Capybara. This article breaks down what we know, what it means for security professionals, and what it means if you're a regular person trying to keep your accounts safe.
By the numbers:
- -3%: drop in Okta, SentinelOne, and Fortinet shares after the leak
- 3,000: files exposed in the data cache
- 4th: new model tier (Capybara), above Opus
1. What Is Claude Mythos?
Claude Mythos is Anthropic's next-generation AI model, currently in limited testing with early access customers. Anthropic's spokesperson confirmed the model in a statement to Fortune: "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, we're being deliberate about how we release it."
According to the leaked draft, Mythos scores "dramatically higher" than Claude Opus 4.6 on benchmarks for software coding, academic reasoning, and cybersecurity. Anthropic described it as a "step change," not an incremental upgrade. The name itself, according to leaked materials, was chosen to evoke "the deep connective tissue that links knowledge and ideas together."
2. How Did the Leak Happen?
This is where the story gets uncomfortable for Anthropic.
The company building a model its own leaked documents call "far ahead of any other AI model in cyber capabilities" was exposed by a basic configuration error. Digital assets uploaded to Anthropic's content management system were public by default unless someone explicitly marked them private. Nobody did. Roughly 3,000 files, including draft blog posts, images, PDFs, and audio files, became publicly accessible and searchable with no authentication required.
Beatrice Nolan at Fortune found the exposed data. Two independent researchers verified it. Anthropic was notified and only then restricted access.
No nation-state actor. No zero-day exploit. No advanced persistent threat. A default setting that nobody changed. Anthropic called it "human error." That's accurate. It's also the most common cause of data breaches across every industry, every year. The Verizon Data Breach Investigations Report has flagged misconfiguration as a leading breach cause for years running.
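The failure mode is simple enough to state in code. Here's a minimal, hypothetical sketch (the class and field names are illustrative, not Anthropic's actual CMS) of the difference between public-by-default and private-by-default asset handling:

```python
from dataclasses import dataclass

# Hypothetical model of a CMS asset. The incident hinged on the
# default value of a single visibility flag.
@dataclass
class Asset:
    name: str
    public: bool = True  # public-by-default: the dangerous setting

@dataclass
class SafeAsset:
    name: str
    public: bool = False  # private-by-default: exposure requires an explicit opt-in

def exposed(assets):
    """Return the names of assets reachable without authentication."""
    return [a.name for a in assets if a.public]

# Nobody touches the flag, so every upload inherits the default.
uploads = [Asset("draft-blog-post.pdf"), Asset("benchmark-chart.png")]
safe_uploads = [SafeAsset("draft-blog-post.pdf"), SafeAsset("benchmark-chart.png")]

print(exposed(uploads))       # every file is world-readable
print(exposed(safe_uploads))  # nothing leaks unless someone opts in
```

The fix costs one default value. That asymmetry, between how cheap the mitigation is and how expensive the failure is, is the whole lesson of the incident.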
11. FAQ
What is Claude Mythos?
Claude Mythos is Anthropic's most advanced AI model to date, leaked accidentally in March 2026 through a misconfigured content management system. Anthropic confirmed it is real and currently in limited testing. It sits above the existing Opus tier in a new category called Capybara.
Is Claude Mythos available to the public?
Not yet. Anthropic is running a controlled early access program currently prioritizing cyber defense organizations. Broader availability depends on safety evaluation and cost optimization during testing.
Why did cybersecurity stocks drop after the Mythos leak?
Investors priced in the risk that Mythos-level offensive capability could reduce the effectiveness of existing security tools. CrowdStrike fell 7%, Palo Alto Networks 6%, Zscaler 4.5%, and Okta, SentinelOne, and Fortinet each around 3%.
How did the Mythos leak happen?
A configuration error in Anthropic's content management system left approximately 3,000 unpublished assets publicly accessible by default. Fortune reporter Beatrice Nolan identified the exposed data. Roy Paz at LayerX Security and Alexandre Pauwels at the University of Cambridge independently verified the documents. Anthropic restricted access after Fortune contacted them.
Does Claude Mythos affect my personal cybersecurity?
Indirectly, yes. Mythos-level capabilities make organizational breaches faster and cheaper to execute, increasing the likelihood that your credentials appear in leaked databases. Using unique passwords for every account and enabling two-factor authentication are the most effective protective steps.
What is the Capybara tier?
Capybara is the name for the new model tier Anthropic is introducing with Mythos. It sits above Opus, Sonnet, and Haiku in capability and cost, described in leaked documents as "larger and more intelligent than our Opus models, which were, until now, our most powerful."
Should I be worried about AI hacking my accounts directly?
The more realistic near-term risk is AI-assisted attacks making organizational breaches more frequent. Those breaches expose your credentials, which are then tested against other services. Unique passwords per account eliminate this cascade risk entirely.
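The cascade is mechanical, which is why the fix is too. A small simulation (service names and credentials are invented for illustration) of credential stuffing after a single-site breach:

```python
# One site is breached; the attacker replays every leaked email/password
# pair against other services. Password reuse turns one breach into many.
breached_site = {"alice@example.com": "hunter2", "bob@example.com": "p@ssw0rd"}

# Hypothetical accounts on two unrelated services.
other_services = {
    "webmail": {"alice@example.com": "hunter2",    # reused -> falls to stuffing
                "bob@example.com": "x9!kQ#74mz"},  # unique -> survives
    "banking": {"alice@example.com": "hunter2",    # reused again
                "bob@example.com": "T4&wq!0pLe"},
}

def stuffing_victims(leak, services):
    """Accounts on other services that open with the leaked credentials."""
    hits = []
    for service, accounts in services.items():
        for email, leaked_pw in leak.items():
            if accounts.get(email) == leaked_pw:
                hits.append((service, email))
    return hits

print(stuffing_victims(breached_site, other_services))
```

Alice, who reused one password everywhere, loses every account. Bob, with unique passwords, loses only the account on the breached site. No AI model, Mythos included, changes that arithmetic.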
What is the "Floor vs. Ceiling" framework?
A framework for understanding where your actual risk sits. Mythos raises the ceiling of what sophisticated attackers can do. But the floor (basic misconfigurations, reused passwords, unreviewed service accounts) didn't move. Mythos finds existing gaps faster. It doesn't create them.