Claude Mythos Is a Disaster... That Could Get You a Good Job
AI destroyed tens of thousands of coding careers. Now the economy-scale threat of Claude Mythos might force companies to bring thousands of people back.
For most of this year, the story has been simple and brutal: AI is eliminating tech jobs, companies are proud of it, workers are cast into the wasteland.
In Q1 alone, companies laid off more than 52,000 U.S. tech workers, according to Challenger, Gray & Christmas, explicitly because they shifted budgets toward AI at the expense of human roles. Block — the company behind Square and Cash App — cut more than 4,000 positions, reducing its workforce to just under 6,000. CEO Jack Dorsey wrote in a letter to shareholders that “a significantly smaller team, using the tools we’re building, can do more and do it better.” Oracle, having posted a 95% jump in net income last quarter, chose to eliminate as many as 30,000 employees anyway — layoffs it framed as a strategic pivot to AI infrastructure, not financial distress.
For the workers on the bad end of unexpected Zoom meetings with their boss and a random person from HR, Goldman Sachs has a cheerless forecast: displaced tech workers take approximately one month longer to find a new job and suffer real earnings losses of more than 3% upon reemployment — because the same technological forces that eliminated their positions also eroded the market value of the skills they already had. Over the ten years following a tech-related job loss, real earnings growth for the displaced lags nearly 10 percentage points behind workers who kept their roles, an effect that Goldman’s economists call “scarring.”
Then, on April 7, the picture changed.
Anthropic announced a new model called Claude Mythos Preview, and put it inside a weird initiative called Project Glasswing. Read one way, the language of the announcement is the calm, responsible language of safety research. But squint at it another way and you realize it’s a lot like how a terrified young scientist breaks it to his boss that the killer robot they built has broken out of the lab:
We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.
In short, Anthropic may have made a product that it thinks could topple whole companies, maybe even whole industries, and it supposedly scared them to death. Over a period of just a few weeks, Anthropic says Mythos identified thousands of zero-day vulnerabilities — previously unknown flaws — across every major operating system and every major web browser. And these were vulnerabilities that had somehow remained in place through decades of human review and millions of automated security tests. Casey Ellis, CTO and founder of Bugcrowd, wrote that AI tools like Mythos have succeeded by “living in the places we stopped looking a decade ago,” and that “AI has taken the knob that used to go to eleven and turned it to seven hundred.”
And Mythos didn’t just find vulnerabilities. In its weird, helpful, nightmare-kaizen way, it weaponized them: identifying multiple vulnerabilities, writing code to exploit them, and chaining those exploits together to penetrate complex software — autonomously, from a single prompt.
Anthropic has not released Mythos to the public, citing the severity of its offensive capabilities. Instead, it gave it to the members of Project Glasswing, a restricted coalition of roughly 40 organizations — including Apple, Google, Microsoft, and the Linux Foundation — to test the model defensively, patching vulnerabilities before anything comparable reaches the hands of malicious actors.
It turns out that the industry which, last month, believed AI could make human expertise unnecessary is discovering, this month, that AI may mean it needs all the human help it can get. The AI industry spent six months telling us that skilled coders were obsolete. Mythos just demonstrated that the infrastructure those coders built, infrastructure that AI is now maintaining, is riddled with critical vulnerabilities that companies will require skilled humans to fix. That means they might need to hire a lot of people back.
This is not something Anthropic seems to feel it can correct, not least because the company says it did not explicitly train Mythos for cybersecurity, offensive or defensive. These disastrous capabilities emerged as a downstream consequence of general improvements in coding, reasoning, and planning. For anyone newly laid off in the industry, that’s a crucial piece of information: it means future models will likely be even more capable of this, and the problem of vulnerable software will be that much more enormous.
Some independent experts have urged caution in interpreting Anthropic’s claims. For one thing, because Mythos is not publicly available, independent researchers cannot audit the findings; the evidence rests entirely on testing by Anthropic and the exclusive club of organizations to which it has granted access. But reputable people who have seen the product are losing their shit. Katie Moussouris, CEO and founder of Luta Security and a pioneer of Microsoft’s bug bounty program, told NBC News that “it’s all very much real.” Which raises another dark parenthetical: a private company now holds what could be a master key for critical vulnerabilities in the software that hospitals, banks, and governments run on — and it is deciding who sees it and when.
But let’s get back to the firing and hiring question. Because for the tens of thousands of developers currently updating their LinkedIn profiles, the practical implication is more immediate: the World Economic Forum had already found that 87% of surveyed organizations now identify AI-related vulnerabilities as the fastest-growing cyber risk they face. Cybersecurity is one of the few areas where AI is expanding mid- and senior-level demand rather than contracting it — specifically in cloud security, AI red-teaming, vulnerability management, and security architecture. These are not niche roles. And they are the jobs that could expand wildly now that Mythos exists. Wiz, a major cloud security firm, estimates it will take roughly 12 to 18 months before AI capabilities comparable to Mythos reach open-source models available without restriction — at which point, the company says, the race between attackers and defenders becomes a crisis-level hiring event. “Since the new models are still not widely available, we have time to create these joint teams across security and engineering,” the company writes. That means mission-critical jobs, right now.
The companies that laid off their engineering teams this year bet that AI could replace human oversight of their systems. What Mythos revealed is that AI also understands those systems well enough to destroy them in autonomous, undiscovered, unimaginable ways. You can’t patch that with a data center.
FURTHER READING:
On the layoffs:
Q1 2026 Tech Layoffs: How AI Is Driving the Biggest Workforce Cuts — Yahoo Finance / Challenger, Gray & Christmas
Jack Dorsey’s Job Cuts Arouse Suspicions of AI-Washing — Bloomberg
Goldman Sachs: Losing Your Job to AI Leaves Lasting Scars — CNN
On Mythos and Project Glasswing:
Project Glasswing: Securing Critical Software for the AI Era — Anthropic
Why Anthropic Won’t Release Its New Mythos AI Model to the Public — NBC News
Claude Mythos and Project Glasswing: Why an AI Superhacker Has the Tech World on Alert — The Conversation
How Cyber Heavyweights in the US and UK Are Dealing with Claude Mythos — CyberScoop
Claude Mythos: Preparing for a World Where AI Finds and Exploits Vulnerabilities Faster Than Ever — Wiz Security