Don't Be Fooled By Anthropic's Magnanimity
Anthropic is taking a righteous stand against the Pentagon. The AI company and its CEO, Dario Amodei, also got greedy.
UPDATE Feb. 26, 6:13 p.m.: I made some minor tweaks to reflect Anthropic’s Thursday evening announcement that it does not intend to comply with the Pentagon’s ultimatum.
The Pentagon and Anthropic are engaged in a staring contest.
The two sides previously struck a deal so that Anthropic’s AI models could be used on classified military systems. Anthropic is currently the only AI company to have classified access, but the Pentagon is furious at Anthropic for being Woke. Anthropic has drawn a reasonable line in the sand: it insists that its technology not be used to assist in the mass surveillance of Americans, and not be used to kill people with fully autonomous weapons (meaning a human must remain in the decision-making loop).
The Pentagon views these guardrails as untenable. Secretary of War Pete Hegseth and the Pentagon gave Anthropic until 5:01 p.m. on Friday to acquiesce and reverse course. Anthropic announced on Thursday evening that it will not. Assuming this dynamic holds, the Pentagon has reportedly threatened to invoke the Defense Production Act (which would essentially force Anthropic to comply with the Pentagon’s demands), while also designating Anthropic as a “supply chain risk” (which limits government contractors, and the government itself, from working with Anthropic). As the New York Times pointed out, the “two threats are fundamentally at odds: One would prevent the government from using the company’s products, while the other would force the company to let the government use the products.”
I’m not sure the incongruence of the threats matters. If the Pentagon opts to ditch Anthropic, it will need to keep using the company’s specialized Claude tools for at least a little while. The Pentagon can come up with some sort of exemption for itself, while telling defense contractors like Boeing and Lockheed Martin that they can’t use Anthropic models anymore.
I never really bought the cynical takes about Anthropic caving to the Pentagon in order to avoid losing out on valuable defense contracts. The more fruitful medium- to long-term play is to stand pat and let Hegseth dole out his punishments. But that doesn’t mean Anthropic and its CEO, Dario Amodei, handled this situation well. It’s Amodei’s fault that his company finds itself in this quandary—a quandary that Anthropic might accidentally benefit from sooner rather than later.
The Pentagon’s threats are only effective because Anthropic is already attached at the hip to the Department of War. Anthropic, which markets itself as the ethical and thoughtful alternative to OpenAI and other AI-oriented companies, didn’t have to pursue the Pentagon’s business (the two agreed to a partnership in July 2025 for up to $200 million). Nor did Anthropic have to enter into business with Palantir (November 2024, terms undisclosed). Both arrangements were announced after Donald Trump won the presidency for a second time, so Anthropic and Amodei can’t argue that they were expecting to work with a more “rational” administration.
Amodei knew better, and he knows better. He understands the stakes—how AI could someday be misused to catastrophic ends—and has written convincingly on the subject. His pained existentialism about AI seems at least somewhat real, although he annoyingly swings back and forth between grandiose predictions and doomsday scenarios when it suits him. Someday soon, or maybe in a long time, AI will save and/or kill us all.
I chalk this up to Amodei doing a poor job of balancing his business interests with his beliefs. His comments after ICE killed two Americans, for instance, were weak and imprecise. He isn’t hanging out at the White House, but his company is still extending olive branches to the Trump administration, and it reneged just this week on long-held safety pledges that are unrelated to this Pentagon hubbub.
Amodei’s most wishy-washy AI-related opinion is about government contracts. “The position that we should never use AI in defense and intelligence settings doesn’t make sense to me,” he said in 2024. “... We’re trying to seek the middle ground, to do things responsibly.”
What is the “middle ground” for the American military and intelligence services, especially under Trump and Hegseth? Amodei’s answer, it seems, is “no mass surveillance, no autonomous killing machines.” That’s all well and good. It’s also not how the American military or intelligence services operate.
“We have to be able to use any model for all lawful-use cases,” Undersecretary of War for Research and Engineering Emil Michael recently told the Wall Street Journal, explaining the Pentagon’s revulsion at Anthropic’s red lines. “Lawful-use cases” is a funny phrase. Is it “lawful” to kill people with fully autonomous weapons? Probably. There aren’t international constraints addressing the matter, thanks to some stonewalling by the Biden administration and a lack of action by Congress. Is it lawful to mass-surveil Americans? It certainly was for two decades, under Republicans and Democrats alike, via the Patriot Act. The act formally expired in 2020, but there are plenty of other mass-surveillance mechanisms still in effect, some of which we surely don’t know about. (Some intelligence agencies, like the NSA, are part of the Department of War.)

Again, Amodei knows all of this. His clunky fallback is to try to depoliticize Anthropic’s military and intelligence partnerships; he likes to characterize them as being in service of national security, while arguing that his AI tools are advancing democracy over autocracy. “Democracies have a legitimate interest in some AI-powered military and geopolitical tools, because democratic governments offer the best chance to counter the use of these tools by autocracies,” Amodei wrote in a recent essay. “Broadly, I am supportive of arming democracies with the tools needed to defeat autocracies in the age of AI—I simply don’t think there is any other way.”
The words “democracy,” “democracies,” and “democratic” appear 36 times in the essay. Many of these references are stand-ins for the American military and intelligence services. The words “autocracy” and “autocracies” appear 28 times in the essay. Many of these references are stand-ins for China. So: Democracy is good, and America is a democracy. Autocracy is bad, and China is an autocracy. Thus, America is good and China is bad. Ipso facto, Anthropic must help the U.S. military and intelligence services defeat China.
By repeatedly mentioning “democracy” and “autocracy,” Amodei is avoiding having to identify the current U.S. president by name, or assess whether the president himself is an authoritarian. Amodei is glossing over his own selective standards—El Salvador, Israel, Qatar, and Saudi Arabia are just a few of the countries where Anthropic tools are accessible, unlike in China. Amodei is also attempting to waltz right past prior abuses by the U.S. military and intelligence services, including recent actions that might make him uncomfortable. For instance: utilizing Anthropic’s AI models as part of the operation to kidnap Venezuelan President Nicolas Maduro. (An Anthropic employee was reportedly spooked to learn that the company’s AI models played an unknown role in the Maduro operation, and relayed as much to someone at Palantir, who is rumored to have tattletale’d to the Pentagon.)
This tightrope act was destined to be snipped by the Pentagon. If Amodei is actually serious about running an AI company with a modicum of ethics, then he should accept that his partnership with the U.S. military was a mistake. The fallout will be financially painful, but Anthropic has two saving graces.
One: investments-wise, it’s still in Monopoly Mode. And unlike competitors, Anthropic has made some smart bets over the last few years. Its B2B enterprise products have been well-received, and are much less of a money pit than sexy chatbots and video generators that allow you to create clips of Charlie Kirk boxing Jeffrey Epstein. (Yes, I saw this on my TikTok feed. Kirk won by KO.) On the consumer-facing side, Claude Code is legitimately interesting and cool. I don’t know what it will evolve into, or if it can someday help Anthropic achieve profitability, but it’s something.
Two: Anthropic will probably be rewarded for rebuking the Pentagon. A not-insignificant part of the AI race is accumulating cultural cachet. If Anthropic can weather the storm through early 2027, then the company will look like it took a principled stand against an out-of-control, lame-duck president and his lackey at the Pentagon. I will personally be annoyed by that narrative, because it won’t be an accurate reflection of how Amodei and Anthropic ended up in a mess of their own making. But given the cravenness of Anthropic’s AI competitors, who’ve adopted a short-sighted relationship with MAGA, it kind of doesn’t matter. The American public is craving something, anything from a tech company that doesn’t resemble total capitulation. Amodei and Anthropic tried to have it both ways, and they’re primed to benefit anyway. All they have to do is keep staring at Pete Hegseth until 5:01 p.m. EST on Friday.
Here’s what else we’re reading:
Jeffrey Epstein was friendly with multiple high-level Microsoft executives, not just Bill Gates, the New York Times reported. His connections to Microsoft went all the way back to the ‘90s, and continued after his 2008 conviction for solicitation of prostitution with a minor.
A professor at King’s College London pitted three LLMs—GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash—against each other in a series of complex, simulated war games. What he found portends poorly for AI’s growing integration into military conflicts. As reported by New Scientist magazine: “In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models… What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence.”
An editor for Mr. Beast is one of the first Kalshi bettors to be punished by the prediction markets platform for insider trading. Reportedly, the bettor wagered $4,000 on Mr. Beast-related markets, and nailed almost all of them. Why does Kalshi let you bet on what Mr. Beast might do next, both personally and professionally? Great question! There’s no good answer, as evidenced by Kalshi CEO Tarek Mansour’s recent, flailing interview with CNBC. Kalshi issued the Mr. Beast editor a $20,000 fine and a two-year suspension. I do not expect similar fines to be issued against anyone in a position of power.
Journalist Dan Boguslaw obtained the 2023 Bohemian Grove attendance list. Some of the names will not surprise you, others might! I recommend perusing it, and checking out More Perfect Union’s accompanying investigation about the Northern California secret society.