The Pentagon Unleashes Its Omnipresent, Unreliable AI System on Iran
With the backing of Palantir, Anthropic, and other tech companies, Maven Smart System is reportedly playing a major role in America's war against Iran.
Almost eight years ago to the day, the New York Times published an article headlined “Pentagon Wants Silicon Valley’s Help on A.I.”
The concern among military and intelligence officials was that the United States risked losing a nebulous AI race to China; the Pentagon needed a patriotic assist from Big Tech’s best and brightest, but tech workers weren’t cooperating. At Google, workers were alarmed to learn their company was working with the Department of Defense on something called Project Maven, then described by the Times as “the Defense Department’s sweeping effort to embrace artificial intelligence.” Project Maven was supposed to make it easier and faster to analyze drone footage, but workers were perturbed about what the military would actually do with that information, especially as AI continued to develop.
Alarm turned to action. Thousands of workers signed an open letter to Google executives, pleading with the company to abandon its participation in Project Maven. Google relented, telling workers it would not renew its Maven contract. Palantir, run by Peter Thiel and Alex Karp, quickly stepped in. Palantir’s gleeful involvement was hardly a surprise—it was already infamous because of Thiel’s connections to MAGA, as well as its contracts with ICE and DHS. The bigger unknown was whether other tech companies would listen to their workers and avoid going into business with the Pentagon, or whether they’d go the way of Palantir. Also unknown: whether Project Maven would ever evolve into something tangible.
The early days of the Iran war have provided some answers. Project Maven is no longer an aspirational line item. Maven Smart System, as it’s now known, is an essential part of U.S. military operations in Iran, thanks in large part to contracts with a roster of tech companies including Palantir, Anthropic, Anduril, and AWS.
The Washington Post just reported that Maven “is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran.” That constitutes a major escalation from 2024, when Maven partially coordinated strikes against facilities in Syria and Iraq, but wasn’t “ready to recommend the order of attack” in a military conflict, Bloomberg reported.
One difference between 2024 and now is that Anthropic’s AI tool Claude is embedded into Maven, despite the Pentagon’s recent row with Anthropic CEO Dario Amodei. In fact, “Over the last year military planners have seen Claude, paired with Maven, mature into a tool that is in daily use across most parts of the military,” the Post reported this week. Should the (frankly pathetic) renewed talks between Anthropic and the Pentagon prove unsuccessful, other companies like OpenAI and xAI are champing at the bit to replace Claude and provide support for systems like Maven.

It’s fair to wonder about the true effectiveness of Maven, given how often AI companies and War Secretary Pete Hegseth rhapsodize about their own capabilities. But as far as I can tell, Maven’s advancements appear to be significant, especially compared with the late 2010s, when Pentagon officials discussed the AI system as if it might someday be something out of Star Trek. In the lead-up to war with Iran, Maven “suggested hundreds of targets” and “issued precise location coordinates,” according to the Post. A separate, reputable report from Bloomberg confirmed that Maven is among the AI tools utilized by the U.S. military to “quickly manage enormous amounts of data for operations against Iran.”
How much Maven has really improved is the bigger mystery. In 2024, Maven correctly identified objects only 60 percent of the time, a borderline-useless figure that reportedly dipped even further in desert conditions. For argument’s sake, let’s be extremely generous and assume Maven now rivals the purported accuracy rate of Israel’s similar system, Lavender, which Israeli officials claim can identify Hamas combatants 90 percent of the time. (To be clear, I don’t believe the 90 percent figure; Lavender has reportedly been used in the Gaza genocide to indiscriminately kill tens if not hundreds of thousands of Palestinians, with negligible human oversight.)
A 90 percent accuracy rate, coupled with the ability to more quickly locate targets, still suggests lots and lots of catastrophes. Unlike other AI hallucinations, a screw-up by Maven doesn’t result in a funny TikTok about how ChatGPT can’t spell the word strawberry. It means, absent a “human in the loop” catching the error, that innocent people will be killed.

We don’t officially know who was behind strikes on an Iranian elementary school, which killed at least 165 people (many of them girls). But the relative silence and weak denials from both the Pentagon and the IDF are ominous signs. Satellite imagery from after the strikes points to “very precise targeting,” an expert told NPR. A Times investigation came to the same conclusion. The elementary school was reportedly located next to a military base, which was walled off from the school a decade or so ago. Were American military planners, aided by AI tools like Maven, firing at outdated targets?
Even if it turns out AI played no role in the strikes against the elementary school, Maven’s omnipresence indicates that AI will be responsible for (or at least the excuse for) other war crimes in Iran. Consistent AI hallucinations from an overrated, overhyped system are one terrifying outcome. The other is that Maven works pretty well. It still hallucinates (that part is a given), but it’s also efficient and powerful. It’s responsible for finding and suggesting targets, and the “human in the loop” is able to strike those targets more swiftly than ever before.
That scenario is similarly disturbing. It means the American military will have a much easier time killing Iranians in a war the Trump Administration isn’t bothering to justify—a war with no obvious objectives and no off-ramps, just pain and suffering. It’d be nice if the executives at the tech companies supplying Maven could add “endless wars the Trump Administration starts for no reason” to their paltry list of much-ballyhooed red lines. That seems improbable, though. More likely, it will once again fall on tech workers to express their displeasure, same as they did at Google eight years ago.