Feud goes beyond AI guardrails and revolves around the dream of the nascent technology’s future
Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei (Jeremy Leung/WSJ; AP; Bloomberg)
By Tim Higgins
March 1, 2026 5:30 am ET
It would be so much easier to understand the fight between the Pentagon and AI star Anthropic if we were talking about traditional weapons.
If Anthropic were selling bullets, for example, then obviously Defense Secretary Pete Hegseth wouldn’t want an ammunition maker imposing limits on whom he could shoot, or when. But AI is far more than that: a nascent technology carrying the promise of possible superintelligence, whose uses and capabilities are still being developed.
So the real fight is over the dream of what AI could be.
It is the same disagreement that is taking place across Wall Street and corporate America. What exactly does artificial intelligence mean for our future?
It is a question that sent the stock market into a tizzy this past week over a report by Citrini Research that painted a doomsday scenario for the economy if AI wipes out the white-collar workforce. “What if our AI bullishness continues to be right…and what if that’s actually bearish?” the firm asked.
It’s at the heart of some Silicon Valley workers’ fears that the American dream is on the verge of vanishing as AI further divides the haves from the have-nots in a world where humans are replaced by robots in factories and cubicles.
For Anthropic Chief Executive Dario Amodei, the concern is over Pentagon demands that the company remove its self-imposed rules preventing its AI from being used for mass domestic surveillance and autonomous weapons. “Frontier AI systems are simply not reliable enough to power fully autonomous weapons,” Amodei said Thursday. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
His statement came ahead of a Friday deadline for the company to accept the Pentagon’s demands or face dire consequences. When Anthropic refused, President Trump responded by announcing that the entire federal government would stop working with the company.
Hegseth declared Anthropic a supply-chain risk, imperiling its ability to work with other companies that do business with the U.S. government. That cleared the way for rivals to swoop in, such as OpenAI, which said Friday it had reached a deal with the Pentagon—one it says satisfies its own safety concerns—for its AI to be used in classified settings.
For the Pentagon, the Anthropic fight has been about concerns that a private company—especially one the Trump administration has labeled as “woke”—wanted to control how the military uses technology.
Emil Michael, undersecretary at the Pentagon for research and engineering, spent the past week stressing that Anthropic’s AI would be used for lawful purposes—adding that there are laws against mass domestic surveillance and rules that govern autonomous weapons. Anthropic doesn’t “make the rules,” the former Uber executive told Bloomberg TV on Friday. “Congress makes the rules, the president signed them, we execute them,” he said.
At the heart of that debate is a bigger question: Is AI a tool—a silver bullet to answer hard questions or find new efficiencies? For the Pentagon, that might look like a way to combat swarming AI drones launched by an enemy, for example. For businesses, that might look like a way to free up workers from busy work so the company can do more.
Or is AI something more? For many in Silicon Valley, that might mean AI will develop a consciousness of its own, become a godlike power—and replace human labor altogether.
To be clear, it is still early days for the Defense Department’s use of Anthropic’s AI.
Amodei has said his technology has been used for cybersecurity and to support combat operations by the military and intelligence community. Its AI was used—through a partnership with the data company Palantir Technologies—in the U.S. military’s operation to capture Venezuelan leader Nicolás Maduro, my colleagues have reported.
“No one on the ground has actually, to our knowledge, run into the limits” imposed by Anthropic, Amodei told CBS News on Friday. “I can’t say what their plans are—we don’t know—but we have no evidence that these use cases have actually…run into trouble.”
When the Pentagon’s Michael describes how Anthropic’s AI is being used—a large organization looking to be more efficient, except running on classified information—it sounds familiar to anyone who uses Claude or OpenAI’s ChatGPT.
“In the military context, there’s a lot of logistics that happen in the military,” he told CBS News. “How do I get something from one place to another? How much stuff do I have in either place? What do I need to move efficiently forward? What supplies might I need for a certain mission? How do I take all these different papers that have been written about what I’m going to do and make it in a consistent, summarized document?”
In other words, boring stuff.
Yet, the Pentagon sees the potential for autonomous weapons—with a human in the loop, as officials stress—as important to national security, given the advances seen in drone technology in places such as Ukraine.
“From a defense standpoint, whether it’s a drone swarm that’s coming at a military base, whether it’s a hypersonic missile coming at the United States…you want to be able to take them down potentially faster than a human could alone,” Michael told Bloomberg News.
What apparently increased tensions between the Pentagon and Anthropic was a hypothetical question posed to the startup: Would its autonomous-weapons prohibition bar the government from using the company’s models to stop an imminent missile attack against the U.S.?
And, just like that, it was as if the 1983 sci-fi movie “WarGames” had become real.
The debate is a natural byproduct of the hype around AI that has ushered in an era of magical thinking. What was once the stuff of sci-fi now feels possible—even if the actual AI technology is still far from being able to do what’s being imagined.
Still, advances in the AI labs and noticeable improvements in the chatbots fuel more ideas of what could come. It has suddenly become mainstream to think wildly. So much so that it can feel like we’re living in a time when the limits of what might be possible are merely constrained by one’s imagination.
It isn’t surprising, then, that there will be fights today to decide whose dream of AI wins tomorrow.
Copyright ©2026 Dow Jones & Company, Inc. All Rights Reserved.