The clash between Anthropic and the U.S. government is not really about a single AI company or a single defense contract. It is about who ultimately controls the most powerful general-purpose technology of the decade: the state, the companies building it, or the citizens who must live with the consequences. And the problem cuts across our conventionally polarized political climate.

Last week, the Pentagon moved against Anthropic after the company refused to loosen guardrails on how its AI systems can be used. War Secretary Pete Hegseth invoked authorities normally used to counter foreign technology threats and designated the company a potential “supply chain risk.” The move cuts Anthropic off from all federal contracts, pressures contractors to remove its tools, and generally casts doubt on the firm’s viability as a trusted AI platform. This is, on its face, absurd. 

The legal tool used here is usually aimed at adversaries. The United States has imposed similar restrictions on companies such as Huawei, ZTE, Hikvision, and Kaspersky because foreign governments might compel those firms to insert backdoors or conduct espionage. Even in those cases, the evidence was often scarce or at least classified, but the foreign ownership was always clear. 

Anthropic is different. It is an American company. The dispute is not about foreign influence. It is about whether the government can require AI companies to allow their systems to be used for any lawful military purpose. Of course, this is an administration that has decided that kidnapping foreign leaders and bombing independent nations without a declaration of war is lawful. And that an ICE officer shooting an unarmed American citizen in the face does not warrant an investigation, let alone a temporary suspension.

Anthropic was founded by ex-OpenAI researchers as an ethical alternative to OpenAI’s accelerationist agenda. Predictably, Anthropic CEO Dario Amodei refused the Pentagon’s demand. “The Pentagon’s threats do not change our position,” he said. “We cannot in good conscience accede to their request.”

Anthropic’s existing policies prohibit two specific uses of its models: fully autonomous weapons systems and large-scale domestic surveillance.

The Pentagon’s response was blunt. According to reports on the dispute, the military intends to work only with companies whose systems are available for “all lawful uses.”

In Washington, that phrase carries enormous weight. “Lawful use” means the government decides what is acceptable. And despite the Founding Fathers’ best intentions for a system of checks and balances, “the government” essentially means the executive branch. Today, that means one man’s whims. And Anthropic is arguing that legality is not the same as wisdom.

The supply-chain designation stunned the AI industry because the mechanism was designed to deal with adversarial states. Applying it to a domestic company over product policy raised immediate alarms.

Even Sam Altman, whose company OpenAI stepped in to supply AI tools after Anthropic’s removal, questioned the precedent. “It is an extremely scary precedent,” Altman said of the government’s move. “I wish they handled it a different way.”

Note that this didn’t stop Altman from signing a deal with the Department of War within 24 hours of the “scary precedent.” Scary precedents can present market opportunities. To be fair, OpenAI’s policies also say, on paper, that its technology should not be used for domestic surveillance of Americans. But that hardly seems to matter now.

The surveillance infrastructure already exists

To understand why Anthropic drew that line, look at how AI is already being used inside the U.S. government. Immigration and Customs Enforcement relies heavily on software built by Palantir to analyze enormous datasets tied to immigration enforcement. The systems integrate information from government records, law-enforcement databases, and commercial data sources to help investigators identify targets and track networks.

Reporting on ICE’s next-generation platform—often described internally as ImmigrationOS—shows how artificial intelligence is expanding those capabilities by turning large pools of data into predictive tools for enforcement.

Although the focus is on undocumented immigrants, the datasets inevitably include U.S. citizens: employers, family members, landlords, and anyone else connected to an investigation.

Civil-liberties advocates argue that the system effectively builds a national data graph capable of monitoring enormous segments of the population. Artificial intelligence dramatically accelerates that process. Data analysis that once required weeks of investigative work can now be performed in seconds by machine-learning systems.

In other words, the infrastructure that worries Anthropic already exists. Continued AI development will simply scale it further.

Negotiations between the government and Anthropic were conducted largely by Emil Michael, the Under Secretary of Defense for Research and Engineering. He has dismissed fears that AI will lead to domestic spying.

“The Department of War does not engage in any unlawful domestic surveillance,” he said in response to criticism of the Anthropic dispute. But what about ICE? The FBI? Also, how would we know? There is certainly plenty of precedent for both the surveillance and the secrecy.

Technology almost always moves faster than oversight.

A fight both parties should care about

What makes the Anthropic fight unusual is that it should worry both sides of the political spectrum.

Progressives have spent years warning about the unregulated use of emerging technologies by governments and corporations. Artificial intelligence could dramatically increase the state’s ability to monitor people and conduct warfare abroad.

The trajectory of a surveillance state is easiest to see in China, where artificial intelligence, ubiquitous sensors, and centralized data systems have fused into a comprehensive model of social control. China operates one of the largest networks of surveillance cameras in the world—more than 540 million cameras, according to estimates compiled by IHS Markit and Chinese state planning documents.

Many are integrated with facial-recognition systems developed by companies like Hikvision, Dahua, SenseTime, and Megvii, which can identify individuals in real time across cities and transportation systems. Authorities have used these tools to track political dissent, monitor religious minorities, and enforce state policies. In the Xinjiang region, for example, researchers and human-rights organizations have documented an AI-enabled policing system that aggregates biometric data, travel records, and phone activity to flag “suspicious behavior,” often triggering police intervention or detention.

China has also linked digital identity systems, financial records, and online behavior into broader governance tools such as the emerging social credit architecture, which can restrict travel or financial access for citizens who fall afoul of state rules. The result is a model where AI does not simply assist law enforcement; it becomes an infrastructure for governing everyday life.

Note that in China, these are “all lawful uses.”

Conservatives should see a different risk. If a private American company can be labeled a national security threat simply for refusing to modify its product to satisfy government demands, the precedent gives the executive branch enormous power over the technology sector. Crony capitalism isn’t new, but deliberately targeting a single company to hurt its position in the marketplace? 

As blasé as conservatives seem to be about individual liberties, surely the idea of limiting the free enterprise of a corporation must rankle some vestigial Republican sensibilities.

The irony is that both fears come from the same place: a collapse of trust.

Citizens do not trust the government to restrain itself once it has powerful new tools. The government does not trust technology companies to prioritize national security over profit. And technology companies increasingly fear that working with the state means surrendering control of their own creations.

The real question

The dispute between Anthropic and the Pentagon is really a struggle over governance. Artificial intelligence is general-purpose infrastructure, more like electricity than software. Whoever controls it will shape how societies function.

If the government decides the rules, AI becomes an extension of the national security state. If companies decide, the most powerful infrastructure of the modern era will be controlled by private corporations. For now, the United States is heading toward something messier: an uneasy partnership where policy is negotiated through procurement rules, lawsuits, and political pressure.

There are no easy answers, and many of our first reactions will be wrong. That certainly seems to be the case with the government’s reaction here. 

We can only hope that the public can show its preferences with its consumer dollars and political power. As of this writing, Claude has dethroned ChatGPT at the top of Apple’s Free apps charts.