🚫 AI and War: Anthropic Says No to the Pentagon, OpenAI Collaborates with the Military

Autonomous weapons, mass surveillance and the conflict over who controls military AI

For decades the relationship between Silicon Valley and the US military was relatively simple.

The government needed technology. Companies developed it.

The internet, GPS, microchips and much of today’s digital infrastructure were born in military projects.

But artificial intelligence is changing that relationship.

For the first time, a tech company is saying something rarely heard in Washington:

“We will not allow our technology to be used to kill.”

That is the heart of the conflict that has just erupted between the Pentagon and the AI company Anthropic, creator of the Claude model.

And what is happening now could define how AI systems will be used in war for decades to come.

The origin of the conflict

The US Department of Defense has been trying to integrate artificial intelligence into military operations for years.

The objectives are clear:

  • Analyze intelligence faster
  • Identify targets on the battlefield
  • Automate tactical decisions
  • Process huge amounts of data from satellites, drones and sensors

In theory, AI would help the military make better decisions faster.

But the problem appears when that capability begins to approach something far more delicate:

the decision to use lethal force.

According to recent reports (Reuters, Axios, February 2026), the Pentagon had been in negotiations for months with four AI companies —OpenAI, Anthropic, Google and xAI— demanding that their models be available for “all legal purposes” within military operations, including weapons development, intelligence collection and battlefield operations.

That includes:

  • intelligence analysis
  • military planning
  • surveillance
  • and potentially identification of targets in combat

That’s where the conflict began.

Anthropic’s two red lines

Anthropic decided there were two uses it was not willing to allow.

1. Autonomous weapons

The company refused to allow its AI to participate in systems where the technology could:

  • identify a target
  • select an attack
  • execute a lethal action

without direct human intervention.

It’s what many researchers call autonomous weapons. A concrete example: a drone that, using AI, identifies a vehicle as a “hostile target,” selects an attack and fires without a human operator approving each action. The technology already allows that; the question is who authorizes its use.

Anthropic believes current models are not reliable enough to make life-or-death decisions, and the data backs that up: state-of-the-art language models still misclassify inputs and hallucinate in a non-trivial share of cases.

2. Mass surveillance

The second red line was even more political.

The Pentagon wanted to be able to use AI to analyze large volumes of data, which could include data on US citizens.

Anthropic refused to allow its technology to be used for domestic mass surveillance.

The company argues that crossing that line could move Western democracies closer to control models more typical of authoritarian states.

The Pentagon’s response

The government did not react well.

Defense officials said that restrictions imposed by tech companies could interfere with real military missions.

The Pentagon’s position is clear:

If a military operation is legal under US law, a private company should not get to decide whether its technology can be used in it.

The conflict escalated quickly. At the same time, it emerged (Wall Street Journal, February 2026) that Claude had been used in a real military operation: the capture of former Venezuelan president Nicolás Maduro, deployed through Anthropic’s partnership with data firm Palantir. In other words, the line between “we don’t work with the Pentagon” and “our AI is already used in sensitive operations” turned out to be blurrier than it seemed.

Among the threats discussed were:

  • canceling contracts with Anthropic
  • excluding the company from military projects
  • declaring it a national security risk

In other words: if you don’t cooperate, you’re out.

The twist: OpenAI did agree to work with the Pentagon

While Anthropic held firm, another company took a different path.

OpenAI decided to collaborate with the US government, albeit under certain conditions.

The company states that its technology cannot be used for autonomous weapons or mass surveillance, but can be used for other military applications.

For example:

  • intelligence analysis
  • logistical planning
  • strategic simulations
  • assistance to human operators

Even so, the decision triggered a strong public reaction.

According to some reports, more than 1.5 million users canceled their ChatGPT subscriptions within 48 hours of the Pentagon agreement becoming known. A hard number that shows how closely the public is watching who uses AI and for what.

The debate over the militarization of AI exploded immediately.

The real problem: the “kill chain” of modern war

In modern military systems there is something called the kill chain.

It’s the full process of an attack:

  1. detect a target
  2. identify it
  3. assess threat
  4. decide action
  5. execute attack

Artificial intelligence already participates in several of those stages.

But the big question is: what happens when it participates in all of them?

If an AI detects, identifies and recommends attacking… the human decision could become just a formality.

That’s the scenario that worries many researchers.
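
To make the concern concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the names, the 0.90 threshold and the stubbed classifier are invented for illustration and describe no real military system. What it shows is that the two variants of the kill chain differ by a single function call.

    from dataclasses import dataclass

    @dataclass
    class Track:
        track_id: str
        label: str         # model output: "hostile", "civilian", "unknown"
        confidence: float  # model confidence, between 0 and 1

    def classify(detection: dict) -> Track:
        # Stages 1-2 (detect, identify): stubbed here, but in a real system
        # this is where the AI model, and its error rate, lives.
        return Track(detection["id"], detection["label"], detection["conf"])

    def human_approves(track: Track) -> bool:
        # Stage 4 gate: a human operator confirms or vetoes the engagement.
        # Removing this single call is what the whole dispute is about.
        answer = input(f"Engage {track.track_id} "
                       f"({track.label}, {track.confidence:.0%})? [y/N] ")
        return answer.strip().lower() == "y"

    def kill_chain(detection: dict, human_in_the_loop: bool = True) -> str:
        track = classify(detection)                     # 1. detect, 2. identify
        if track.label != "hostile" or track.confidence < 0.90:
            return "no engagement"                      # 3. assess threat
        if human_in_the_loop and not human_approves(track):
            return "held by operator"                   # 4. decide
        return f"engage {track.track_id}"               # 5. execute

    if __name__ == "__main__":
        detection = {"id": "T-017", "label": "hostile", "conf": 0.93}
        print(kill_chain(detection))                        # asks the operator
        print(kill_chain(detection, human_in_the_loop=False))  # fires on its own

When human_in_the_loop is False, stages 1 through 5 run end to end with nobody positioned to catch a misclassification. That is the configuration Anthropic says it will not supply models for.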

What is China doing?

While the United States debates ethical limits, other countries are moving ahead quickly.

China, for example, is investing enormous resources in autonomous drones, AI surveillance systems and massive intelligence analysis. According to estimates from think tanks and the specialist press, global spending on defense AI (including China, the US and their allies) runs into the tens of billions of dollars per year and is growing at double-digit rates.

The Chinese approach is usually more pragmatic: if the technology exists, it gets integrated into the military system.

That creates strategic pressure.

Because if one power adopts more automated systems, others may feel they cannot afford to fall behind.

The risk no one wants to admit

The comparison to Terminator or Skynet often sounds exaggerated.

But the real problem is more subtle.

Current AI systems have three dangerous characteristics in a military context:

1. They can make mistakes. Generative models still produce wrong outputs with complete confidence.

2. They can amplify bias. If the data is incomplete or biased, the decisions will be too.

3. They can scale decisions rapidly. A human error affects one decision. An automated error, for example a model that mislabels civilians as combatants across thousands of hours of video, can affect thousands of decisions in minutes (see the sketch just below).
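
A back-of-the-envelope sketch of that third point, with figures invented purely for illustration:

    # Hypothetical figures, chosen only to illustrate the scaling argument.
    false_positive_rate = 0.01     # model mislabels 1% of non-hostile objects
    decisions_per_hour = 10_000    # automated classifications from sensor feeds

    errors_per_hour = false_positive_rate * decisions_per_hour
    print(f"{errors_per_hour:.0f} mislabeled objects per hour")  # prints: 100

Even 99% accuracy, applied at machine speed, produces a hundred dangerous errors every hour; a human analyst working one case at a time simply cannot compound a mistake that fast.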

That’s why many experts believe AI should assist military decisions, not replace them.

What this conflict really means

The clash between Anthropic and the Pentagon is not just a contractual dispute.

It’s a sign of something bigger.

We’re entering a new stage where three forces are colliding:

  • governments
  • tech companies
  • increasingly powerful artificial intelligence systems

And there is still no clear rule about who should make the final decisions.

The question that remains open

The world will likely see more and more AI in defense.

That seems inevitable.

The real question is not whether it will happen.

It’s this:

Who controls artificial intelligence when it’s used in war?

Governments. Companies. Or the systems we’re building.

Sources

This article is based on the following press sources:

  1. Reuters: Pentagon clashes with Anthropic over military AI use (Jan 29, 2026).
  2. AP News: coverage of the conflict between the Pentagon and the AI companies.
  3. The Economic Times: Pentagon threatens to cut off Anthropic in AI safeguards dispute (Feb 15, 2026; based on Axios/Reuters reporting).

If you want to read more on how AI is already transforming modern conflict (drones, electronic warfare, cyber defense), see: How Artificial Intelligence is Transforming Modern Warfare in 2025.

✍️ Claudio from ViaMind

Dare to imagine, create and transform.

