
THE EXPLAINER

Intermittent posts on buying and selling enterprise software, construction software, AI-enabled applications and more.

The Artificial Intelligence End User License Agreement (EULA) Just Saved Civilization

Anthropic, Hegseth’s Ultimatum and Threats to Safety and U.S. Competitiveness
Pete Hegseth and Donald Trump want to force AI companies to remove safeguards preventing mass surveillance and autokill.

Bucking pressure from the U.S. Department of Defense, Anthropic is refusing to revise the safeguards in the end user license agreement (EULA) and usage policies for its Claude AI model.


Hegseth had issued a Friday, March 27, 2026 deadline for Anthropic to enable the Pentagon’s instance of the technology to conduct mass surveillance and autonomous kill missions. Anthropic CEO Dario Amodei refused, leading President Donald Trump to order agencies across the federal government to stop doing business with the company.


This may cost Anthropic $200 million per year in the short term, plus any other revenue from agencies beyond the Pentagon. Rival OpenAI immediately inked a deal with the Pentagon, albeit one that retains the safeguards Anthropic refused to remove from its own EULA. OpenAI CEO Sam Altman, in announcing the deal, said:


“We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs [specialized technical professionals working directly with customers to deploy AI models into production environments] to help with our models and to ensure their safety, we will deploy on cloud networks only.”


Interpretation

While OpenAI’s insistence on redline protections similar to Anthropic’s is positive, the risk of irresponsible use of these companies’ technologies has not passed.


There are AI models designed for mission-critical environments, including targeting and recognition models, and drones that can track targets while requiring human intervention to trigger lethal action. Perhaps more promising is operational AI, which can gather information across disparate systems to support real-time decision-making in situations characterized by volatility, uncertainty, complexity, and ambiguity (VUCA).


Impetuous moves to remove humans from the loop in life-and-death situations may appeal to leaders worried that ethical and legal liability concerns will prevent military personnel from executing illegal orders. Artificial intelligence companies like Anthropic and OpenAI have redline clauses in their EULAs in part to protect themselves from legal liability, in this case liability for human rights violations, death, and destruction in which their products could be made complicit.


It turns out humans have their own EULA: one written in their hearts and minds and, in the case of mass surveillance or killing, one codified in federal and state law and in the Uniform Code of Military Justice. Deploying AI will still leave military leadership exposed to liability for any unlawful actions that result.


Here is where the EULA becomes very important. If the EULA forbids use of the product for specific purposes, and makes clear it is not fit for other specific purposes, the software vendor has protection from liability arising from those forbidden use cases. If your software is marketed to automate recruiting and winds up making racially biased hiring decisions, that is a different story: the harm flows from the intended use, and no EULA clause will shield you. So the EULA, and the liability limits AI vendors in particular are building into their contracts, are important risk-mitigation tools for companies like OpenAI and Anthropic.

But the recent standoff between Anthropic and the DoD exposes broader risks to both AI companies and society.


  • In his statement, Altman made clear that OpenAI would deploy its product on cloud networks only. Without that restriction, the model could be launched in an on-premises environment, preventing oversight by the vendor. Chinese vendors have already used distillation attacks to steal large swaths of Anthropic’s intellectual property for their own purposes (a minimal sketch of how distillation works appears at the end of this post). A DoD intent on building autokill or mass-surveillance capabilities could cobble together its own tech by similar means.

  • But the greater risks may have to do not with pirated code but with threats to nationalize one or more U.S.-based AI companies through the government’s Golden Shares tactic. This enables the government to pick winners and losers in the market as competitors find themselves at a disadvantage. It also gives the federal government more control, which is concerning when independent ownership would otherwise guard the interests of private-sector shareholders against Hegseth’s and Trump’s dictates. Already, this alarmingly interventionist approach has involved:

    • MP Materials

    • Lithium Americas Corp

    • Trilogy Metals Inc

    • U.S. Steel

    • ReElement Technologies

 

This threat, combined with a lack of respect for the independence of corporate entities from immediate federal control, could place downward pressure on the valuations of companies like Anthropic and OpenAI. Could one be selected as the Golden Child, only to be exposed to crushing liability as government overseers blow past ethical and moral red lines? Or will others suffer as they lose federal business or face other penalties for refusing to yield? The same dynamic may play out in other industries where the government is picking winners and losers.


But for AI companies, the risk of being frozen out of federal contracts or facing other penalties may pale in comparison to signing a federal contract and then seeing the product used in ways that create crushing liability.
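
A footnote on the distillation attacks mentioned above. The underlying technique, knowledge distillation, trains a smaller “student” model to imitate a “teacher” model’s outputs; an attacker runs the same loop across an API boundary, where the teacher’s responses are all it can observe. The toy PyTorch sketch below shows the core idea only; the networks, data, temperature, and training schedule are illustrative stand-ins, not any vendor’s actual systems or methods.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "teacher": stands in for a proprietary model reachable only via API.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
# Smaller "student" that the attacker controls end to end.
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softmax temperature; softened outputs carry more signal per query

for step in range(2000):
    x = torch.randn(64, 16)           # attacker-chosen queries
    with torch.no_grad():
        t_logits = teacher(x)         # observed teacher responses only
    s_logits = student(x)
    # Pull the student's output distribution toward the teacher's (KL divergence).
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final imitation loss: {loss.item():.4f}")

The point for the standoff above: nothing in this loop requires access to the teacher’s weights, code, or training data, which is why EULA terms and cloud-only deployment are among the few levers a vendor has.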
