OpenAI CEO Sam Altman announced the evening of Feb. 27 that his artificial intelligence company had reached an agreement with the Pentagon to allow its technology to be used for classified work within the Department of War.
The announcement came just hours after President Donald Trump had declared he was directing all federal agencies to blacklist Anthropic – a leading competitor to OpenAI – whose executive had refused to accede to War Department demands for the removal of ethics and safety features.
Anthropic CEO Dario Amodei had publicly stated the conditions the War Department rejected: the AI firm would not allow its technologies to be used for mass surveillance of American citizens, and would not remove safeguards preventing weaponized systems, such as autonomous armed drones, from killing without human authorization.
Trump administration officials vehemently denied Amodei’s characterization of the disagreement, claiming they would never use AI for such purposes. “Here’s what we’re asking,” stated Pentagon spokesman Sean Parnell: “Allow the Pentagon to use Anthropic’s model for all lawful purposes.”
The War Department gave Anthropic until 5:01 pm ET Feb. 27 to decide. “Otherwise,” Parnell said, “we will terminate our partnership with Anthropic and deem them a supply chain risk.”
After Anthropic refused to budge, War Secretary Pete Hegseth announced Feb. 27 that the company would not only be removed from federal agency systems, but be designated a national security risk and blacklisted by all federal contractors “effective immediately.”
“Effective immediately,” Hegseth wrote in a statement published on X, “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
Dean W. Ball, who helped draft the Trump administration’s AI policy as a senior AI advisor to the White House last year, criticized Hegseth’s decision in a series of viral posts on X.
“This is simply attempted corporate murder,” he wrote. “I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States.”
In another post, Ball wrote Hegseth is “claiming that the DoD can force all contractors to stop doing business of any kind with arbitrary other companies.”
“In other words,” he added, “every operating system vendor, every manufacturer of hardware, every hyperscaler, every type of firm the DoD contracts with—all their services and products can be denied to any economic actor at will by the Secretary of War.” Ball warned the move sends the message that “the United States Government is a completely unreliable partner for any kind of business.”
Altman’s reversal – or the War Department’s?
As Zeale News reported, Altman had originally signaled that he agreed with his competitor. “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” he told CNBC the morning of Feb. 27.
But by that evening, OpenAI was well on its way to signing a contract with the agency that Amodei had claimed rejected his ethical principles.
While some accused Altman of swooping in to gain a contract at the expense of Anthropic (and of the principles Altman had claimed to share with it), others were reassured by the fact that the conditions of a potential OpenAI contract with the War Department appear to match those Amodei had insisted on.
According to Altman, the agreement with the War Department includes technical and operational limits on how its technology can be deployed – particularly when it comes to surveillance and armed drones.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote in the statement. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman said OpenAI would implement technical safeguards to enforce limits on model behavior and would appoint forward-deployed engineers to assist with implementation and safety oversight. He said deployments would be limited to cloud networks only.
In his statement, Altman also asked the Department of War “to offer these same terms to all AI companies.”
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman wrote on X on Feb. 28. “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of…”
Critics argue that OpenAI is not holding to the same strict safety principles it has publicly championed, especially when compared to Anthropic's refusal to drop restrictions.
Legal experts have warned that an agreement permitting “any lawful use” could, in practice, override embedded safeguards, even when those safeguards are written into the contract.
As of late Feb. 27, a contract between OpenAI and the War Department had yet to be signed, though the two parties appeared to have reached substantial agreement on its terms.