Secretary of War Pete Hegseth has given Anthropic CEO Dario Amodei until 5 p.m. Feb. 27 to allow unrestricted military use of the company's artificial intelligence (AI) model or risk losing Pentagon support, according to multiple media outlets.
The dispute reportedly centers on the AI firm’s usage policies, which bar its model Claude from being used for autonomous “kinetic” actions — armed systems potentially killing without human oversight — and from enabling mass surveillance of U.S. citizens.
According to Jennifer Griffin, FOX News’ chief national security correspondent, Hegseth met with Amodei at the Pentagon Feb. 24 to press for the removal of those restrictions on Claude’s deployment in national security operations. Griffin cited an anonymous source who described the meeting as “cordial” and “all business.”
At high stakes Pentagon meeting today Sec Hegseth gave Anthropic head Dario Amodei ultimatum to allow the Pentagon to use Anthropic’s AI model for mass domestic surveillance and kinetic autonomous operations without human oversight or face censure and be labeled “supply chain…
— Jennifer Griffin (@JenGriffinFNC) February 24, 2026
Axios reported that Hegseth, who was joined by several Pentagon officials in the meeting, told Amodei he would not allow a company to set conditions on how the Pentagon conducts its operations.
Hegseth warned Anthropic that it will “face censure and be labeled ‘supply chain threat’” if it does not comply, Griffin reported. Officials are also considering using the Defense Production Act — which gives the president the authority to require that private companies prioritize contracts deemed necessary for national defense — to compel compliance. According to Axios, the act was used after the outbreak of COVID-19 to increase vaccine and ventilator production.
According to Griffin, Anthropic says it is willing to support “legitimate military operations” but will not remove its ethical guardrails because it believes unrestrained autonomous systems could pose risks to both soldiers and civilians.
The company has cautioned that there is a risk “soldiers and others could lose control of the model and automatically start killing large groups without humans in the ‘kill chain,’” according to a source cited by Griffin.
In a recent interview with conservative New York Times journalist Ross Douthat, Amodei warned that constitutional rights “can be undermined by AI if we don’t update these protections appropriately.”
He pointed to the Fourth Amendment as one example, suggesting that advanced AI systems could erode or circumvent the probable cause requirements for searching private communications. With AI, he said, the government now has the ability to “transcribe speech, look through it, correlate it all,” and analyze personal opinions and viewpoints held by individuals.
Anthropic's CEO Explains His Refusal to Back Down to the Pentagon.
Amodei explained his deep concerns over "autonomous drone swarms" and mass surveillance.
He pointed out a crucial reality: our military's constitutional protections rely entirely on human soldiers having the…
— Wes Roth (@WesRoth) February 24, 2026
“So are you going to make a mockery of the Fourth Amendment by the technology finding technical ways around it?” Amodei asked. “Is there some way of reconceptualizing constitutional rights and liberties in the age of AI?”
Anthropic was awarded a $200 million Pentagon contract in July 2025 to develop AI capabilities for defense applications. The Wall Street Journal reported that the U.S. military used Claude during the Jan. 3 operation that captured Venezuelan President Nicolás Maduro. The report did not disclose details about how AI was used.