Fourteen Catholic moral theologians and ethicists filed a friend-of-the-court brief March 13 in support of artificial intelligence (AI) company Anthropic’s case against the U.S. Department of War, arguing that the company’s refusal to allow its AI systems to be used for surveillance and autonomous weapons reflects ethical responsibility rather than posing a national security risk.
The case arose after Anthropic told the Department of War it would allow its AI models to be used in all lawful circumstances except two: making unsupervised decisions that end human life and enabling mass domestic surveillance.
According to Business Insider, the Department labeled Anthropic a “supply chain risk,” citing concerns that the company could be unreliable in national security contexts. Anthropic, however, claimed the move amounted to illegal blacklisting in retaliation for drawing a boundary. The government has said it is exercising its right not to contract with the company.
In their amicus brief, the moral theologians and ethicists — professors at Catholic universities across the U.S. — said they do not necessarily support Anthropic or the goals of AI development. Rather, they said they wish to back a company with a “principled ethical stance on AI use” that aligns with Catholic teaching.
Very proud of this amicus brief filed yesterday in the @AnthropicAI case against the Department of War from Catholic moral theologians and ethicists. The very notion of what it means to have a just war is at stake in how we respond to these matters.
— Charlie Camosy (@CCamosy) March 14, 2026
The professors said Anthropic’s refusal to use AI for mass domestic surveillance reflects Catholic teachings about privacy and the dignity of the human person. Intruding into personal relationships and communication with surveillance technology would violate human dignity and treat people as objects and data sources, they warned.
The professors also invoked the principle of subsidiarity, a tenet of Catholic social teaching, and warned that turning to mass surveillance would erode human agency. Such surveillance, they added, could undermine local government and set the federal government on a path toward totalitarianism.
The amicus brief also addressed the department’s wish to use AI tools to direct autonomous weapons, saying that human judgment is critical when determining the justice of acts of war.
“Human involvement is crucial because judgments of proportionality and discrimination are prudential — not mere pattern matching,” the professors wrote in the brief. “Human judgement, then, is built into the conditions of a just war, eliminating the possibility that the deployment of lethal autonomous weapons could ever meet the conditions of jus in bello.”
The professors raised further ethical concerns about lethal autonomous weapons, saying that they “problematically obscure human agency,” strip away responsibility for decision-making, and circumvent practical judgment. They also pointed to the state of AI technology itself, calling it “highly imprudent” to use such systems, in their current and still-undeveloped form, to power autonomous weapons.
They concluded that in setting clear boundaries with the department, Anthropic “sought to uphold minimal standards of ethical conduct for technical progress.”
“In doing so,” the professors wrote, “Anthropic was acting as a responsible and moral corporate citizen, not as a threat to the safety of the American supply chain.”