The Anthropic-Pentagon War Shows How Big Tech Has Changed Course on Artificial Intelligence and War - Gazeta Express

AutoTech

Express newspaper

13/03/2026 21:32


Less than a decade ago, Google employees rejected the military use of the company's artificial intelligence. Today, Anthropic is no longer fighting over "if" the technology should be used, but over "how."

The conflict between Anthropic and the Pentagon has forced the tech industry to confront, once again, the question of how its products are used for war and which boundaries it will not cross. In a Silicon Valley that has shifted politically under Donald Trump and signed lucrative defense contracts, big tech's answer looks very different than it did just a few years ago.

Anthropic’s feud with the Trump administration intensified three days ago, when the company sued the Department of Defense, claiming the decision to exclude it from government contracts violates its First Amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic trying to stop its AI model from being used for mass domestic surveillance or lethal autonomous weapons.

Anthropic has argued that meeting the DoD's requirement that its technology be available for "any lawful use" would violate its founding safety principles and open the door to abuse, drawing an ethical boundary that others in the industry must decide whether to cross.

Although Anthropic's refusal to remove safeguards, and the Pentagon's response, have revived early concerns about the use of AI in conflict, the battle has also shown how much the rules have changed when it comes to big tech's ties to the military.

“If people are looking for a clear-cut distinction between good and evil, where good means not supporting war, they won’t find it here,” said Margaret Mitchell, AI researcher and chief ethics scientist at Hugging Face.

From anti-military protests to military contracts

Several factors have driven big tech’s acceptance of military work. The industry’s alliance with the Trump administration, including displays of loyalty from top CEOs, has tied it to the government’s push to expand military capabilities. The administration’s promise to overhaul federal agencies with AI has also created opportunities for AI companies to embed their products in government and military operations, securing revenue for years to come. In the background, concerns about China’s technological advance and rising defense spending worldwide have shifted attitudes across the industry.

But it hasn't always been this way. In 2018, thousands of Google employees protested against a drone imagery analysis program for the Department of Defense, called Project Maven.

“We believe Google should not engage in war,” more than 3,000 employees declared in an open letter. After the protests, Google did not renew Project Maven and published policies prohibiting the development of technologies that could “directly harm people.”

Since then, Google has restricted employee activism, removed language from its 2018 policies that prohibited the creation of weapons technologies, and signed numerous contracts that allow the military to use its products. In 2024, the company fired more than 50 employees who protested Google’s military ties to the Israeli government. CEO Sundar Pichai sent a memo saying that Google is a business, not a place to debate politics.

Recently, Google announced that it will offer its Gemini AI to the military, creating a platform for AI agents in unclassified projects.

Adapting the AI industry to the military

OpenAI also had a blanket ban on military use until 2024, but its chief product officer now serves as a colonel in the U.S. Army’s innovation corps. The startup, along with Google, Anthropic, and xAI, signed a contract last year worth up to $200 million with the DoD to integrate the technology into military systems. On the same day that Defense Secretary Pete Hegseth declared Anthropic a supply chain risk, OpenAI secured a deal for use in classified military systems.

More aggressive companies like Anduril and Palantir have made partnership with the DoD a cornerstone of their business and have tried to pull Silicon Valley policy toward their own perspective. Palantir has worked with military intelligence since the early 2010s, mapping explosive sites in Afghanistan. After Google exited Project Maven in 2019, Palantir took over the program, which now uses Anthropic’s Claude in a classified military system.

Anthropic and military ethics

Although Anthropic has received public praise for clashing with the Pentagon, CEO and co-founder Dario Amodei has emphasized that the company and the government are largely looking for the same things.

“Anthropic has far more in common with the War Department than differences,” Amodei wrote.

In a lengthy essay, he warned of dangers such as the creation of deadly biological weapons and abuses by China, but at the same time argued that companies should equip democratic governments and militaries with advanced AI to fight autocratic adversaries.

He has indicated that the biggest concern is not that AI will make it easier to kill people, but that the technology could be controlled by a few individuals with a "finger on the button" who could command an autonomous army of drones.

According to the company's lawsuit against the DoD, Anthropic does not impose the same restrictions on the military as on civilian users. Claude Gov can fulfill requests that would be prohibited for civilians, such as handling classified documents and threat analysis. The government has used Claude for target selection and analysis in air campaigns against Iran, a use Anthropic has not objected to.

Dario Amodei told CBS News that the company has no role in the military's operational decision-making, but supports American soldiers and is committed to providing technology to them.

“We have told the War Department that we agree with almost all of the use cases,” Amodei said. “Basically 98 or 99% of the uses they want to do, except for two cases.” /GazetaExpress/
