Pentagon to use AI on classified systems after tech deals

WASHINGTON — The Pentagon said Friday that it has reached deals with seven tech companies to use their artificial intelligence in its classified computer networks, allowing the military to tap into AI-powered capabilities to help it fight wars.

Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX will provide their resources to help “augment warfighter decision-making in complex operational environments,” the Defense Department said.

Notably absent from the list is AI company Anthropic, after its public dispute and legal fight with the Trump administration over the ethics and safety of AI usage in war.

The Defense Department has been rapidly accelerating its use of AI in recent years. The technology can help the military reduce the time it takes to identify and strike targets on the battlefield, while aiding in the organization of weapons maintenance and supply lines, according to a report in March from the Brennan Center for Justice.

But AI has already raised concerns that its use could invade Americans’ privacy or allow machines to choose targets on the battlefield. One of the companies contracting with the Pentagon said its agreement required human oversight in certain situations.

Concerns about military use of AI arose during Israel’s war against militants in Gaza and Lebanon, with U.S. tech giants quietly empowering Israel to track targets. But the number of civilians killed also soared, fueling fears that these tools contributed to the deaths of innocent people.

Questions about military use of AI being worked out

The Pentagon’s latest contracts come at a time of anxiety about the potential for over-reliance on the technology on the battlefield, said Helen Toner, interim executive director at Georgetown University’s Center for Security and Emerging Technology.

“A lot of modern warfare is based on people sitting in command centers behind monitors, making complicated decisions about confusing, fast-moving situations,” said Toner, a former board member of OpenAI.

“AI systems can be helpful in terms of summarizing information or looking at surveillance feeds and trying to identify potential targets.”

But questions about the appropriate levels of human involvement, risk and training are still being worked out, she said.

“How do you roll out these tools rapidly for them to be effective and provide strategic advantage,” Toner asked, “while also recognizing that you need to train the operators and make sure they know how to use them and don’t over-trust them?”

Such concerns were raised by Anthropic. The tech company said it wanted assurances in its contract that the military would not use its technology in fully autonomous weapons and the surveillance of Americans. Defense Secretary Pete Hegseth said the company must allow for any uses the Pentagon deemed lawful.

Anthropic sued after President Donald Trump, a Republican, tried to stop all federal agencies from using the company’s chatbot Claude and Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.

OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic’s Claude with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that its new contract is that same agreement.
