Big Tech’s push into military AI is troubling

The writer is programme director of the Institute for Global Affairs at Eurasia Group

When OpenAI and Mattel announced a partnership earlier this month, there was an implicit recognition of the risks. The first toys powered by artificial intelligence would not be for children under 13.

Another partnership last week came with seemingly fewer caveats. OpenAI separately revealed that it had won its first Pentagon contract. It would pilot a $200mn programme to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” according to the US Department of Defense. 

That a major tech company could launch military work with so little public scrutiny epitomises a shift. The national security application of everyday apps has in effect become a given. Armed with narratives about how they’ve supercharged Israel and Ukraine in their wars, some tech companies have framed this as the new patriotism, without having a conversation about whether it should be happening in the first place, let alone how to ensure that ethics and safety are prioritised.

Silicon Valley and the Pentagon have always been intertwined, but this is OpenAI’s first step into military contracting. The company has been building a national security team with alumni of the Biden administration, and only last year did it quietly remove a ban on using its apps for such things as weapons development and “warfare.” By the end of 2024, OpenAI had partnered with Anduril, the Maga-aligned mega-startup headed by Palmer Luckey.

Big Tech has changed dramatically since 2018, when Google staffers protested against a secret Pentagon effort called Project Maven over ethical concerns, which led the tech giant to let the contract expire. Now, Google has reversed course.

Google Cloud is collaborating with Lockheed Martin on generative AI. Meta, too, changed its policies so that the military can use its Llama AI models. Big Tech stalwarts Amazon and Microsoft are all in. And Anthropic has partnered with Palantir to bring Claude to the US military.

It’s easy to imagine AI’s advantages here, but what’s missing from public view is a conversation about the risks. It’s now well-documented that AI sometimes hallucinates, or takes on a life of its own. On a more structural level, consumer tech may not be secure enough for national security uses, experts have warned.

Many Americans and western Europeans share this scepticism. My organisation’s recent survey of the US, UK, France and Germany found that majorities support stricter regulations when it comes to military AI. People worry it could be weaponised by adversaries — or used by their own governments to surveil citizens.

Respondents were offered eight statements, half emphasising AI’s benefits to their country’s military and half emphasising the risks. In the UK, less than half (43 per cent) said that AI would help their country’s military improve its workflow, while a large majority (80 per cent) said that these new technologies needed to be more regulated to protect people’s rights and freedoms.

Using AI for war could, at its most extreme, mean entrusting questions of life and death to a flawed algorithm. And that’s already happening in the Middle East.

The Israeli news outlet +972 Magazine has investigated Israel’s military AI in its targeting of Hamas leaders in Gaza and reported that “thousands of Palestinians — most of them women and children or people who were not involved in the fighting — were wiped out by Israeli air strikes, especially during the first weeks of the war, because of the AI program’s decisions”.

The US military, for its part, has used AI for selecting targets in the Middle East, but a senior Pentagon official told Bloomberg last year that it wasn’t reliable enough to act on its own.

An open conversation about what it means for tech giants to work with militaries is overdue. As Miles Brundage, a former OpenAI researcher, has warned: “AI companies should be more transparent than they currently are about which national security, law enforcement and immigration related use cases they do and don’t support, with which countries/agencies, and how they enforce these rules.”

At a time of war and instability around the world, the public is clamouring for a conversation about what it really means for the military to use AI. It deserves some answers.
