
Weaponised AI is a clear and present danger

By Cori Crider
Jul 21, 2018 05:53 PM IST

The US has admitted that its drones attack targets whose identities are unknown. That’s where AI comes in. The US doesn’t have deep human intelligence sources in Yemen, so it relies heavily on massive sweeps of signals data. AI processes this data – and throws up red flags in a targeting algorithm.

Warnings about the risks posed by artificial intelligence seem to be everywhere nowadays. From Elon Musk to Henry Kissinger, people are sounding the alarm that super-smart computers could wipe us out, like in the film “The Terminator.” To hear them talk, you’d think we were on the brink of dystopia – that Skynet is nearly upon us.


These warnings matter, but they gloss over a more urgent problem: weaponised AI is already here. As you read this, powerful interests – from corporations to state agencies such as the military and police – are using AI to monitor people, assess them, and make consequential decisions about their lives. Should we have a treaty ban on autonomous weapons? Absolutely. But we don’t need to take humans “out of the loop” to do damage. Faulty algorithmic processing has been hurting poor and vulnerable communities for years.


I first noticed how data-driven targeting could go wrong five years ago, in Yemen. I was in the capital, Sana’a, interviewing survivors of an American drone attack that had killed innocent people. Two of the civilians who died could have been US allies. One was the village policeman, and the other was an imam who’d preached against al-Qaeda days before the strike. One of the men’s surviving relatives, an engineer called Faisal bin Ali Jaber, came to me with a simple question: Why were his loved ones targeted?

Faisal and I travelled 7,000 miles from the Arabian Peninsula to Washington looking for answers. White House officials met Faisal, but no one would explain why his family got caught in the crosshairs.

In time, the truth became clear. Faisal’s relatives died because they got mistakenly caught up in a semi-automated targeting matrix.

We know this because the US has admitted that its drones attack targets whose identities are unknown. That’s where AI comes in. The US doesn’t have deep human intelligence sources in Yemen, so it relies heavily on massive sweeps of signals data. AI processes this data – and throws up red flags in a targeting algorithm. A human fired the missiles, but almost certainly did so on the software’s recommendation.

These kinds of attacks, called “signature strikes,” make up the majority of drone strikes. Meanwhile, civilian airstrike deaths have become more numerous under President Donald Trump – over 6,000 last year in Iraq and Syria alone.

This is AI at its most controversial. And the controversy spilled over to Google this spring, with thousands of the company’s employees protesting – and some resigning – over a bid to help the Defence Department analyse drone feeds. But this isn’t the only potential abuse of AI we need to consider.

Journalists have started exploring many problematic uses of AI: predictive policing heatmaps have amplified racial bias in our criminal justice system. Facial recognition, which the police are currently testing in cities such as London, has been wrong as often as 98% of the time. Shop online? You may be paying more than your neighbour because of discriminatory pricing. And we’ve all heard how state actors have exploited Facebook’s News Feed to put propaganda on the screens of millions.

Academics sometimes say that the field of AI and machine learning is in its adolescence. If that’s the case, it’s an adolescent we’ve given the power to influence our news, to hire and fire people, and even kill them.

For human rights advocates and concerned citizens, investigating and controlling these uses of AI is one of the most urgent issues we face. Every time we hear of a data-driven policy decision, we should ask ourselves: who is using the software? Who are they targeting? Who stands to gain – and who to lose? And how do we hold the people who use these tools, as well as the people who built them, to account?

Cori Crider, a US lawyer, investigates the national security state and the ethics of technology in intelligence. She is a former director of international human rights organisation Reprieve.

© Project Syndicate 2018
