Friday, March 03, 2017

The AI geniuses are going to figure it out for us. Phew…

I’m not holding my breath waiting for the arrival of the singularity – the point where some computer, chock full of AI, starts feeling its oats, and runaway technology takes over and starts doing things like manipulating elections, screwing with the stock market, wiping out the grid, and goading all the self-driving vehicles into playing bumper cars. Or whatever pops into its artificially intelligent mind. There’ll be no holding it back, because the computer will have gotten so much smarter and more generally competent than we humans are.

The singular awfulness of the singularity may or may not occur in my lifetime. But if it does, I hope it instructs whatever intelligent device is sitting on my nightstand, or embedded in my forearm, to do the singularity equivalent of blowing a poisoned, instantly killing dart into my carotid artery. Kind of like the blowguns the “natives” were armed with in Tarzan movies. Because while I know plenty of computers that are smarter than people – my phone is more intelligent (EQ and IQ) than half the ninnies on “reality” TV, let alone a goodly proportion of those commenting on Boston Globe articles – I don’t want to be around when computers surpass all of us when it comes to sentience, empathy, and wit.

But that’s just me.

Fortunately, I can pack up all my singularity-related cares and woe, now that the big mahoffs of the AI world (40 of them, anyway) have gotten together at an event called “Envisioning and Addressing Adverse AI Outcomes” to figure things out.

Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it. (Source: Bloomberg)

Given that this is the 21st century, the event did a bit of gamifying, asking participants to submit entries describing (within reason and plausible short-term technology) the worst thing that could happen if they – I mean it – takes over. The winning ideas were discussed by panels of experts, who debated what would be the best thing to do if their worst-case scenario happened.

I have to say that this sounds like a nerdfest par excellence. Almost makes me wish I were an AI expert…

Artificial kidding aside, there are some pretty nasty AI possibilities out there.

The possibility of intelligent, automated cyber attacks is the one that most worries John Launchbury, who directs one of the offices at the U.S. Defense Advanced Research Projects Agency, and Kathleen Fisher, chairwoman of the computer science department at Tufts University, who led that session. What happens if someone constructs a cyber weapon designed to hide itself and evade all attempts to dismantle it? Now imagine it spreads beyond its intended target to the broader internet. Think Stuxnet, the computer virus created to attack the Iranian nuclear program that got out into the wild, but stealthier and more autonomous.

"We're talking about malware on steroids that is AI-enabled," said Fisher, who is an expert in programming languages. Fisher presented her scenario under a slide bearing the words "What could possibly go wrong?" which could have also served as a tagline for the whole event.

Malware on steroids? And here I was worrying about the phishing expedition that followed my recent encounter with UPS’s chat support.

Anyway, I will sleep more easily tonight knowing that there are a bunch of AI geniuses who are going to figure it all out before it’s too late.
