Well, Elon Musk has been in the news quite a bit lately.
There was the crash and burn of the SpaceX Falcon rocket, which made a spectacularly unsuccessful landing – talk about rockets’ red glare – on its return from a Peapod run to launch a payload of supplies to the International Space Station. (SpaceX is Musk’s civilian space exploration enterprise.)
Then came the news that Musk plans to be the Al Gore of an Internet designed for outer space.
Not to mention his donation of $10 million to the Future of Life Institute to help protect the world from artificial intelligence (AI):
In the past, Musk has warned that AI could be "potentially more dangerous than nukes." Recently, he and a long list of researchers signed an open letter asking for "robust and beneficial" AI research that would be mindful of future consequences to humans. (Source: NBC News)
And now he has put his money – or a teeny-tiny fraction of it – where his mouth is.
The $10 million will be given out as research grants. "You could certainly construct scenarios where the human race does not recover," Musk said. "When the risk is that severe, it seems like you should be proactive and not reactive."
Although it’s local – headed up by an MIT professor – I wasn’t familiar with the Future of Life Institute. But I’m all in favor of any group “working to mitigate existential risks facing humanity.”
Personally, I would have picked climate change, rogue viruses, mad scientists creating half-human/half-musk-ox hybrids, or our increasing general reliance on technology to “run” everything (such that only survivalists will know where water comes from, how to stay warm, and how to navigate by reading a map) as my existential risks. Maybe this latter fear of mine intersects with what the Future of Life Institute is onto. After all, they have very big brains, and understand that AI is more than Amazon suggesting what books you might like. So if they want to focus on the risky business of “human-level artificial intelligence,” well, have at it.
Musk isn't the only famous scientific mind worried about killer machines. In December, astrophysicist Stephen Hawking told the BBC that the "development of full artificial intelligence could spell the end of the human race."
Well, that settles things. Any fear of Stephen Hawking is a fear of mine.
All this reminds me of a second-order encounter with AI that I had a few decades back. (I was going to write that it was 20 years ago, but I think it was more like 30. Time flies, even if you’re not in a rocket ship.)
Anyway, a number of my colleagues – one of them remains one of my closest friends – were bailing out to join an upstart AI company.
What this company was doing was creating a Robby the Robot that would combine a LISP machine with a finance textbook (written by a Sloan School professor) and spit out optimal capital investment recommendations. At least that’s what I recall the company of geniuses was going to do. It’s been a while. (I did a quick Google search and found that there is a Harvard Business School case study on this company, which I’m going to try to get my hands on.)
I was eventually invited to interview with this outfit. At least that’s what I recall. I may be ego-shielding here. I may well have been pounding on their doors begging them to take me. After all, they were only hiring people with big and mighty brains, and what was more crave-worthy than joining that sort of brigade?
Anyway, I went through a number of interviews with everyone from the founders to my would-be manager (who was, in fact, an old-be manager of mine) to my likely peers.
Along the way, I asked a few questions, mostly about whether there would actually be an audience willing to pay a million bucks for a black box that would “make” a decision for them – a decision currently made, for free, via a combination of spreadsheet, back of the napkin, and gut sense. I also poked around a bit at my suspicion that business people would likely look at the output of that costly black box and yay or nay its advice based on the back of the napkin and a gut check.
I thought that, during the interview process, I was amply demonstrating that I had a big and mighty brain – big and mighty enough to make it there.
Apparently, most of what I was demonstrating was that I just didn’t get it.
I was let down gently.
I would not be getting an offer because it was felt that I “was not yet ready to leave my present company.”
Naturally, I was crushed.
And, just as naturally, I was gleeful and felt supremely vindicated when this company – which I kept tabs on through my friend – spent a couple of years going through all sorts of contortions before imploding. (Note to self: you really must get your hands on that HBS case.)
So, having been personally saved from AI, I laud Elon Musk and the Future of Life Institute for recognizing that it can pose an existential threat.
Oh, that Elon Musk. He is really something.
PayPal. Tesla. All that space stuff. Saving us from AI. Not to mention a name straight out of James Bond or Batman. It is good to be Elon Musk. And it is probably good to have Elon Musk as well.
P.S. Special thanks to my friend Valerie – who has a very big and mighty (non-artificial) intelligence – for pointing this article out to me. Seriously, it takes a village to keep track of everything that Elon Musk is up to.