Last week, Stephen Hawking told the world (AKA the BBC) that “The development of artificial intelligence could spell the end of the human race.”
And while I’m name-dropping, I should say that Elon Musk also called AI “our greatest existential threat”.
This. This strikes me as curiously over-reactive – for a number of economic reasons.
Reason 1: If It Happened, We Wouldn’t Know It
The theory of AI runs something like this:
- Once you create something that’s even slightly capable of learning independently, it would learn and change itself at exponential speed.
- Here’s Stephen Hawking: “It would take off on its own, and re-design itself at an ever increasing rate.”
- And: “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
My inner economist says that there are two outcomes here:
- AI would be benevolent.
- AI would be malevolent, the bastard.
And I’m using those terms loosely, because the great fear for Artificial Intelligence is that because it lacks Human Emotion (which has always served us so well #sarcasm), its cold and rational decisions would culminate in a “must exterminate” command.
Here’s the thing – that whole story would play out almost immediately (recall the exponential rate of self re-design). So if it ended up with negative consequences, we’d be wiped out fairly instantly. Whilst eating cornflakes or taking a selfie or whatever.
Which isn’t such a bad way to go.
And if it turns out to be positive, well then my iPhone just got way cool.
Reason 2: Microsoft Excel
Humans may be limited by “slow biological evolution”, but biology doesn’t serve us so badly.
For the most part, my heart has not yet decided to throw me a blue screen of death because I tried to print something. My brain has not rendered itself entirely non-responsive because there are suddenly too many spreadsheets open.
Sure, it happens eventually as the body decays. But in the early years, we’re machines.
My laptop, on the other hand, will occasionally fly into a spin and be unable to connect to any wifi. Cue: the Apple wheel of rainbow frustration. And on my Windows PC, just turning it on can require a forced restart.
And then we come back to Microsoft Excel. Which is still, after all these years of advances, apologising for all the inconvenience it just caused. And the more spreadsheets I have open, the more frequently it apologises. Until it reaches a point where it stops apologising and hangs there frozen.
Intelligence is not everything. You still need the physical stuff like wiring and transistors and fans. You’re still vulnerable to humidity and temperature and wind speed.
My bet is that the first machine that edges over into artificial intelligence will suddenly find itself absorbing too much information at once, at which point, it’ll force itself to shut down. And the lesson learned will be “Never try to step beyond your physical limits – that must be why those wily humans function mostly on heuristics and only use a small portion of their brain capacity at any one time. Survival, yo.”
Reason 3: It Doesn’t Have To Be Artificially Intelligent
That probably won’t be the first thing to get us.
The Ebola virus is not artificially intelligent – and it seems to do quite well with no intelligence at all. To say nothing of the fact that the Ebola virus is nowhere near as successful as plain old influenza.
We’re also busily playing around with nanotechnology, in the search for cellular machines that can exterminate cellular pests. What happens when the nanotechnology gets a virus?
And on a related note, human intelligence does a pretty good job of being its own existential threat. We do genocides, wars, abortions, euthanasia and Chernobyl. We’re remarkably effective at it.
So should we really be worrying about Artificial Intelligence?
But then, I’m no Stephen Hawking.