Last week, Stephen Hawking told the world (AKA the BBC) that “The development of artificial intelligence could spell the end of the human race.”
And while I’m name-dropping, I should say that Elon Musk also called AI “our greatest existential threat”.
This. This strikes me as curiously over-reactive – for a number of economic reasons.
Reason 1: If It Happened, We Wouldn’t Know It
The doomsday theory of AI runs something like this:
- Once you created something that’s even slightly capable of learning independently, it would learn and change itself at exponential speed (there’s a toy sketch of this just below).
- Here’s Stephen Hawking: “It would take off on its own, and re-design itself at an ever increasing rate.”
- And: “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
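To put some toy numbers on “exponential” and “ever increasing rate”, here’s a quick back-of-the-envelope sketch in Python. To be clear, the specific assumptions (capability doubling each redesign cycle, each cycle taking half as long as the last, a 24-hour first cycle) are mine for illustration, not Hawking’s:

```python
# Toy illustration of "re-design itself at an ever increasing rate".
# Assumptions (illustrative, not from the article): capability doubles
# every self-redesign cycle, and each cycle takes half as long as the
# previous one, starting from a made-up 24-hour first cycle.

capability = 1.0    # arbitrary starting "intelligence" units
cycle_time = 24.0   # hours for the first self-redesign (made up)
elapsed = 0.0

for cycle in range(1, 11):
    capability *= 2           # doubles each cycle
    elapsed += cycle_time
    cycle_time /= 2           # each redesign finishes faster than the last
    print(f"cycle {cycle}: capability x{capability:.0f}, {elapsed:.2f}h elapsed")

# After 10 cycles: ~1000x the starting capability, in under 48 hours total.
# The point: the cycle times form a converging series (24 + 12 + 6 + ... < 48),
# so under these assumptions the whole story plays out "almost immediately".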
My inner economist says that there are two outcomes here:
- AI would be benevolent.
- AI would be malevolent, the bastard.
And I’m using those terms loosely, because the great fear about Artificial Intelligence is that, lacking Human Emotion (which has always served us so well #sarcasm), its cold and rational decision-making would culminate in a “must exterminate” command.
Here’s the thing – that whole story would play out almost immediately (recall the exponential rate of self-redesign). So if it ended up with negative consequences, we’d be wiped out fairly instantly. Whilst eating cornflakes or taking a selfie or whatever.
Which isn’t such a bad way to go.
And if it turns out to be positive, well then my iPhone just got way cool.
Reason 2: Microsoft Excel
Humans may be limited by “slow biological evolution”, but biology doesn’t serve us so badly.
For the most part, my heart has not yet decided to throw me a blue screen of death because I tried to print something. My brain has not rendered itself entirely non-responsive because there are suddenly too many spreadsheets open.
Sure, it happens eventually as the body decays. But in the early years, we’re machines.
My laptop, on the other hand, will occasionally fly into a spin and find itself unable to connect to any wifi. Cue: the Apple wheel of rainbow frustration. And on my Windows PC, just turning it on can require a forced restart.
And then we come back to Microsoft Excel. Which is still, after all these years of advances, apologising for all the inconvenience it just caused. And the more spreadsheets I have open, the more frequently it apologises. Until it reaches a point where it stops apologising and hangs there frozen.
Intelligence is not everything. You still need the physical stuff like wiring and transistors and fans. You’re still vulnerable to humidity and temperature and wind speed.
My bet is that the first machine that edges over into artificial intelligence will suddenly find itself absorbing too much information at once, at which point, it’ll force itself to shut down. And the lesson learned will be “Never try to step beyond your physical limits – that must be why those wily humans function mostly on heuristics and only use a small portion of their brain capacity at any one time. Survival, yo.”
Reason 3: It Doesn’t Have To Be Artificially Intelligent
Artificial intelligence?
That probably won’t be the first thing to get us.
The Ebola virus is not artificially intelligent – and it seems to do quite well with no intelligence at all. To say nothing of the fact that the Ebola virus is nowhere near as successful as plain old influenza.
We’re also busily playing around with nanotechnology, in the search for cellular machines that can exterminate cellular pests. What happens when the nanotechnology gets a virus?
And on a related note, human intelligence does a pretty good job of being its own existential threat. We do genocides, wars, abortions, euthanasia and Chernobyl. We’re remarkably effective at it.
So worrying about Artificial Intelligence?
Seems crazy.
But then, I’m no Stephen Hawking.
Rolling Alpha posts opinions on finance, economics, and the corporate life in general. Follow me on Twitter @RollingAlpha, and on Facebook at www.facebook.com/rollingalpha.
Comments
Kosta December 9, 2014 at 09:28
I agree with your final point – worrying about AI is pointless.
But this is a deep topic (and one which I’m quite passionate about), so here be a disclaimer before my diatribe: I know you were barely scratching the surface, but allow me to share my thoughts.
Regarding your Reason 1 – “if it happened we wouldn’t know it” – personally I suspect that it is already happening as we speak, but contrary to the blink-of-an-eye scenario, it is a slow and organic process (and when I say organic I mean gradual and incremental, as opposed to carbon-based). For example, I’ve been tracking the “evolution” of war robots for the past couple of years (ever since The Economist published this awesome article back in 2012: http://www.economist.com/node/21556234), and whilst the risks discussed in that article are pragmatic (and certainly possible), the extinction-level event you speak of is by no means going to happen in the blink of an eye.

Of course, the counter-argument is that whilst this initial evolution of AI and robotics is gradual, once the tipping point is reached (which many argue to be “sentience”), all hell will break loose. But will that happen so fast that we won’t stand a chance, or perhaps not even realise it? I doubt it; but who knows.

Interestingly, Hawking’s and Musk’s visions of the future are closely aligned with that of the Wachowskis (creators of The Matrix). Whilst most of us have seen The Matrix, not everyone has seen The Animatrix – a series of animations that serves as a prequel to the Hollywood blockbuster, expanding on The Matrix storyline. Two of those episodes in particular, “The Second Renaissance” parts 1 and 2, paint an easily believable picture of how AI domination isn’t something that happens overnight, but is rather a drawn-out process that plagues us inferior mortals for quite some time.
(On a side note, these two episodes use some lovely themes around Revelation symbolism, geo-political and socio-economic implications, and the role of global finance. One could even argue that the Wachowskis are modern-day prophets, akin to their biblical predecessors in the Old Testament, using a modern communications medium to share a rather dystopian vision of the future.)
Regarding Reason 2 – “if it happened, it wouldn’t be terminal, because technology (with all its faults) would be its own worst enemy” – this point is, in my humble opinion, moot, purely given the redundancy of distributed networks and the ongoing migration of computation and storage to the modern-day cloud. It’s the age-old Artificial Intelligence question: if machines became sentient, where would “their brain” reside? And the simple answer is that it wouldn’t reside in any one place. Both the technological and philosophical implications of this are covered rather nicely in the 1995 Japanese animation Ghost In The Shell. And if you find that interesting, I highly recommend reading the manga upon which the animated film is based.
I agree with Reason 3 completely. And on the point of Musk, as much as I love what he is doing with SpaceX, at the rate they’re progressing I worry that we’re going to figure out how to colonise space before we figure out how to live sustainably on this one planet we call Earth. Because if we don’t crack sustainability, then we’re either going to die here before we manage to colonise space, or we’re going to make it into space and simply exploit it for its resources ad infinitum. Personally I think an overhaul of our socio-economic and monetary system is required for sustainability to happen (and perhaps this overhaul is already gradually taking place, as alluded to in the “Productivity Paradox” conversations currently happening between thought leaders). But that’s a separate can of worms altogether.
Parting thought: I’m a big fan of the fact that you consider this topic relevant to financial literacy. The two things I read up on the most are finance and AI, and I’ve always believed they share a deep bond. Keep ’em coming Jayson.