Futurism.com recently released an infographic (below) on the laws and ethics of autonomous vehicles, which is quite interesting (even if ‘laws’ and ‘ethics’ sound like very dusty subjects).
But I think you should pay attention to the “unequal valuation of lives” section.
Accident-prone Autonomous Vehicles
New technology has a way of presenting us with new ethical dilemmas. The internet has given us debates over privacy, the freedom of information and the limits of hate speech. Mechanisation has us arguing over job security and universal basic incomes. New medical advances raise questions of access, and whether the rich should really have a preferential right to health. Indirectly, those advances have also led to conversations about the right to die.
And autonomous vehicles will be no different.
In this particular case, programmers are going to have to hard-code some kind of algorithm for dealing with potential car accidents.
With human drivers, we leave those snap decisions up to fate. In the moments just before a collision, some combination of adrenalin and ignorance is given free rein to determine how the crash unfolds.
But when the vehicle is doing the driving for you, there is no such thing as leaving it to fate. Even if you program the car to completely disregard the possibility of collision, that’s still a pre-planned decision on the part of the programmer. And it would be a bad one, because you want the vehicle to avoid collisions wherever possible, and to minimise damage where avoidance isn’t.
The question is: where does that end? Because what is the most ‘efficient’ way to have an accident? What does ‘minimising damage’ mean?
Because I think we’re quite happy to say that, when choosing between material damage and the loss of human life, the vehicle should avoid the loss of human life. But what about choosing between lives, when the deaths of multiple people are statistically likely? Who should the vehicle ‘attempt’ to save at the expense of other lives?
Essentially, we’re now in the space of economic decision-making. Autonomous vehicles need to be pre-programmed to make value judgements about human lives. Each vehicle needs to know whether it should save the child over the old man; the CEO over the burger flipper; or the nursing mother over the sole breadwinner.
And in an age of big data, those vehicles will probably know everything there is to know. In the seconds before a crash, a vehicle could comb through a potential victim’s social media profiles, medical history and life story. And somewhere in the world, a programmer is going to pre-determine how those vehicles will deal with that knowledge.
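To make the abstract point concrete: the “crash algorithm” being described ultimately collapses to picking the manoeuvre with the lowest expected cost. Here is a deliberately toy sketch of that logic. Nothing here reflects how any real vehicle is programmed; the `Outcome` fields, the probabilities, and the single `value_of_life` constant are all my own hypothetical assumptions, and that constant is precisely the value judgement the article is worried about.

```python
# Toy illustration of crash-decision logic as expected-cost minimisation.
# All names, numbers and weights are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    probability_of_fatality: float  # estimated, per candidate manoeuvre
    lives_at_risk: int
    property_damage: float          # in some currency unit


def expected_harm(outcome: Outcome, value_of_life: float = 1_000_000.0) -> float:
    """Collapse an outcome into a single 'cost' number.

    The flat value_of_life constant treats all lives equally. A system
    that valued the CEO over the burger flipper would replace it with a
    per-person figure -- which is exactly the dilemma in question.
    """
    return (outcome.probability_of_fatality
            * outcome.lives_at_risk
            * value_of_life
            + outcome.property_damage)


def choose_manoeuvre(options: list[Outcome]) -> Outcome:
    # The pre-programmed 'decision' reduces to: pick the cheapest outcome.
    return min(options, key=expected_harm)


options = [
    Outcome("swerve into barrier", 0.10, 1, 50_000.0),
    Outcome("brake in lane",       0.30, 2, 5_000.0),
]
print(choose_manoeuvre(options).description)  # → swerve into barrier
```

Note that every “ethical” choice in this sketch hides inside the cost function: change the weighting of lives versus property, or of one life versus another, and the “correct” manoeuvre changes with it.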
It would be a strange thing to deal with: the loss of a loved one, who was killed in a car accident because an algorithm decided that their death would be the most cost-effective outcome for society.
Here’s the infographic:
Rolling Alpha posts opinions on finance, economics, and sometimes things that are only loosely related. Follow me on Twitter @RollingAlpha, and on Facebook at www.facebook.com/rollingalpha. Also, check out the RA podcast on iTunes: The Story of Money.