Yes, the ethics of driverless cars are complicated. Image credit: Iyad Rahwan
In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch, published an iconic thought experiment illustrating what would forever after be known as ‘the trolley problem’: a family of problems that probe our intuitions about whether it is permissible to kill one person to save many.
The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.
The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track and will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.
Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Jarvis Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, killing him in the process. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers, or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)
The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker, or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it is also known, results in the fewest deaths. The ‘deontological’ choice, on the other hand, where the morality of the act itself matters most, obliges the driver not to redirect the trolley, because the act itself would be immoral, despite the larger number of resulting deaths. The same reasoning applies to not pushing the person from the bridge, again despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie: in acting, or in not acting?
The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (‘double effect’), may still be moral as long as it promotes some good end (as, let’s say, saving five lives rather than just the one).
Empirical research has shown that most people find it easier to choose to redirect the runaway trolley toward the one worker (a utilitarian basis), whereas they feel strong visceral unease at pushing a person off the bridge (a deontological basis). Although both acts involve intentionally killing one person rather than five, it seems less morally offensive to impersonally pull a lever to redirect the trolley than to place hands on a person and push him off the bridge, sacrificing him for the good of the many.
In a similar practical spirit, neuroscience has connected these reactions to regions of the brain, pointing to their neuronal bases, by scanning subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing the loss of life, resulted in more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with the attendant imagery and emotions, resulted in greater activity in the amygdala. Follow-on studies have shown similar responses.
So, let’s now fast forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry everyday driving functions with increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, above all by reducing human error (estimated to account for some ninety percent of accidents), while also easing congestion, improving fuel mileage, and polluting less.
As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.
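To make the question concrete, here is a deliberately minimal, purely illustrative sketch of how a society-chosen ranking might be expressed as configuration that a car’s planning software consults. Nothing below describes any manufacturer’s actual system; the names (EthicsPolicy, Outcome, choose_manoeuvre) and the weighting scheme are assumptions for illustration only.

```python
# A toy 'ethics policy' for an autonomous vehicle: illustrative only.
# All names and the weighting scheme are hypothetical, not any real system.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Outcome:
    """One candidate manoeuvre and the people it is predicted to endanger."""
    manoeuvre: str
    endangered: Dict[str, int]   # e.g. {"pedestrian": 1, "passenger": 0}


@dataclass
class EthicsPolicy:
    """Society-supplied weights: a higher weight means greater reluctance to harm."""
    weights: Dict[str, float]

    def harm(self, outcome: Outcome) -> float:
        # Weighted count of the people each manoeuvre endangers.
        return sum(self.weights.get(group, 1.0) * count
                   for group, count in outcome.endangered.items())


def choose_manoeuvre(policy: EthicsPolicy, options: List[Outcome]) -> Outcome:
    """Pick the candidate manoeuvre with the lowest weighted predicted harm."""
    return min(options, key=policy.harm)


if __name__ == "__main__":
    # A purely utilitarian policy: every life weighted equally.
    utilitarian = EthicsPolicy(weights={"pedestrian": 1.0, "passenger": 1.0})
    options = [
        Outcome("stay_in_lane", {"pedestrian": 5}),
        Outcome("swerve_left", {"pedestrian": 1}),
    ]
    print(choose_manoeuvre(utilitarian, options).manoeuvre)  # swerve_left
```

Even in so small a sketch, the hard part is plainly not the code but the weights: deciding, and publicly justifying, whose harm counts for how much.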
Thought experiments like this have gained new traction in our techno-centric world, with the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars. They remain vital tools for addressing conflicting moral priorities as we venture into the longer-range future.
The runaway trolley has captivated people's imagination for a long time, I suspect because it applies to all of life. Just thinking ...
Ethics is so often assumed to be a single scenario: a trolley on a single track, without the side track. Already, virtually every decision we make has another possible track -- yet we switch such tracks out of circuit. Business interests exclude environmental health, political dogma represses human rights, the scientific method marginalises externalities, and so on. And so we focus on business ethics, political aspirations, scientific rigour -- and autonomous vehicles' path -- to the exclusion of other tracks. Audi already defines the issue of autonomous vehicles as one of 'risk' -- to minimise risks up front, as I understand it. Is that the removal of the side track? Is it a moral decision?
Our cartoonist Youngjin Kang may suggest another approach, in 'Cost of Crime'. There may be 'exemptive licenses' which lead to some people's lives being more highly valued than others. If you can afford an exemptive license -- yet even today, if you can afford a safer car -- you drive more safely. http://www.philosophical-investigations.org/2016/07/doublethink-21-cost-of-crime.html
I was just reading a presentation by Brendan Larvor, of the philosophy department at Huddersfield University in the UK, or should we say, the former UK (unfortunate acronym notwithstanding). He argues that "Utilitarianism and theodicies [meaning ethical theories that involve religious belief] both have a problem with justice. Utilitarianism recommends public execution of an unpopular innocent if there is enough popular demand; theodicies offset pain (shared by all sentient life, ever) against spiritual benefits that accrue only to humans" and concludes that it is "essentially the same point". However, I don't really agree, because isn't religion built not on "spiritual benefits that accrue only to humans" but on spiritual benefits that accrue only to non-beings, souls or gods! Justice is what is in God's interests, and human interests are of no significance.
Okay, how's that fit in with the post? Well, only that on utilitarian grounds, surely self-driving cars can be made acceptable. They run over fewer people than human drivers, etc. But from a human perspective, isn't there a difference to being run over by a person-by-accident and run over by a machine-on-purpose?
“There may be ‘exemptive licenses’ which lead to some people’s lives being more highly valued than others.” True enough, Thomas — though perhaps not to the original, hyperbolic intent of ‘exemptive licenses’.
I suggest it’s not a stretch to say that societies have always operated on the assumption that ‘some people’s lives [are] more highly valued than others’.
The assumption is reflected in the deep and growing inequalities in vital services and resources in all corners of the globe.
Aren’t the lives of the poorest, by implication, typically valued less than the lives of the richest, with the result that many such lives are consigned to being ‘nasty, brutish, and short’?
It seems to me that everyday examples abound, such as poorer people’s access to starkly inadequate health care services, barely competent legal services, and scanty food and shelter — with life-and-death consequences all the time.
I wouldn’t say such examples are equivalent to the dystopian ‘exemptive licenses’ that I recall Youngjin was envisioning in his cartoon, but I propose these and other examples do mirror the routinized devaluing of certain lives.
‘[F]rom a human perspective, isn’t there a difference to being run over by a person-by-accident and run over by a machine-on-purpose?’ You make an excellent point, Martin. Here’s my take . . .
By virtue of transitioning toward self-driving cars, developers will build into the cars’ AI-enabled ‘brain’ a decision tree that represents all possible decisions (as best we can discern them), locking in how the cars will behave in various circumstances. After all, the human ‘driver’ will no longer really be the ‘driver’ as such, in any traditional definition of the word; he or she may own the self-driving car, but all decisions about the car’s behavior will be baked into the car’s decision tree rather than made by someone gripping a steering wheel. The ‘drivers’, or perhaps more accurately the cars’ owners passively riding around, will have ‘contracted out’ to automakers, in accordance with society’s regulations, whether cars end up killing one or five people in the case of unavoidable, fatal collisions with people.
Key here, I suggest, is that if society says, for reasons of ethics, that self-driving cars won’t be programmed to make those utilitarian decisions at the design and build stages, then society is merely deferring to the default: to kill whoever happens to be in the cars’ path, whether one or five people. The important point is that ‘no decision’ is, for all practical purposes, still a ‘decision’. And by extension, by deferring to the default (whether that happens to result in the death of one or five people), society ensures that the outcome will always and unavoidably amount to people being run over by a ‘machine-on-purpose’.
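To put that in a programmer’s terms: even if regulators forbid an explicit utilitarian calculation, whatever the software does when no such rule applies is itself behaviour someone coded and shipped. A toy sketch, purely illustrative and with hypothetical names (Scenario, collision_response), makes the point:

```python
# Illustrative only: the 'default' branch is itself a programmed decision.
# The names here (Scenario, collision_response) are hypothetical; no real
# automaker's software or API is being described.
from dataclasses import dataclass


@dataclass
class Scenario:
    people_in_path: int          # people harmed if the car holds its course
    people_on_alternative: int   # people harmed if the car swerves


def collision_response(scenario: Scenario, utilitarian_logic_enabled: bool) -> str:
    """Return the manoeuvre the car executes in an unavoidable collision."""
    if utilitarian_logic_enabled:
        # Explicit choice: minimise the number of people harmed.
        if scenario.people_on_alternative < scenario.people_in_path:
            return "swerve"
        return "hold_course"
    # The 'no decision' case: brake and hold course, whoever is in the path.
    # This default is still a decision, fixed at design time.
    return "hold_course"


if __name__ == "__main__":
    s = Scenario(people_in_path=5, people_on_alternative=1)
    print(collision_response(s, utilitarian_logic_enabled=False))  # hold_course
    print(collision_response(s, utilitarian_logic_enabled=True))   # swerve
```

Either way, the branch the car takes was written, tested, and shipped in advance; declining to choose simply relocates the choice to the default.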
Yes, I agree, Keith, the 'no decision' decision may have to be part of the programming! Sort of paradoxical, isn't it?