By Joshua Lee
Self-driving technologies can already be found in thousands of cars on the streets today, but as we move closer to a world where driverless cars rule the road, a serious moral debate remains over how a driverless car should respond in complex life-or-death scenarios.
The moral dilemma is centred on the philosophical ‘trolley problem’. In this thought experiment, a runaway tram is heading towards five people on the tracks. On another track there is only one person. Should you switch the tracks and kill one person, or do nothing and kill five? On the one hand, you save four lives by switching tracks; on the other, you are actively deciding to kill.
While this is only hypothetical, understanding the ‘trolley problem’ is important because it applies to real-world scenarios a driverless car may encounter: if a pedestrian suddenly steps into the road, should the car swerve to avoid hitting the pedestrian, thereby hitting a tree and killing the passengers inside? Or should the car continue forward, killing the pedestrian? What if the pedestrian were a child? Would that change the decision the car makes?
Here a driverless car would be forced to decide whom to save and whom to kill, and that decision has to be programmed by us in advance. Unless we program the car to make what we consider a ‘moral’ decision, it may simply do nothing.
To understand how the public believes cars should handle such a scenario, the dilemma was put to nearly 2,000 people as part of a study published in the journal Science. The study found that respondents supported a ‘utilitarian’ decision in which the cars would sacrifice their own passengers for the ‘greater good’. But the same respondents also said they would be unwilling to travel in a car that was not programmed to prioritise passenger safety.
Some car makers are already adopting this protective approach to their driverless cars. Mercedes-Benz’s manager of driver-assistance systems, Christoph von Hugo, revealed that the company’s autonomous cars will seek to protect their occupants, whatever the cost.
“If you know you can save at least one person, at least save that one. Save the one in the car,” said von Hugo in an interview with the magazine Car and Driver.
While it remains to be seen whether other car makers will follow Mercedes’ lead, the engineers developing Google’s autonomous car do not seem as concerned about the issues raised by the ‘trolley problem’.
“Even if we did see a scenario like that, usually that would mean you made a mistake a couple of seconds earlier,” said Andrew Chatham, a principal engineer on the project, in an interview with The Guardian. “My goal is to prevent us from getting in that situation, because that implies that we screwed up.”
While there is still no consensus on how to resolve this moral dilemma, driverless cars are expected to prevent countless accidents and save thousands of lives, and many argue that delaying the introduction of the technology any further would therefore be unethical.