The Problem With Ethical Autonomy

Can We Program Self-Driving Cars To Make Ethical Choices?

Our conception of the future is often produced by a quixotic mix of the fictional and the real. Science fiction authors hand down a perspective of the future warped by a desire to reflect, and to change, the present and consequently the future. Intertwined with our reading is our own movement away from the present. Weaving together science fiction and the individual creates a future that is marked by fabulous technology but inhabited by ourselves. The childish flights of fancy we build into our memory are held against two paradoxical standards when we examine the future we actually live in: the imaginative conflation outlined above and the technological progress achievable in our lifetime. We are thus simultaneously impressed and disappointed by our technological circumstance. When Captain James T. Kirk first flipped open his communicator to request transport back to the Enterprise, audiences were taken by the fantasy. Kirk was able to talk wirelessly, and instantly, to his crew in orbit, as well as be “beamed” almost anywhere. Fifty years later, cellphones are a pervasive, almost cultural, force in society, yet we still lament that we cannot “beam up,” that we are still very much terrestrial beings. Though the technologies we now have access to might retroactively seem to be the “obvious” technologies humanity would pursue first, this ascribed logic of technological advance clouds our sight. When a technology seems to be of our imagined future, it is worthy of extra consideration.

In an op-ed for the Pittsburgh Post-Gazette, President Barack Obama outlined an initiative by his administration to provide car manufacturers with a sixteen-point compliance list for the safety of “autonomous vehicles.” Vehicle autonomy, often referred to as “self-driving cars,” has appeared in the consciousness of government agencies seemingly overnight, though the technology for autonomous vehicles has existed for some time.[1] After Tesla’s “Autopilot” program was involved in a fatal car accident, articles began to appear concerning various aspects of these vehicles.[2] The fallout from Tesla and the apparent blessing of the United States government spurred a wave of technological analysis. Yet economic and legal discourse, and even the President’s own initiative, sidestep the ethical issues at the foundation of vehicle autonomy.

At the very heart of self-driving cars is the programming that allows them to function. Such “autonomy” is, in a philosophical sense, a bit of a misnomer. The kind of “autonomy” that exists inside these cars is not that of a rational, conscious actor but that of a machine following a set of pre-programmed commands. Running on algorithms, the cars take inputs and pass them through a series of rules to produce an output. The “autonomy” of these self-driving cars is an illusion. Their “freedom” consists of programmed reactions to specific contexts; the cars have no freedom to act outside the bounds of their rules. They are designed to appear autonomous even though it is their programming that dictates their actions.
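To make that point concrete, consider a deliberately simplified, hypothetical sketch of the kind of rule-following logic described above. The sensor fields, thresholds, and actions below are invented for illustration and are not drawn from any manufacturer’s system; the point is only that inputs flow through fixed rules to an output.

```python
from dataclasses import dataclass

@dataclass
class SensorInput:
    obstacle_ahead: bool
    obstacle_distance_m: float
    speed_mps: float

def choose_action(reading: SensorInput) -> str:
    """Map a sensor reading to an action using fixed, pre-programmed rules."""
    if not reading.obstacle_ahead:
        return "maintain_speed"
    # Rough stopping-distance estimate (illustrative only, not a real vehicle model):
    # v^2 / (2a), assuming about 7 m/s^2 of braking deceleration.
    stopping_distance_m = reading.speed_mps ** 2 / (2 * 7.0)
    if reading.obstacle_distance_m > stopping_distance_m:
        return "brake"
    return "swerve"  # the rule set, not the car, "chooses" this

# Example: an obstacle 30 m ahead at 20 m/s -- the rules say "brake".
print(choose_action(SensorInput(obstacle_ahead=True, obstacle_distance_m=30.0, speed_mps=20.0)))
```

Nothing in such a program deliberates; it only executes whatever rules its programmers wrote in advance.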

In a car accident, it is the driver who is the rational actor in the decision-making process, and it is the driver who bears the moral culpability for the results. The United States is a vehicle nation, containing almost as many cars as people.[3] Millions of Americans commute to work every day by automobile, on interstates and highways across the country. As the President notes, roads are dangerous: thirty-five thousand people died in automobile accidents last year alone. He goes on to say that self-driving cars will reduce the number of traffic fatalities.[4] However, it is not a stretch of the imagination to consider a situation in which a car accident will inevitably result in death. How would we program a self-driving car to react in a scenario where, through the actions of the car, the end result would be fatal? What if a truck ahead stops short, and swerving in either direction would send the car into a motorcyclist, while braking would leave the driver crushed against the truck? Do we program autonomous vehicles to always protect the driver? Or to minimize total harm? Such a scenario has produced two competing headlines: “Is your car programmed to kill you?” and “Is your car programmed to kill?” It appears that in allowing self-driving cars we are at least confirming the latter: they will, in some way, be programmed to kill. In a conventional car accident the onus is on the driver, who makes the ultimate decision, to swerve or to brake, and is thus held responsible. But who bears the responsibility when an autonomous vehicle crashes?
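The choice between those two policies ultimately becomes a few lines of code that someone must write before the accident ever happens. The hypothetical sketch below makes that explicit; the actions, harm scores, and policy names are invented for illustration and do not describe how any real vehicle is programmed.

```python
# Each action maps to (expected harm to the occupant, expected harm to others),
# on an arbitrary 0-to-1 scale. All numbers are invented for illustration.
OUTCOMES = {
    "brake":        (0.9, 0.0),  # occupant crushed against the truck
    "swerve_left":  (0.1, 1.0),  # car strikes a motorcyclist
    "swerve_right": (0.1, 1.0),  # car strikes a motorcyclist
}

def protect_occupant(outcomes):
    """Policy A: always minimize harm to the person inside the car."""
    return min(outcomes, key=lambda action: outcomes[action][0])

def minimize_total_harm(outcomes):
    """Policy B: minimize the sum of harm to everyone involved."""
    return min(outcomes, key=lambda action: sum(outcomes[action]))

print(protect_occupant(OUTCOMES))     # "swerve_left" -- the motorcyclist bears the cost
print(minimize_total_harm(OUTCOMES))  # "brake" -- the occupant bears the cost
```

Whichever function the manufacturer ships, the moral decision has already been made by a programmer, long before the car ever encounters the truck.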

While Captain Kirk and Mr. Spock were guided by the hand of the Prime Directive, in the field of vehicle autonomy there is no overarching principle. With companies like Google and Uber already pushing to deploy this technology, and the government’s most recent position being one of encouragement, it appears our haste to reach the future has superseded any reflection on the technology’s ethical implications.[5] Improvement and innovation can mask the ethical challenges of new technology, and it remains to be seen how the problems of autonomous vehicles will be approached.

[1] http://www.post-gazette.com/opinion/Op-Ed/2016/09/19/Barack-Obama-Self-driving-yes-but-also-safe/stories/201609200027

[2] http://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s

[3] http://www.latimes.com/business/autos/la-fi-hy-ihs-automotive-average-age-car-20140609-story.html

[5] The 116 page DOT report can be reviewed here: https://www.transportation.gov/AV
