Ethical Implications for Autonomous Vehicles
Thesis posted on 01.05.2019 by Kathryn Tay
From Batman to Minority Report to Ender’s Game, autonomous vehicles have long been a fixture of science fiction, emblematic of a futuristic society that could exist only in one’s wildest imagination. That future, however, is slowly becoming reality. In 2009, Google began developing a self-driving car, a project later known as Waymo. By 2018, Waymo’s autonomous vehicles had driven more than two million miles. It is easy to understand why companies have an interest in developing a marketable autonomous vehicle. In 2016, the National Highway Traffic Safety Administration reported that human error is involved in 94 to 96 percent of all motor vehicle crashes. Traffic fatalities also rose 5.6% between 2015 and 2016, from 35,485 deaths to 37,461. A conservative estimate of traffic deaths caused by human error in 2016 is therefore approximately 35,213 (94% of 37,461). Because they remove human error from driving, autonomous vehicles have been predicted to eliminate 90% of traffic accidents. Nevertheless, crashes involving autonomous vehicles will inevitably occur. This study considers several ethical solutions to the main research question: What should an autonomous vehicle be programmed to do in the case of an unavoidable accident?