Remember back in elementary school, when the teacher had you draw what you thought life would be like in 15 years? People living on Mars and flying automobiles dominated my adolescent thoughts. Fast forward 14 years, and my ten-year-old dreams have become a reality… sort of. While there aren’t people living on Mars (yet!) or flying vehicles (much to my dismay), we are getting close to the age of driverless cars, something that was also once thought impossible.
According to the New York Times, Google has been working on a fleet of 100 experimental electric-powered vehicles that do away with all the standard controls found in modern automobiles. A person will still sit in the vehicle, but the only things that person will be able to control are a red “e-stop” button for panic stops and a separate start button. The car could be summoned with a smartphone application, pick up a passenger, and automatically drive to a destination selected on the app without any human intervention. Think of it like Uber, only with a cheerful Prius as your driver rather than a human.
The idea of self-driving cars becoming a normal part of society still feels too far-fetched to fully fathom, so I’ll have to save my thoughts on that for another blog. Here, I’d like to further the conversation about the ethics behind such a prototype and why the users (you can hardly call them drivers) of “driverless” cars and other future autonomous cars are going to require code of ethics training. The car itself might even need to be programmed with a basic code of ethics, which presents some interesting dilemmas.
Moral Dilemma: The Tunnel Problem
In an article for WIRED, Jason Millar writes about how people should have a say in their robot car’s code of ethics, given that robot cars are the new wave of the future. He provides a thought experiment similar to the “trolley problem” (a sudden flashback of my crazy philosophy teacher in college comes to mind), which he calls “The Tunnel Problem”: your autonomous car is fast approaching a narrow tunnel when a child trips and falls in its path, and the car has only two options — to paraphrase Millar, it can continue straight and hit the child, or swerve into the wall and kill you, the passenger.
What would you say?
Millar suggests that an individual’s moral commitments could make the difference between whether the car goes straight or swerves. It’s easy to see how a user, manufacturer, or legislator could face such a scenario and have to determine the most ethical course of action based on their own personal code of ethics.
Millar does suggest a solution to the problem: adopt the same approach that medical professionals do, the process of informed consent. In healthcare, when choices that carry a heavy ethical or moral weight must be made, it is standard practice for nurses and doctors to inform patients of the treatment options, side effects, and other associated risks, and to let patients make their own decisions. This same informed-consent approach can be applied to the engineering of driverless cars. The idea is that designers and engineers would not be able to make moral decisions on users’ behalf, as in the problem above, without first getting users’ explicit consent.
Whether this new idea of rethinking robot liability comes into play or not, what concerns me more is this: if Google or other autonomous car manufacturers do allow users, rather than manufacturers or legislators, to make the moral decisions, then how can “we” (the other beings on the road, or the people involved in the driverless engineering and design process) be sure that users are going to make the most ethical decision? The situation is particularly hairy because it comes down to the age-old question of whether one person’s life can be more valuable than another’s. In the tunnel problem, for example, my coworker may fervently believe that the child should be saved, but what if she were “driving” the car and her own two children were in it with her? What if her husband were driving the car, even alone? It’s clearly difficult to make such complex decisions at the individual level.

When and if autonomous cars do become mainstream, companies are going to need to include a code of ethics training program in the operating manual that educates users on the new ethical choices this type of driving opens them up to, and on how to deal with situations like those above.
Is It Time To Include Code of Ethics Training In Car Manuals?
A code of ethics is a guiding document for an organization: it is the place where a company declares its principles, values, and ethical operating commitments. Every manually operated car today comes with a manual that provides detailed information about warranties, safety instructions, and maintenance. Companies aiming to break into the autonomous car industry should be required to provide, alongside the manual, a code of ethics for users that outlines the company’s values and policies.
After the policies are in place, companies should include a code of ethics training program designed to educate users on what the principles, values, and commitments outlined in the code of ethics mean. I imagine that by the time driverless cars become commonplace, it will go without saying that this training should engage our technologically advanced users with interactive activities and exercises, accessible on their smartphones (or Google Glass, or whatever happens to exist at that point), that present real-world ethical dilemmas users of driverless cars could face. This way we educate users first on how to face decisions like the one presented in the tunnel problem.
Before we can even entertain the idea of an app controlling the direction of our cars, companies need to first consider the ethics involved in such an invention and include training for users alongside the manual to really drive the message home.
I will leave you with Millar’s reasoning on why society must embrace complexity such as this:
Let me know what you think in the comments below.
For More Information About Code of Ethics Training, Check Out These Resources:
- Blog: Uber, Lyft Stop The Childish Shenanigans – Go Sit In Time Out And Reflect On Your Code of Ethics Training
- Blog: Nature or Nurture? Use Code of Ethics Training, Annual Culture Assessments to Nurture Ethical Culture
- Blog: 5 Elements Your Code of Ethics Training Should Address
Whitepaper | Code of Conduct: The Foundation of Your FCPA Compliance Program
In this whitepaper, lawyer and FCPA expert Tom Fox looks at how to build and maintain a Code of Conduct, through documentation, training, and regular updating, that can drive ethical business and reduce your FCPA liability.