
Should Google’s “Driverless” Cars Require Code of Ethics Training?


Remember back in elementary school, when the teacher had you draw what you thought life would be like in 15 years? People living on Mars and flying automobiles dominated my adolescent thoughts. Fast forward 14 years, and my ten-year-old dreams have become a reality… sort of. While there aren’t people living on Mars (yet!) or flying cars (much to my dismay), we are getting close to the age of driverless cars, something else once thought impossible.

According to the New York Times, Google has been working on a fleet of 100 experimental electric-powered vehicles that lack the standard controls found in modern automobiles. A driver will still sit in the vehicle, but the only controls available are a red “e-stop” button for panic stops and a separate start button. The car could be summoned with a smartphone application, pick up a passenger and automatically drive to a destination selected on the app without any human intervention. Think of it like Uber, only with a cheerful Prius as your driver rather than a human.

The idea of self-driving cars becoming a normal part of society still feels too far-fetched to fully fathom, so I’ll save those thoughts for another blog. Here, I’d like to further the conversation about the ethics behind such a prototype and why the users (you can hardly call them drivers) of “driverless” cars and other future autonomous cars are going to require code of ethics training. Furthermore, the car itself might need to be programmed with a basic code of ethics, which presents some interesting dilemmas.

Moral Dilemma: The Tunnel Problem

In an article for WIRED, Jason Millar argues that people should have a say in their robot car’s code of ethics, given that robot cars are the wave of the future. He offers a thought experiment similar to the “trolley problem” (a sudden flashback of my crazy philosophy teacher in college comes to mind), which he calls “The Tunnel Problem.” He explains:

You are travelling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child errantly runs into the road and trips in the centre of the lane, effectively blocking the entrance to the tunnel. The car is unable to brake in time to avoid a crash. It has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. Now ask yourself, who should decide whether the car goes straight or swerves? Manufacturers? Users? Legislators?

What would you say?

Millar suggests that an individual’s moral commitments could determine whether the car goes straight or swerves. It’s easy to see how a user, manufacturer or legislator would face such a scenario and have to determine the most ethical course of action based on their own personal code of ethics.

Millar does suggest a solution to the problem: adopt the same approach that medical professionals do, the process of informed consent. In healthcare, when choices that carry heavy ethical or moral weight must be made, it is standard practice for nurses and doctors to inform patients of the treatment options, side effects and other associated risks, and let patients make their own decisions. The same approach can be applied to the engineering of driverless cars. The idea is that designers and engineers would not be able to make moral decisions on users’ behalf, like the one above, without first getting users’ explicit consent.
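Loosely speaking, the informed-consent model means the car consults a preference the user declared in advance rather than a value silently hard-coded by the manufacturer. A purely hypothetical sketch of that idea in Python (the names and the fallback behavior here are my own illustration, not anything from Google or Millar, and certainly not a real control system):

```python
from enum import Enum
from typing import Optional


class TunnelChoice(Enum):
    """The two outcomes in Millar's tunnel problem."""
    GO_STRAIGHT = "continue (endangers the pedestrian)"
    SWERVE = "swerve (endangers the occupant)"


def resolve_tunnel_dilemma(user_preference: Optional[TunnelChoice]) -> TunnelChoice:
    """Consult the occupant's pre-declared preference, if any.

    Under informed consent, the manufacturer does not silently decide:
    if the user never made an explicit choice, the system must fall back
    to some default -- and *who* sets that default is exactly the open
    question (manufacturers? users? legislators?).
    """
    if user_preference is not None:
        # The user gave explicit consent to this outcome in advance.
        return user_preference
    # Hypothetical fallback: a manufacturer- or legislator-set default.
    return TunnelChoice.SWERVE


# A user who consented to swerving has that choice honored:
print(resolve_tunnel_dilemma(TunnelChoice.SWERVE).name)  # prints SWERVE
# A user who never chose falls through to the default:
print(resolve_tunnel_dilemma(None).name)
```

The point of the sketch is not the trivial lookup but where the moral weight lands: every branch of that `if` is a policy decision someone had to make before the car ever left the lot.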

Whether or not this rethinking of robot liability comes into play, what concerns me more is this: if Google or other autonomous car manufacturers do let users, rather than manufacturers or legislators, make the moral decisions, how can “we” (the other people on the road, or those involved in designing and engineering driverless cars) be sure that users will make the most ethical decision? The situation is particularly hairy because it comes down to the age-old question of whether one person’s life can be more valuable than another’s. In the tunnel problem, for example, my coworker may fervently believe the child should be saved, but what if she were “driving” the car with her own two children in it? What if her husband were driving the car, even alone? Such complex decisions are clearly difficult to make at the individual level.

When and if autonomous cars do become mainstream, companies will need to include a code of ethics training program in the operating manual that educates users on the new ethical choices this type of driving opens them up to, and on how to handle situations like those above.

Is It Time To Include Code of Ethics Training In Car Manuals?

A code of ethics is a guiding document for an organization: a place where a company can declare its principles, values and ethical operating commitments. Every manually operated car today comes with a manual that provides detailed information about warranties, safety and maintenance. Companies aiming to break into the autonomous car industry should be required to provide, alongside that manual, a code of ethics for users that outlines the company’s values and policies.

After the policies are in place, companies should include a code of ethics training program designed to educate users on what the principles, values and commitments outlined in the code of ethics mean in practice. I imagine that by the time driverless cars become commonplace, it will go without saying that this training should engage our technologically savvy users with interactive activities and exercises, completed on their smartphones (or Google Glass, or whatever happens to exist at that point), that present the real-world ethical dilemmas users of driverless cars could face. This way we educate users first on how to face decisions like the one presented in the tunnel problem.

Before we can even entertain the idea of an app controlling the direction of our cars, companies need to first consider the ethics involved in such an invention and include training for users alongside the manual to really drive the message home.

I will leave you with Millar’s reasoning on why society must embrace complexity such as this:

Robots, and the ethical issues they raise, are immensely complex. But they require our thoughtful attention if we are to shift our thinking about the ethics of design and engineering, and respond to the burgeoning robotics industry appropriately. Part of this shift in thinking will require us to embrace moral and legal complexity where complexity is required. Unfortunately, bringing order to the chaos does not always result in a simpler world.

Let me know what you think in the comments below.

