Amir Tenizbayev
How do you code ethics into autonomous automobiles? And who is responsible when things go awry? Imagine a runaway trolley barreling down on five people standing on the tracks up ahead. You can pull a lever to divert the trolley onto a different set of tracks where only one person is standing. Is the moral choice to do nothing and let the five people die? Or should you hit the switch and therefore actively participate in a different person’s death?
Aug 28, 2015 4:44 AM
I would like to contribute a different angle to this debate. One proposed safety feature is that the driver can take over from the automated driving system when a potential problem arises. However, many visually impaired and blind people are eagerly awaiting the successful development and launch of the Google Car (or whichever company's vehicle reaches the market first), because it would grant them an element of unprecedented freedom, removing their reliance on carers, family, or friends to drive them around or run errands. Because of the obvious sight limitations, a blind driver could not take over control of the vehicle. So who is responsible in the event of an accident: the manufacturer, because the software handled the situation insufficiently, or the blind driver, because they were incapable of wresting control from the computer?