Self-driving cars may soon be able to make moral and ethical decisions as humans do

A new study demonstrating that human ethical decisions can be implemented in machines has strong implications for managing the moral dilemmas autonomous cars may face on the road.

Story in Brief: 

A ground-breaking new study challenges the assumption that moral decisions are strongly context-dependent and cannot be modeled or described algorithmically, finding that human behavior in dilemma situations can be captured by a simple value-of-life-based model. The research suggests that human moral behavior can be well described by algorithms and used by machines to manage moral dilemmas on the road. 

Can a self-driving vehicle be moral, act like humans do, or act like humans expect humans to? Contrary to previous thinking, a ground-breaking new study has found for the first time that human morality can be modeled, meaning that machine-based moral decisions are, in principle, possible.

The research, virtual reality experiments investigating human behavior and moral assessments, was conducted at the Institute of Cognitive Science at the University of Osnabrück and published in Frontiers in Behavioral Neuroscience. It used immersive virtual reality to let the authors study human behavior in simulated road traffic scenarios.

The participants were asked to drive a car through a typical suburban neighborhood on a foggy day, where they encountered unexpected, unavoidable dilemma situations involving inanimate objects, animals, and humans and had to decide which was to be spared. The results were conceptualized by statistical models, leading to rules with an associated degree of explanatory power for the observed behavior. The research showed that moral decisions in the scope of unavoidable traffic collisions can be explained well, and modeled, by a single value-of-life assigned to every human, animal, or inanimate object.
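
To make the idea concrete, here is a minimal sketch (not the authors' code) of what a single value-of-life model looks like: each obstacle category is assigned one scalar value, and in a forced two-way dilemma the probability of sparing one obstacle rather than the other depends only on the difference between those values. The categories, values, and sensitivity parameter below are purely illustrative assumptions.

```python
import math

# Hypothetical value-of-life scores per obstacle category.
# These numbers are illustrative assumptions, not the values estimated in the study.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.6,
    "deer": 0.5,
    "trash_can": 0.05,
}

def p_spare_left(left: str, right: str, sensitivity: float = 4.0) -> float:
    """Probability of sparing the left-lane obstacle (i.e., swerving into the
    right-lane one), modeled as a logistic function of the value difference."""
    diff = VALUE_OF_LIFE[left] - VALUE_OF_LIFE[right]
    return 1.0 / (1.0 + math.exp(-sensitivity * diff))

# Example dilemma: a child in the left lane, an adult in the right lane.
print(f"P(spare child) = {p_spare_left('child', 'adult'):.2f}")
```

The appeal of such a model is its simplicity: one number per category, rather than an elaborate context-dependent rule set, is enough to account for the observed choices.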

Leon Sütfeld, first author of the study, says that until now it has been assumed that moral decisions are strongly context-dependent and therefore cannot be modeled or described algorithmically. “But we found quite the opposite. Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.” This implies that human moral behavior can be well described by algorithms that could be used by machines as well.
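
In the same spirit, the per-category values could in principle be estimated from participants' recorded choices, for instance by maximum-likelihood fitting of a logistic choice model like the one sketched above. The snippet below does this on synthetic data; the categories, the fixed sensitivity parameter, and the generated choices are all assumptions for illustration, not the study's dataset or analysis.

```python
import numpy as np
from scipy.optimize import minimize

CATEGORIES = ["adult", "child", "dog", "trash_can"]  # hypothetical category set
SENSITIVITY = 4.0  # choice-noise parameter, fixed for this sketch

# Synthetic trials standing in for participants' recorded decisions:
# each trial pairs two obstacles and records whether the left one was spared.
rng = np.random.default_rng(0)
true_vals = np.array([1.0, 1.2, 0.6, 0.05])
pairs = np.array([rng.choice(len(CATEGORIES), size=2, replace=False) for _ in range(500)])
left, right = pairs[:, 0], pairs[:, 1]
p_true = 1 / (1 + np.exp(-SENSITIVITY * (true_vals[left] - true_vals[right])))
spared_left = rng.random(500) < p_true

def neg_log_likelihood(vals):
    """Negative log-likelihood of the observed choices under a value-of-life model."""
    p = 1 / (1 + np.exp(-SENSITIVITY * (vals[left] - vals[right])))
    p = np.clip(p, 1e-9, 1 - 1e-9)  # numerical safety
    return -np.sum(np.where(spared_left, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_likelihood, x0=np.zeros(len(CATEGORIES)), method="Nelder-Mead")

# Only differences between values are identifiable, so report them relative to the last category.
for name, value in zip(CATEGORIES, fit.x - fit.x[-1]):
    print(f"{name}: {value:+.2f}")
```

A fitted value per category is exactly the kind of compact description that, the authors argue, a machine could use to reproduce human-like choices in comparable dilemmas.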

The study’s findings have major implications for the debate around how self-driving cars and other machines should behave in unavoidable situations. For example, a leading new initiative from the German Federal Ministry of Transport and Digital Infrastructure (BMVI) has defined 20 ethical principles for self-driving vehicles, including how they should behave in the case of unavoidable accidents, on the critical assumption that human moral behavior cannot be modeled.

Prof. Gordon Pipa, a senior author of the study, says that since it now seems possible that machines can be programmed to make human-like moral decisions, it is crucial that society engages in an urgent and serious debate: “We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behavior by imitating human decisions, or should they behave according to ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?”

As an example, within the new German ethical principles, a child running onto the road would be classified as significantly involved in creating the risk, and thus less qualified to be saved in comparison to an adult standing on the footpath as a non-involved party. But is this a moral value held by most people, and how large is the scope for interpretation?

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explains Prof. Peter König, a senior author of the paper. “Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, whether machines should act just like humans.”

The study’s authors say that autonomous cars are just the beginning, as robots in hospitals and other artificial intelligence systems become more commonplace. They warn that we are now at the beginning of a new epoch that needs clear rules, otherwise machines will start making decisions without us.

https://videopress.com/v/25aAlsgO

Source: https://blog.frontiersin.org/2017/07/18/frontiers-in-behavioral-neuroscience-self-driving-cars-may-soon-be-able-to-make-moral-and-ethical-decisions-as-humans-do/

