Isaac Asimov's "Three Laws of Robotics": "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law." This would apply to AI as well, and it wouldn't be bad for human beings either. But we are not hard-coded with this.
RE: If We're in a Simulation, Could Studying Morality and A.I. Improve Your Life?
Those laws were designed to demonstrate how flawed they are; it makes for good science fiction. Those who are seriously working on this problem now don't take those laws seriously, from what I've seen. Give Nick Bostrom's book Superintelligence a read to understand more about the topic. It's fascinating stuff.