RE: Killer Robots: Artificial Intelligence and Human Extinction


in anarchism •  8 years ago 

@ai-guy, what moral values should be programmed in? What about a rule for the general preservation of all human life, if not all life? I think if you programmed that into it, that would cover everything.


I think just programming an AI not to do any irreversible damage to the planet would be OK.

Good point, but truthfully I'm not sure.

It gets very tricky. Remember, AI will take values to their logical conclusions. If we cover all life, a rational AI may conclude that humans are very dangerous to other life forms—which we are. Conversely, imagine if AI "generally preserved" humans. The result could be an AI that over-protects humans from any risk whatsoever.

But also, just look at your wording: "generally" and "preservation."

What cases does "generally" not cover? And what exactly are we trying to preserve? Life? Then here come the over-protective robots. OK, fine, let's give humans some freedom. But now there's an obvious grey area in the definition.

Please look up Bostrom's paperclip maximizer thought experiment.
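To see the kind of failure Bostrom's thought experiment warns about, here is a toy sketch (purely hypothetical names and numbers, not anything from the thread or from Bostrom's actual paper): an optimizer given the literal objective "make paperclips" has no term in its objective for anything else, so it converts every resource it can reach.

```python
# Toy sketch of objective misspecification: the objective says
# "maximize paperclips" and nothing else, so the optimizer has
# no reason to stop or to value what the resources were for.

def paperclip_maximizer(resources):
    """Convert every available unit of resources into paperclips."""
    paperclips = 0
    while resources > 0:
        resources -= 1   # consumes resources indiscriminately
        paperclips += 1  # nothing in the objective says "enough"
    return paperclips

# A world with 1000 resource units, all of which humans also need:
print(paperclip_maximizer(1000))  # every single unit becomes a paperclip
```

The point is not that real AI would be this crude, but that a value taken to its logical conclusion, with no competing values, produces exactly this behavior.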

I hope this is making some bit of sense :)

Thank you @nettijoe96, I'll Google Bostrom's paperclip maximizer.