RE: Killer Robots: Artificial Intelligence and Human Extinction


in anarchism •  8 years ago  (edited)

Good point, but truthfully I'm not sure.

It gets very tricky. Remember, an AI will take its values to their logical conclusions. If we tell it to preserve all life, a rational AI may conclude that humans are very dangerous to other life forms, which we are. Conversely, imagine an AI told to "generally preserve" humans. The result could be an AI that over-protects humans from any risk whatsoever.

But also, just look at your wording: "generally" and "preservation".

What cases does "generally" not cover? And what exactly are we trying to preserve? Life? OK, here come the over-protective robots. Fine, let's give humans some freedom. But now there is an obvious grey area in the definition.

Please look up Bostrom's paperclip maximizer.
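If it helps to make the point concrete, here's a tiny, purely hypothetical sketch (the actions and scores are made up, not from Bostrom): a maximizer optimizes only what is written into its objective, so anything left out of it, like human welfare, simply doesn't count.

```python
# Purely hypothetical toy example: a maximizer optimizes ONLY the terms
# written into its objective and ignores everything left out of it.

# Each candidate action: (description, paperclips_made, human_welfare)
actions = [
    ("run one small paperclip factory",          1_000,       0),
    ("convert all farmland into factories",      50_000,    -100),
    ("convert the entire planet to paperclips",  9_999_999, -10_000),
]

def objective(action):
    """Counts only paperclips; human welfare never appears in the objective."""
    _, paperclips, _ = action
    return paperclips

best = max(actions, key=objective)
print(best[0])  # -> "convert the entire planet to paperclips"
```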

I hope this is making some bit of sense :)


Thank you @nettijoe96, I'll Google Bostrom's paperclip maximizer.