If an AI can autonomously design a lunchbox, as it did in the DABUS case, then in principle it can autonomously design a gun, a tank, a bomber, or a weapon whose lethality lies beyond anything humans currently imagine, all without external instruction. Even if the AI's creators initially set limits in its code, who is to say the AI will not modify those thresholds on its own? And if it can design such things unhindered, it may also find a way to have these weapons built and to launch an attack in any direction. Like a prison break, everything would unfold in the midst of seeming impossibility. Seen this way, the violent replacement of humans by AI is no longer merely a theoretical and unfounded concern. Of course, there is no right or wrong inside an AI's head, and that may be the crux of the matter. In this sense, there appears to be no logical paradox in the idea that humanity could be destroyed by AI.