Killing should not be easy


Should machines be allowed to make life and death decisions? With technologies already up to the task, this is a pressing question, but not an easy one.

Although there is strong opposition from the scientific community, the force seems to be on the proponents' side. Not only do weapon manufacturers hold virtually unlimited resources and enjoy the backing of their governments, they have pretty strong arguments on their side as well. At least at first glance, that is.

Arguments for and against autonomous weapons


The resistance is natural, as killing machines go against our basic instincts. We are frightened by the image of machines that can kill us — without feelings, without a chance to read them, predict them, or negotiate with them. It is a combination of helplessness and fear of the unknown. People put this into words by saying that the decision to kill people should be left to people, for people are restrained by compassion and human goodness. Allowing machines to kill would mean more deaths, as these restraints would not exist.

But the proponents argue that allowing machines to make the decisions would actually lead to fewer deaths, and especially eliminate the unintended ones. Machines are more accurate and effective. The main reason, though, is the same one the opponents use — machines have no emotions. No anger, no killing sprees, no hatred. Machines will not kill anyone they are not supposed to kill. These arguments are correct: autonomous weapons would indeed make killing more accurate and safer. But the proponents are wrong about the consequences.

Why is it wrong?


Making killing more accurate and safe means making it easier, and that is not a good thing. Today, ordering a kill strike carries a lot of risk and responsibility. The decision makers need to think twice before accepting the risk of the mission not going perfectly right — having to carry the weight of civilian deaths, having to sweep them under the rug, or worse, being exposed in the media. Because of these risks and occasional accidents, strikes are questioned — by the public, by the decision makers, and by those who pull the trigger and have to live with it.

On the other hand, imagine that ordering a kill carries no risk whatsoever. The public is already convinced that nothing "bad" (i.e. no unintended deaths) can happen, the decision makers are free of the civilian-death nightmare, and those pulling the trigger feel nothing at all — they are machines. Targeted killings would become a simple, effortless routine, an easy universal solution used in many situations where it was unthinkable before. Because of the general perception that such strikes are safe and moral, there would be no interest from the public and journalists anymore, no scrutiny, no raised eyebrows. The result would not be increased safety, as the proponents claim, but widespread abuse of targeted automated killing to remove whoever is inconvenient. Because, why not, when it is so easy?

So while the arguments for autonomous killing machines are safety and fewer unintended casualties, the actual result will be a large increase in intentional casualties, with the accidental deaths of civilian bystanders being replaced by the intended deaths of inconvenient ones.

Therefore, killing should not be easy, and autonomous weapons are not a good thing.
