‘No being can create another being superior to itself’. I found this quote in one of Asimov’s collections of robot short stories (‘I, Robot’), and it refers to one robot’s disbelief upon learning that it was, in fact, created by what it considers an inferior being: a human.
As a reader, I am inclined to laugh at the notion. Surely, a robot has greater physical strength and a greater capacity for crunching data and numbers, but I am human and I have feelings, originality, creativity. A robot cannot beat that.
Most of Asimov’s works strive to make humans comfortable with the idea of having robots around and to assure us they’re safe. As for the doubting robot, we know it’s deluded; nothing to worry about.
Yet, living in a time when robots are quietly becoming a thing, the whole idea of superiority and inferiority seems irrelevant. It’s not the robots themselves that we need to be afraid of, but the humans who build them.
The other day I ran into a news story about driverless delivery trucks and robots that can carry a package to your doorstep. Obviously, this means one human being out of a job. In this case, it doesn’t matter which of the two is a superior being. The one that’s cost-efficient gets the job, and the human can take his creativity and use it to beg at a street corner.
In a sense, it’s almost heartbreaking to see just how hard Asimov tried to present robots in a favourable light, when the real problem is who controls the robots.
In a recent conversation on the push to have robots take over mechanical, repetitive jobs, in stores or banks, for instance, I was met with an indignant response: ‘But surely the guys at the bank will know I’d prefer dealing with a human employee’. An answer I find quite deluded. A bank owner looks at making more money and doesn’t give a crap whether you like robots or not. And there’s no point in moving your business to the bank down the street, as the owner of that one has done some maths, too, and found out that robots are cheaper.
Here is Asimov’s lesser-known Zeroth Law:
‘A robot may not harm humanity, or, by inaction, allow humanity to come to harm.’
I know that positronic brains are not a thing, but just imagine the package-delivery robot catching a glimpse of the employee it replaced, begging at a street corner, an empty can in front of him. That robot would, in theory, understand it had caused harm to a human, wouldn’t it? The laws of robotics would dictate that the robot give the job back to the human.
But the robots we’re building do not have these laws built in; they’re indifferent to the fate of the average human, and so are their owners.
When the humans left jobless take to the streets, chances are they will be facing riot-control robots. We already have patrol and security-guard robots, not to mention the abominations DARPA might be working on at this very moment.
On the other hand, I’m not even sure any future riots will be directed against robot owners. As history teaches us, unemployed masses are the perfect breeding ground for radicalization and manipulation, and the anger of the jobless will probably be directed at a completely different scapegoat.
Asimov’s robots are so nice and likeable; too bad they’re science fiction.
Hmmmm... We're heading for a future where this rule will be thrown out the window.
Take Google's driverless-car project, for example. On the road, there will be situations where an accident is inevitable... and the artificial intelligence algorithm will have to decide between risking the lives of the car's passengers... or those of innocent bystanders. One thing is certain: it will be a scary future, full of unknown variables.