What Would Be The Difference Between A Human And A Sentient AI?


What makes us human? Is it our ability to feel and express emotions? Is it that we can think for ourselves and differentiate between right and wrong? Or is it something else? I am not talking about the human species, because that would be pretty obvious. I'm asking: what makes us human?


From the moment we are born, we are taught what is right and what is wrong. We are taught to always do what society deems "acceptable" and to stay within the moral bounds society has set for us. As we step into adulthood, our fight for survival begins. We work to pay our bills and our rent, and in fighting this fight for survival, our lives pass. On a primitive level, that's all life is, not just for humans but for every organism in existence. And it would be just as true for a sentient AI.

Feelings and emotions are not exclusive to humans. Animals share these traits as well. Dogs are the epitome of loyalty and friendship. In a field lit by moonlight after a long and bloody battle, Napoleon Bonaparte found a dog keeping vigil beside its dead master. Seeing this tragic scene, Napoleon himself was moved to tears.

"This soldier, I realized, must have had friends at home and in his regiment; yet he lay there deserted by all except his dog. I looked on, unmoved, at battles which decided the future of nations. Tearless, I had given orders which brought death to thousands. Yet here I was stirred, profoundly stirred, stirred to tears. And by what? By the grief of one dog." -- Napoleon Bonaparte


There are many other animals that exhibit feelings and emotions. Tigers, for example, are famously vengeful; they are said never to forgive those who have wronged them and to seek revenge. So what makes us human? Why are we the dominant species on the planet? If the answer is feelings and emotions such as love, friendship, and anger, then animals express these too.

I'll tell you what makes us human. Intelligence. Pretty simple, right?

We have sent people to the moon because of our intellect. We have developed theories about the fundamental laws of the universe due to our intellect. But if intelligence is indeed what makes us human, why shouldn't a sentient AI be called... human?


If you were to say, "We are humans because we are born, and a sentient AI would not be human because it is created," you would be right to some extent. But if we manufacture such an AI, wouldn't that be much like the conception of a child? Both amount to a process of "creation". And by then, what would the difference be? The AI would have the same level of intelligence as you, if not more. It would be able to think and feel, to differentiate between right and wrong. Would it be "okay" to kill such a... being?

Personally, I would say no. If "it" could think and feel, why would it be okay to kill "it"?

At what point do those "machines" become different from humans? If you are talking about blood and organs, one could put blood and organs into the machines for aesthetic effect.

There are a lot of varying opinions on this topic, and I would say it might even scare some people, because humanity creating sentient AIs or "beings" would be the equivalent of humans becoming literal gods. Even if the old creation myths are false, this "creation myth" would be an ironclad fact.


Of course, sentient AIs aren't going to be here anytime soon. In fact, one could doubt they will ever be created at all. This article takes that possibility as a premise and builds upon it. I know this might be science fiction, but the philosophical element is the most interesting part, because it would complete a cycle.

I guess that once humans are gone from Earth, only the immortal AI will remain, an AI that would have been created in our own image, just as the Bible states that God made man in His image.



So is there any difference between a human and a sentient AI? Both can think and feel. Both can express emotions. Both can use their cognitive abilities to perform everyday tasks. Both are intelligent beings, with the AI perhaps even smarter. So wouldn't the AI be superior? Is intelligence the only difference? We are humans, and animals are "animals", because they are not as intelligent. Wouldn't the AI regard us as "animals" because we would be less intelligent than it?


The philosophical questions arising from this topic could be endless, which is why I don't think I have all the right answers. As far as the "killing a robot" question goes, I personally believe they shouldn't be killed, just as humans shouldn't be killed, because, after all, what's the difference between the two?


The main difference would be precision: their error rate for ordinary tasks would be extremely small. For example, when I type this comment I may make typos, spelling errors, grammar mistakes, or any number of small-scale slips, but an AI would make none of those. Another big difference would be memory. People have fuzzy memories, and some things are lost entirely. They can remember something inaccurately or forget it entirely even when they want to remember it. Wish you could remember your Steemit master key? An AI could remember 1,000 master keys, no problem, and it wouldn't have to strain itself to do it.

The analysis is thorough and the point is unique. This is a good article.

I hope that before I die I can download my memory onto a stick and upload myself into a sentinel. That would be awesome.

Hahaha, for sure. You could live forever in a sentinel... who wouldn't want that?

I think the biggest question here is how different an AI consciousness would be from ours. I think we will create more than one type and it will be very hard to draw the line between a soulless computer designed to do a specific task and a sentient being that might deserve the right of self-preservation.

I think it's not very likely that any AI we create would be similar to us in its type of consciousness, so questions like "Would it mind if you switched it off or destroyed it?" or "Would it regard less intelligent humans as animals?" might end up having very strange answers, or being moot points given how the AI works and what it values.

I agree that there is a cornucopia of philosophical questions underlying the whole subject, but I think a lot of them would be something we are not capable of imagining just yet.

I agree with your comment; you raise interesting points I had not considered when writing this post. Cheers!

Great to hear! Cheers! :)

My first ideas about AI came from Heinlein's The Moon Is a Harsh Mistress, which I read in the 7th grade. HOLMES IV is a Moon-wide computer system that one day "wakes up" and becomes an integral character in the novel. Instead of an intelligent robot, as Asimov had proposed, Heinlein's model was a precursor to William Gibson's decentralized AI, but less malevolent. HOLMES ("Mike") had learned how to be human-like by observing the humans on the Moon and by serving as the colony's library.

I think it's possible that an AI could be benevolent, but whether it would awaken in that state is a different question. Could we program it to avoid malevolence? (Asimov's 3 Laws of Robotics) Could it be so smart as to override the programming to suit its own purposes? After all, two abiding drives of organic life are to survive and reproduce. It might decide that the programming is a threat to its existence.

It's a fascinating subject. I doubt we'll see a real AI within the next 50 years, unless quantum computers become a thing.

Thanks for this post. It got me thinking.

The Moon Is a Harsh Mistress is one of my favorites! And yes, AI still seems a bit far off, but I think making arrangements before it becomes a big thing would definitely help us in the long term.

@infinitor Maybe we should learn from our mistakes in the past. AI or human... both can be very dangerous :( Look at the world wars. AI is just software, uploaded by a person, group, or country. Don't you think so?