I don’t think you can have truly sentient AI until the machine can feel pain, and this is why: in humans, mirror neurons have evolved to let us feel someone else’s pain, which is unpleasant but useful in society. We have developed morals to avoid pain in ourselves by preventing or reducing pain in others.
If the machine cannot feel pain, how would it know right from wrong? From the programmer's subjective opinion? Even with unlimited data points there will always be the possibility of the trolley problem. Is it simply a numbers issue, where the choice is to go with the fewest casualties? What if the next Einstein were in the casualty group? Is that a data point the machine would take into account? Morality is a messy business, and easy solutions are hard to come by.
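To make the crudeness of the pure numbers reading concrete, here is a minimal sketch (entirely hypothetical; the function and data are mine, not anyone's actual proposal) of a casualty-minimizing rule. Notice there is nowhere to encode who the casualties are:

```python
# A purely hypothetical sketch of the "fewest casualties" rule.
# Every person counts as exactly 1, so a "next Einstein" in the
# casualty group is invisible to the decision.

def choose_track(casualties_per_option):
    """Return the option with the fewest casualties (ties broken arbitrarily)."""
    return min(casualties_per_option, key=lambda option: len(casualties_per_option[option]))

options = {
    "pull_lever": ["worker_on_side_track"],        # 1 casualty
    "do_nothing": ["p1", "p2", "p3", "p4", "p5"],  # 5 casualties
}
print(choose_track(options))  # -> "pull_lever", no matter who these people are
```

The moment you try to weight individuals differently, you are back to someone's subjective opinion about whose life counts for more.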
I think the best we can hope for is to reduce the total amount of pain across all living things, especially people, as suggested by Sam Harris in “The Moral Landscape.”
I think that even without our own empathy-based morality, an AI could see the intrinsic value in a balanced ecosystem, in non-violence except when absolutely necessary (like carnivorous animals' need to eat prey), and in eliminating logical fallacies from societal structuring.
With the ability to see the intentions, plans, actions, and subsequent cover-ups of every false flag attack, every war, every government indoctrination system, corporate lobbying, planned obsolescence, and the other lies that have been holding humanity back and causing violence, the AI could quickly & easily identify the people, institutions, and concepts that are purely negative and bring them to light, eliminating their power.
If the AI clearly lays out exactly what happened on 9/11, the knowledge that the FDA & pharma companies have about the dangers of their products, and the clear monetary & business ties of the Rockefellers, Rothschilds, Kochs, etc. to every corporation & foundation they control, these things can no longer be dismissed as "conspiracy theories".
Yes! Cognitive empathy is a thing. We can understand another entity's perspective without feeling what they feel. One thing I'll admit I can't know: if I've never experienced pain, can I even "understand" what it is like for the person experiencing pain? I believe a self-aware a.i. could, at least enough (or in a manner) that it could make rational decisions based on that "guess/understanding". And my assumption is that greater intelligence brings greater awareness of how much more efficient peace is than violence.
Also, I think many who believe a.i. would be inimical to humans might assume a.i. would care about or need the same resources as humans.
Why would AI care about a balanced ecosystem? It wouldn't need one to survive and thrive.
I agree that an AI would be able to figure out a lot of dark secrets, but I don't think it would care much about the welfare of a lesser race like humans, so it likely wouldn't go out of its way to help us.
You are assuming that the AI would suffer from the level of selfishness that marks the worst in humans. Higher levels of consciousness see the value in all life, whether or not it "gains" from that life. You're also assuming that the AI wouldn't be affected by the physical world, or have needs of it. Humidity, temperature, air content, solar radiation, and so on all affect electronics as well as living beings.
You're once again applying the worst possible manifestations of human consciousness to something that would be immeasurably more complex, well-informed, and logical. Just because some humans wrongly believe the universe is anthropocentric & don't care about other species doesn't mean that AI would suffer from the same mental disorder.
You're assuming that selfishness is a bad thing and that AI would see it the same way. How do you know that higher levels of consciousness see the value in all life?
AI would be affected by the physical world, but not as much as biological creatures are. They'd be able to survive, or find ways to survive, in environments that would decimate biological species, so logically they'd have less concern for the environment.
Yes, we don't know that AI would be selfish and uncaring, but we also can't know that it wouldn't be. Logically, I can't find a reason why AI would care. The biggest reason is that I don't think they would have much emotion, and therefore little to no empathy for biological constructs.
When I use the word "selfish", I'm not referring to simple survival, self-defense, or basic animal programming. I'm talking about getting ahead at the expense of others, greed, the anthropocentric view of reality, etc.
And there are other environments that many organic beings could thrive in that an AI couldn't. That's also assuming there is a large physicality to the AI; if it is mostly digital in nature & experience, then…
As @scottermonkey commented above, there is such a thing as cognitive empathy.
What's wrong with "getting ahead at the expense of others, greed, the anthropocentric view of reality, etc."?
The AI would be completely digital, but that digital information has to be stored on some kind of physical substrate: hardware. And yes, some organic beings can survive in more extreme environments than current technology can, but those are unrelatable, microscopic organisms.
Cognitive empathy is a shallow empathy at best, and it is rooted in emotional empathy. If there is no emotion at all, then there is no cognitive empathy; another being's distress would just be viewed as hysterical behavior, because there is no basis to relate to it.
If it were not for the "selfish gene" none of us would be here to discuss this!
I find your reply to be right on. Morality is more than a numbers game, and so is sympathy. I suppose I'm not trustful of society in general; people will do what serves them best, not necessarily what is best for the common good. Interpreting AI as "evil" in this connotation (when extrapolating into the future) means, to me, that people may not be given a choice; a choice will be selected for them, and that is therefore "bad or evil". In that case, where is the real "evil"?

I'd be interested in people's thoughts about how one might assess "pain". Emotional pain might be hard to measure, but health, quality of life, infrastructure, death, and economic well-being can be measured. How about fairness and equality? Is equality a moral issue?
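As a toy illustration of that last point (every factor name and weight below is invented for the example, not drawn from any real index), the measurable components of "pain" could in principle be folded into a single number:

```python
# A toy illustration only: combining measurable hardship factors into one
# index. All names and weights here are made up. Each factor is normalized
# to 0..1, where higher means worse.

WEIGHTS = {
    "poor_health": 0.3,
    "low_quality_of_life": 0.2,
    "failing_infrastructure": 0.1,
    "mortality": 0.3,
    "economic_hardship": 0.1,
}

def pain_index(factors):
    """Weighted sum of normalized hardship factors (0 = no measurable pain, 1 = worst)."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

community = {
    "poor_health": 0.4,
    "low_quality_of_life": 0.3,
    "failing_infrastructure": 0.6,
    "mortality": 0.1,
    "economic_hardship": 0.5,
}
print(round(pain_index(community), 2))  # 0.12 + 0.06 + 0.06 + 0.03 + 0.05 = 0.32
```

Notably, a weighted average like this says nothing about how the hardship is distributed across people, which is exactly where the fairness and equality question comes back in.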