In this post I will expand on the necessity of morality. Why did we develop morals, and why is there an almost instinctive gut reaction to immoral acts? The first part focused on the definition of moral and on the components necessary to establish the concept. Give it a read first if you have not already.
Origin of Morals - Need
If we take another look at the definition of moral, we can easily infer from the reflection requirement that morals only exist if there is at least a group of entities that are either capable of morality or have the potential for it. I will refer to this class of entities as moral agents, to encapsulate the notions of choice and reflection.
To arrive at the need for morals, as opposed to a simple set of physical rules that determine what you can or cannot do, I need to introduce the notion of calculation.
I have already defined that a moral choice requires a double reflection. Now, simulate your action, calculate its physical consequences, trace the impact on every other entity, simulate their thought processes, then run the reverse reflection to find out what the likely outcome would be if you became each one of those other entities, and finally extend this calculation into the future. Had enough? Most sentient beings will cower at the mere suggestion of doing this once; imagine doing it for every minor decision you make in life.
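To give a feel for how quickly this blows up, here is a rough back-of-the-envelope sketch. The numbers and names are entirely illustrative assumptions; it just counts the world states a perfect calculation would have to evaluate.

```python
# Rough illustration only: counting the world states a "perfect" moral
# calculation would need to evaluate. All numbers are made-up assumptions.
ENTITIES = 10           # other moral agents affected by my action
ACTIONS_PER_ENTITY = 5  # candidate reactions each of them might take
HORIZON = 4             # how many steps into the future we extend

def count_world_states(entities: int, actions: int, horizon: int) -> int:
    """World states to simulate if every entity's reaction branches the future."""
    if horizon == 0:
        return 1
    branches = actions ** entities        # every combination of reactions
    return branches * count_world_states(entities, actions, horizon - 1)

print(count_world_states(ENTITIES, ACTIONS_PER_ENTITY, HORIZON))
# ~9.1e27 states for a single decision -- hopelessly impractical
```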
One of the needs for morals is the sheer impracticability of calculating the simulation of your actions to a perfect result. We need knowledge of our past actions and their consequences, and we need to distil that experience into approximations that let us make quick decisions. The concept of a moral rule encapsulates this. The species also needs to be able to communicate morals in some form and share its experiences, which allows it to adopt rules that have been tested and refined over the generations.
But what exactly are we calculating? What are we trying to optimise? This introduces the concept of values. While one could simply use Maslow's hierarchy of needs, I prefer a simplification of it. For me, moral calculations are based on the following:
- Survival. Your ability to exist and how well you are able to do so. Health, strength and a clear mind can be treated as derivative values; procreation is a mechanism of survival as well.
- Happiness. The equilibrium of your mental state between yourself and the environment. This covers possession of the things you perceive as wanted, your desired state of relationships with other beings, and the fulfilment of your constructed objectives for life.
- Liberty. The ability to make choices and remain a moral entity. Anything that reduces liberty, like going to jail, needs to be minimised; anything that increases it, like technology that enables access to knowledge or places, needs to be maximised.
I choose this set of values because I think they closely represent the preoccupations of a life form that has acquired the ability to construct choice; in fact, this is my interpretation of conscience. The entity must survive physically (survival), it must preserve its sentience (liberty), and it wants to grow in both (happiness). Morals are a direct consequence of the needs of a sentient being. You can choose a different set of values, as long as they are representative of the common needs of your type of life form, but for simplicity I will assume the three above.
I don't think there can be a sentient being that does not develop morals. A complex being that cares nothing for its actions and just follows its desires has no need for the construct of choice, hence no need for morals. Without choice it is a “zombie”, a slave to its own will, and hence not a sentient being. This is why we regard a brain-dead person (or even a completely mad person) as no longer a person: because of the loss of sentience. It is still a life form, but no longer a sentient one.
These values often conflict: to increase happiness you will often be forced to give up some liberty, and similar trade-offs exist between the other values. The simplest example of a compromise between moral values is instituting a code of law, which reduces your liberty but increases your survival. Your calculations will, however, constantly be trying to optimise these values. Remember that the reflective nature of morals needs to be guaranteed, so the optimisation is not only for yourself but for the whole moral group.
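As a toy illustration of what such an optimisation could look like (the weights, numbers and helper names below are all made up for the sake of the example), a rule like "institute a code of law" can be scored by summing its impact on the three values over the whole group:

```python
# Toy sketch only: scoring a candidate rule against Survival, Happiness and
# Liberty, summed over the whole moral group. All numbers are invented.
from dataclasses import dataclass

@dataclass
class Impact:
    survival: float   # change in the entity's ability to exist
    happiness: float  # change in the entity's mental equilibrium
    liberty: float    # change in the entity's ability to keep choosing

def score(i: Impact, weights=(1.0, 1.0, 1.0)) -> float:
    ws, wh, wl = weights
    return ws * i.survival + wh * i.happiness + wl * i.liberty

def group_score(impacts: list[Impact]) -> float:
    # the reflection requirement: every affected entity counts, not just me
    return sum(score(i) for i in impacts)

# "institute a code of law": everyone trades a little liberty for survival
code_of_law = [Impact(survival=+3, happiness=0, liberty=-1) for _ in range(100)]
no_law      = [Impact(survival=-2, happiness=0, liberty=+2) for _ in range(100)]
print(group_score(code_of_law), group_score(no_law))  # 200.0 vs 0.0
```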
As you can guess, I subscribe, for the most part, to the utilitarian view of morals. As I see it, it is the only mechanism that allows the leap from having choices to actually being able to decide between them. The utilitarian view provides a clear interpretation of why morals benefit the entity, and hence why natural evolution will select for moral entities over amoral ones. It also avoids the subjectivist argument that morals are merely whatever a group of entities happens to agree it wants, made up on the spot.
An interesting corollary of this is that aliens, if they exist and contact us, will almost certainly be moral beings. They may not hold the same set of morals, but they will be capable of morality.
I refute the subjectivist argument as follows: a subjective morality, decided only by what a group feels is good, is not a stable system. The group will inevitably splinter as part of it comes to feel differently, and that part's morals would be just as valid and incontestable. Following this to its conclusion, every individual would end up with different morals. As we have seen, the existence of morals depends on reflection and on the sharing of morals, so purely individual morality would render the construct of morality meaningless, as it would be equivalent to simply what each person wants. If morality is equivalent to desire, then morality is redundant. We observe that this is not the case in reality; in fact, morals often conflict directly with the desires of the individual. Therefore, morals must be objective and calculable.
To clarify, this conclusion does not mean that one individual cannot hold different moral beliefs from another. It states that there is only one morally correct answer for a given situation, and that two conflicting moral beliefs cannot both be true at the same time. The objective moral answer might be unknown, but it is calculable, while a subjective moral is just a belief, an approximation that can be correct or incorrect. The fact that different groups of humans hold different morals is a side effect of locality and evolution, not a demonstration that morality is subjective.
The final need should be obvious by now. Morals allow entities to stop playing zero-sum games as individuals and to move towards better and better Nash equilibria that benefit all the moral entities in the group; inclusion in that group ensures that the individual benefits too.
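A hedged, textbook-style illustration of this point (a standard stag-hunt payoff table, not something specific to this post): the game has two Nash equilibria, and a shared moral rule is what lets the group coordinate on the one that is better for everyone.

```python
# Standard stag-hunt payoffs (textbook numbers, used here only as illustration).
# Both "stag/stag" and "hare/hare" are Nash equilibria, but stag/stag is better
# for everyone; a shared moral rule acts as the coordination device.
PAYOFFS = {  # (my_move, your_move) -> (my_payoff, your_payoff)
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def best_reply(their_move: str) -> str:
    """My payoff-maximising move, given the other player's move."""
    return max(("stag", "hare"), key=lambda m: PAYOFFS[(m, their_move)][0])

def is_nash(profile: tuple) -> bool:
    me, you = profile
    return best_reply(you) == me and best_reply(me) == you

print([p for p in PAYOFFS if is_nash(p)])
# [('stag', 'stag'), ('hare', 'hare')] -- the moral group settles on the first
```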
Morals are needed as a fast way of summarising the complex calculations involved in maximising the values of Survival, Happiness and Liberty for a group of entities.
That is a good way to think about morals assuming they evolved from a survival utility point of view.
Asimov's Three Laws of Robotics assume that all properly engineered robots would have this set of morals encoded into their firmware:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Of course, if humans were created, the definition of morals is "whatever their Creator approves of." From that point of view, morality is already encoded into our "firmware".
Thank you for your feedback. I love Isaac Asimov, especially the Foundation series; I can't find sci-fi like that anymore.
The creator approach to morals unfortunately never seemed very self-consistent to me. First, it poses the problem of a morality that evolves in adjustment to the environment. Human morals have changed substantially over the centuries, and those changes seem more likely to come from our own need for adaptation and from the growing knowledge amassed by passing the experiences of parents on to children, which ultimately culminates in culture and society.
Also, a creator that, like Asimov, designed rules by which his creations would operate would seem foolish to me if he then allowed those creatures to break those same rules of their own free will.
There is an interesting story in 'I, Robot' where he explored this inconsistency: an AI takes action to imprison all mankind as the only logical way to prevent any harm from coming to them.
Most of the fun in artificial intelligence is not in programming a robot to do exactly what you want it to do. That's so 1950's. The fun is in creating autonomous systems and giving them the free will to interact with a complex environment.
If you want to evolve a physically well-adapted species, you put them in an environment with the freedom and the penalties necessary for physical evolution to occur.
If you want to evolve a spiritually well-adapted species, you ...
Funnily enough, AI was my specialisation in Computer Science. I like your perspective here, Stan.
Genetic algorithms are nice to use when you are unable to encode the exact rules into your agent because they are so complex. So one basically programs a set of possible behaviours and leaves their parameters open to the GA forces of selection, mutation and crossover. It's not so much that penalties happen; it's more that maladaptations get pruned out more and more.
This path would mean, however, that the 'moral rules' would be evolved and soft-coded, not pre-designed.
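For what it's worth, here is a minimal sketch of that idea; the fitness function is a made-up stand-in for "how well the evolved behaviour parameters perform", and every name in it is illustrative:

```python
# Minimal GA sketch: behaviour parameters are left open and shaped by
# selection, crossover and mutation. The fitness function is a made-up
# stand-in for how well the evolved "rules" perform in the environment.
import random

def fitness(params):
    target = [0.8, 0.2, 0.5]          # hypothetical "well-adapted" behaviour
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(params, rate=0.2):
    return [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
            if random.random() < rate else p
            for p in params]

population = [[random.random() for _ in range(3)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]          # maladaptations are pruned out
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

print([round(p, 2) for p in max(population, key=fitness)])  # drifts toward target
```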
Perhaps. I'm just saying that in all of the 17 autonomous air, ground, sea, and space systems I've ever designed, there was some mix of hard-coded non-negotiable rules (like don't point your antenna away from earth) and other code designed to deal with uncertainties that couldn't be preprogrammed. My only point was to suggest we shouldn't be dogmatic about what a Creator would or wouldn't do.
If my Mars Rover could really think, it would probably be asking, "Why did my creator put me in such a hostile environment?" I'd feel bad if my creation rejected me just because it couldn't figure out what I'm up to and evolved a set of morals that caused it to rebel. Come to think of it, that might be the real reason why Viking 1 permanently cut off all communication by pointing its antenna somewhere else in a huff. :)
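A hedged sketch of the kind of mix described above, with a small layer of hard-coded, non-negotiable rules filtering whatever the adaptive code proposes; the names and the antenna rule are purely illustrative, not taken from any real system:

```python
# Illustrative only: non-negotiable hard rules wrapped around adaptive code.
def violates_hard_rules(action: dict) -> bool:
    # hard-coded constraint, analogous to "never point the antenna away from Earth"
    return action.get("antenna_target") != "earth"

def adaptive_policy(observation: dict) -> dict:
    # placeholder for the learned/heuristic layer that handles uncertainty
    return {"drive": observation.get("terrain") == "clear",
            "antenna_target": "earth"}

def decide(observation: dict) -> dict:
    proposal = adaptive_policy(observation)
    if violates_hard_rules(proposal):
        return {"drive": False, "antenna_target": "earth"}   # safe fallback
    return proposal

print(decide({"terrain": "clear"}))  # {'drive': True, 'antenna_target': 'earth'}
```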