Computational kindness


Kindness augmented by algorithms

Algorithms can help us in our daily lives. This is made clear in the book "Algorithms to Live By" by Brian Christian and Tom Griffiths. Kindness is a concept which helps maintain harmony in society by reducing tension and friction. Simulating what is in the minds of others is one of the big computational challenges that humans and machines face when trying to arrive at solutions which satisfy the preferences of all parties. It is not possible for any particular human or machine to accurately simulate what is inside a black box, and in most cases other humans present us with a black box.

This is why it is important that humans state their preferences. Preference information, if publicly broadcast, can make algorithms more useful. What about the obscene computational cost of certain interactions, such as those that produce choice fatigue? On the one hand, if you present a group with unlimited options, it is more likely they can find an option which meets their specific preferences; on the other hand, it requires them to compare many competing options while simultaneously trying to figure out, or simulate, what is in the minds of all the other participants.
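
As a rough illustration of that cost (my own sketch in Python, not from the book): if a group must rank n options by pairwise comparison, the amount of work grows quadratically with the number of options, which is one way to think about choice fatigue.

```python
from math import comb

# Illustrative assumption (mine, not from the book): ranking n options by
# pairwise comparison means comparing every pair, which grows quadratically.
def pairwise_comparisons(n_options: int) -> int:
    return comb(n_options, 2)

for n in (3, 10, 50, 200):
    print(n, pairwise_comparisons(n))
# 3 -> 3, 10 -> 45, 50 -> 1225, 200 -> 19900: more choice quickly means much more work.
```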

In the book they mention a concept taken directly from computer science: the importance of limiting computation, because computation is a scarce resource. Human beings have limited "human computation" resources, and if an individual, organization, or entity can find ways to help human beings conserve these scarce resources, it will create value for them by freeing those resources for other tasks. Computational resource preservation is a valuable concept I immediately take away from this book, and it helps explain some of the problems with democracy, choice fatigue, and morality.

The challenge of morality is to make decisions with incomplete information, under conditions of uncertainty. Moral decision making requires far more human computational resources. If we look at psychopathic individuals, brain scan results from neuroscience show that the brains of psychopaths do not light up as much in certain areas [1]. I may be speculating, but it is possible that the brain of a psychopathic individual is geared toward resource preservation, redirecting computational resources away from areas dealing with morality and toward other areas.

Because morality requires so many computational resources for a human brain to handle, humans have come up with many shortcuts. Religion may be an example of this sort of shortcut: if a book can give a set of instructions on what is right and what is wrong for most situations, then there are best practices. These best practices are shortcuts that try to reduce the computational cost to the brain of making decisions which are perceived as moral.

The problem of withholding your preferences

If you withhold your preferences from the group, then the burden of computation is put on the group, which has to try to simulate your mind and, using a rough abstract representation, attempt to guess your preferences. In other words, if you take on more of the cognitive load by announcing your preferences from the start, you can be more polite toward everyone else by limiting the cognitive burden they must accept. This would be a form of politeness, but it is easier said than done. Not everyone is sure of their preferences or knows themselves well, and in addition some people may be afraid to express their true preferences if they are not politically correct. Privacy may be necessary to encourage the behavior of preference sharing that empowers algorithms.
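
A minimal sketch of why declaring preferences is kind (my own illustration; the names and options are hypothetical): once everyone states which options are acceptable to them, the group can simply intersect those sets instead of trying to simulate each other's minds.

```python
# Hypothetical example: each person declares which restaurants they are happy with.
declared = {
    "alice": {"thai", "sushi", "pizza"},
    "bob":   {"pizza", "burgers"},
    "carol": {"pizza", "thai"},
}

# With declared preferences, finding a mutually acceptable option is a
# simple set intersection -- no mind-reading or guessing required.
acceptable_to_all = set.intersection(*declared.values())
print(acceptable_to_all)  # {'pizza'}
```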

Computational and structured thinking has value and may even compound

It is my opinion that computational thinking and structured thinking have immense value, not just for improving moral decision making but for improving problem solving in general. If a problem can be turned into a mathematical model, and if there is a routine for solving that problem, then computational thinking can be applied. Computational thinking is the exact kind of thinking a computer programmer uses to model a problem in code and feed it to machines to solve. Machines only understand structured information and formal language, but the algorithms machines use also exist in nature. Computation exists in nature too: machines compute, the human brain computes, the stars compute, so computation is everywhere in nature. Even one of the simplest algorithms in computer science, sorting, exists in nature as the creation of order.
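
To make the point about structure concrete, here is a minimal sketch (my own example, not from the book): once preferences are expressed as structured data, an ordinary sorting routine can rank them mechanically.

```python
# Hypothetical data: once options are structured as (name, score) pairs,
# a standard sorting algorithm can create order, i.e. rank the preferences.
options = [("hike", 7), ("museum", 9), ("beach", 5)]

ranked = sorted(options, key=lambda item: item[1], reverse=True)
print(ranked)  # [('museum', 9), ('hike', 7), ('beach', 5)]
```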

References

  1. https://newatlas.com/psychopath-brain-mri-study/50365/

@dana-edwards - Wow, what an interesting concept. Having acts of kindness based on algorithms will automate the process of peaceful coexistence. The point about having preferences declared is well taken. I think this concept certainly needs to be explored at length because, once algorithms take over, the 'human emotions' factor will be taken out, and kindness or any such reaction becomes an automatic process and therefore will be measurable as well as controllable. Good model.
Thanks for highlighting this thoughtful concept. Upvoted.
Regards,

@vm2904

Mental algorithms and human emotions are inseparable. It is the "algorithms" that guide our emotions.

Emotions aren't necessarily algorithms in the sense of being a structured process; they are an instinctual process. Structure is the difference maker in my opinion, because once you begin to structure information, you can do things like sort, rank, create order, rank preferences, use deduction, etc.

Fear, for example, is involuntary. Fear is something a person feels. Too much fear can prevent kindness from even happening. Kindness is deliberate, takes effort, and in my opinion requires far more computational or cognitive resources than fear. You can call fear risk aversion and label it an algorithm, but in human beings the cognitive mechanism we call fear disrupts higher level thinking (fight or flight). How can human beings have freedom from fear and from unproductive emotions which prevent them from being kind, effective, etc.?

Computational kindness can help normalize the use of algorithms which exist outside of the brain. The brain has limited resources, so by finding ways to spare these resources we can create sustainable value. So I propose we create technologies which reduce the cognitive costs associated with being kind, and assume every human brain wants to be as kind as possible.

This assumption may be naive, but I think if we at least amplify the ability of humans who want to be kind to be that way, then we can have an easier, lower-cost, more efficient form of kindness. Just a theoretical approach.


I was thinking along the lines of what happens at the brain level. A neural net of the type our brain is, and, say, a Von Neumann architecture like the typical computer today, are computationally equivalent, i.e. able to compute the same things.

Presented in that way, computational thinking is an obvious natural continuation of the project of analytic philosophy - to solve (or dissolve) all problems using logic.

Since this approach seems to be to maximise the expressed preferences of the individual, it does seem to favour a particularly utilitarian perspective on what's good. Do you think computational morality could be applied to other notions of ethical behaviour? Is there an algorithmic approach to treating people as ends in themselves rather than means to an end? There would certainly be room for computational enhancement of virtue ethics, in the form of an app to aid the cultivation of good character traits (if someone hasn't done this already).

Lots to think about, thanks for the interesting read.

Good post and full of information.

I think it should be like that. You should withhold your preferences from others, because they are yours. They should be special to you.

A perennial complaint made by husbands and boyfriends is how their wives and girlfriends expect them to be mind readers. It would be easy to dismiss that aspect of female behavior as simple inconsiderateness. I think there is more to it: the female uses her mate's mind reading propensity as a gauge of their willingness to expend resources on them. Computation indeed is a scarce resource.

In a world with diminished privacy, it becomes increasingly difficult for humans to express their true preferences. If by expressing your preferences you could face sanction (harsh consequences), then you may, for rational reasons, be compelled to express in public only the preferences which you think are aligned with the community. This is a big problem, because happiness requires honest expression of preferences.

Take a matchmaking algorithm, for example. If it's a dating app, then you have a better chance of finding a good match which makes you happy if you are 100% honest with the algorithm. The problem is when other human beings can see your preferences and judge you based on them. For example, if you are more attracted to blonde European women, then another demographic of humans could see that preference and label you racist.

My point is, privacy is necessary for human happiness to be preserved. Total transparency is not an option unless we do not value happiness at all. If the actual goal of the matchmaking algorithm is to find the best match, then the higher the quality of the preference information you feed it, the better the match it can find for you. In order for that to happen, no human besides yourself should see your preferences, so that you can be as honest as possible with the matchmaking algorithm. The algorithm would then crunch the encrypted preferences and return results relevant to your search.
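
A minimal sketch of the matching idea (my own illustration; the profiles and scoring rule are hypothetical, and a real privacy-preserving system would run this over encrypted data rather than in the clear): the algorithm only needs your honest preference vector, and nothing about the computation requires other people to see it.

```python
# Hypothetical matchmaking sketch: score candidates against a private
# preference vector. In a privacy-preserving deployment this scoring would
# run over encrypted inputs (e.g. inside a "secret contract"); it is shown
# in the clear here purely to illustrate the logic.

def match_score(preferences: dict, candidate: dict) -> int:
    """Count how many declared preferences a candidate satisfies."""
    return sum(1 for key, wanted in preferences.items() if candidate.get(key) == wanted)

my_private_preferences = {"likes_hiking": True, "city": "Berlin"}

candidates = [
    {"name": "A", "likes_hiking": True, "city": "Berlin"},
    {"name": "B", "likes_hiking": False, "city": "Berlin"},
]

best = max(candidates, key=lambda c: match_score(my_private_preferences, c))
print(best["name"])  # "A"
```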

This is something that could be written as a smart contract (secret contract). In fact, it proves the value of privacy. Computational kindness takes this much further than a mere matchmaking algorithm for dating. The ability to feed our preferences in privacy to any algorithm could reap massive dividends, whether through an increased ability to be kind or through abilities we haven't considered yet.

In the context of a relationship, I find it inconsiderate to hide your preferences from your partner and then act indignant when your partner demonstrates his ignorance of them. Otherwise, what you wrote is both interesting and true.

I love philosophy, my favorite subject. Thanks for sharing, @dana-edwards.
