Consequentialism requires a consequence rating system. What do you propose?
RE: Consequentialism and big data, the need for intelligent machines to assist with moral decision making
There are many ways to rank and sort consequences. The method I use for myself is based on the consequences to my interests. If I count, for instance, my own fate as something I'm interested in, along with the fates of others whom I care about, then consequences are ranked by how they affect me and whatever or whomever I care for.
I can, for example, determine that breaking a certain law is very stupid because it could permanently damage my reputation (certain people will never see me the same again), cost me my freedom (prison), or cost me my resources (loss of wealth). I can likewise determine that certain decisions, whether legal or illegal, are very smart: the consequences to my reputation are minimal damage or even positive gain, the consequences to my freedom are that I get more of it, the consequences to my wealth are that I increase it, and so on.
There are of course more things I care about than just reputation, money, and freedom, but these are common examples almost anyone can relate to. In the end we all determine what is smart or stupid through cost vs. benefit analysis, which is something an AI is perfectly suited to handle. Even for investing in stocks we rely on algorithms that essentially do cost/benefit analysis, ranking different stocks by a point or grading system.
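The kind of ranking described above can be sketched as a weighted cost/benefit score over an individual's interests. Everything here (the interest names, the weights, and the consequence estimates) is an illustrative assumption, not a real system:

```python
# Hypothetical sketch: rank candidate decisions by a weighted
# cost/benefit score over the interests named in the comment
# (reputation, freedom, wealth). Weights and values are made up.

# How much this particular individual cares about each interest.
weights = {"reputation": 0.5, "freedom": 0.3, "wealth": 0.2}

# Estimated consequence of each decision per interest,
# from -1 (severe harm) to +1 (strong gain).
decisions = {
    "break_law":        {"reputation": -0.9, "freedom": -1.0, "wealth": -0.6},
    "legal_risky_deal": {"reputation": -0.1, "freedom":  0.0, "wealth":  0.7},
    "safe_investment":  {"reputation":  0.1, "freedom":  0.0, "wealth":  0.3},
}

def score(consequences, weights):
    """Weighted sum of per-interest consequences."""
    return sum(weights[k] * v for k, v in consequences.items())

# Best (smartest) decision first, worst (stupidest) last.
ranked = sorted(decisions, key=lambda d: score(decisions[d], weights),
                reverse=True)
```

A different individual would plug in different weights, which is exactly why the ranking has to be personal rather than universal.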
What I propose as a solution is to let each consequentialist individual tell the machines their own values, interests, etc., and then let the AI learn about them through as much voluntarily given data as possible. This would allow the AI to eventually gain insight into what makes an individual tick, and allow that AI to best help that individual become a more moral person according to their own particular values and interests.
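One way the learning step of this proposal could work is for the AI to start from the weights the individual declares and then refine them from voluntary feedback. The update rule below (a simple error-driven nudge followed by renormalization) is purely an illustrative assumption:

```python
# Hypothetical sketch: the individual declares initial interest
# weights; the AI adjusts them when the individual's own rating
# of an outcome differs from what the current weights predict.

weights = {"reputation": 0.4, "freedom": 0.3, "wealth": 0.3}

def update_weights(weights, consequences, user_rating, lr=0.1):
    """Nudge each weight so the predicted score moves toward the
    rating (-1..1) the individual actually reported."""
    predicted = sum(weights[k] * v for k, v in consequences.items())
    error = user_rating - predicted
    new = {k: w + lr * error * consequences.get(k, 0.0)
           for k, w in weights.items()}
    total = sum(abs(w) for w in new.values()) or 1.0
    return {k: w / total for k, w in new.items()}  # keep weights summing to 1

# The individual reports that a reputation-damaging outcome felt
# worse (-0.9) than the declared weights predicted, so the AI
# learns that reputation matters more to this person.
weights = update_weights(
    weights, {"reputation": -1.0, "freedom": 0.0, "wealth": 0.2}, -0.9)
```

Over many such interactions the weights would drift toward the individual's actual values, which is the "learning what makes an individual tick" part of the proposal.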
Basically, I believe we should be radically transparent to the AI. This is because I view the AI as an extension of the individual's mind, without a will of its own. It's an extended mind or limb, and all communication between the individual and it should, in my opinion, remain confidential. The function of this AI is to help the individual think better and make higher quality decisions.