I believe technology should be used in every way possible to help us understand whatever ethical system we choose to follow, and, as you mentioned, even to help guide us on that journey. But that is not quite the same thing as "ethically aware" technology. I see many issues with trying to embed ethics into all of our technical tools, but the most obvious to me is that, ultimately, how a tool is used is up to the person using it, not its creator.
Let me illustrate with an example. Let's say I create a content publishing platform for the web and share the code. This tool is ethically agnostic: it does not know what kind of content will be published. But let's say we care about ethics, so we embed a fact-checking tool in our publishing platform to help authors publish more accurate information. Well, if anyone wants to use our platform to publish factual inaccuracies, all they have to do is strip the fact-checking tool out of the source code. We could decide not to share the source code, but that raises all sorts of ethical issues of its own, which is why the free software movement exists.
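A minimal sketch of the point, with invented names and not any real platform: when the ethical check is just one step in an open-source publishing path, anyone who forks the code can simply delete it (or switch it off) and the platform still works.

```python
# Hypothetical publishing pipeline: the fact-check is just one optional step.
def fact_check(article: str) -> bool:
    """Placeholder check; a real checker would verify claims against sources."""
    return "unverified claim" not in article.lower()

def publish(article: str, check_facts: bool = True) -> str:
    # Anyone with the source can drop this guard (or pass check_facts=False)
    # and the platform publishes exactly as before, minus the ethics.
    if check_facts and not fact_check(article):
        raise ValueError("Article failed fact check")
    return f"Published: {article[:40]}..."
```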
The fact that it is hard to embed an ethical tool within an agnostic tool does not diminish the value of either, but it does suggest that it is probably a better use of our resources to simply publish the useful, agnostic tools and then build tools to fight their improper uses once the ill effects actually materialize. Otherwise we might find ourselves spinning our wheels forever, trying to safeguard all of our tools against improper uses that may never exist. The first option seems more consequentialist to me.
The source code isn't where the ethics matter. They matter in the formal specification, which can ultimately be turned into source code and then an executable, but which is not itself the source code. The formal specification is a description of the intended behavior of the publishing platform. In that description of behavior you can decide whether the platform will be moderatable or left unmoderatable. Making it a platform that can be moderated allows the ethics of the community to determine which content is visible, which content becomes popular, which content gets rewarded, and so on. On a completely agnostic platform you would have all kinds of content with no collaborative filtering mechanism, so fewer people would find the kind of information they like and more people would be confronted with information they don't want to see. So by design you want to give the users as much control over these decisions as possible, about what they would like to see and what they want on the platform, and you could even write into the formal specification a right of the user to filter content according to their preferences, whether collectively, individually, or in any human-machine combination.
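One way to read "a right of the user to filter content according to their preferences" is that the filter lives with the user rather than with a central moderator. A hedged sketch of that idea, with names invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable, List

Post = str
Filter = Callable[[Post], bool]  # True means "show this post"

@dataclass
class UserFeed:
    """Each user owns their own filters: individual, collective, or machine-suggested."""
    filters: List[Filter] = field(default_factory=list)

    def visible(self, posts: List[Post]) -> List[Post]:
        # A post appears only if it passes every filter the user has adopted.
        return [p for p in posts if all(f(p) for f in self.filters)]

# The user, not the platform operator, decides what gets filtered out.
feed = UserFeed(filters=[lambda p: "spam" not in p.lower()])
print(feed.visible(["Hello world", "Buy spam now"]))  # -> ['Hello world']
```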
A formal specification isn't necessarily agnostic, because you decide which features to prioritize. For example, if you put in a consensus mechanism, the platform appears agnostic, but what you're really doing is giving the users control over the ethical element of the platform through that consensus mechanism. You're giving users the ability to take features out, put features in, and change whatever they need to change to keep the platform within an acceptable ethical range.
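A toy illustration of "giving the users control through the consensus mechanism": a feature change only takes effect if enough users approve it. The threshold and all names here are invented for the sketch, not taken from any real specification.

```python
from typing import Dict

def reaches_consensus(votes: Dict[str, bool], threshold: float = 0.66) -> bool:
    """Hypothetical rule: a proposal passes if a supermajority of voters approve."""
    if not votes:
        return False
    approvals = sum(1 for v in votes.values() if v)
    return approvals / len(votes) >= threshold

platform_features = {"fact_checking": True}

def propose_feature_change(feature: str, enabled: bool, votes: Dict[str, bool]) -> None:
    # The tool maker defines the process; the users decide the outcome.
    if reaches_consensus(votes):
        platform_features[feature] = enabled

propose_feature_change("fact_checking", False,
                       {"alice": True, "bob": True, "carol": False})
print(platform_features)  # {'fact_checking': False} -- the community chose
```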
It's not our tools. The tools belong to the world. The tool maker just has to design the tool to be capable of learning about its users in such a way that it can be "ethically aware," in the sense that it has a model of what a human is, models of the various human religions, the different philosophical views, schools of thought, social norms, traditions, laws, and so on. This knowledge can allow the platform to help users adhere to their own rules, because it is aware that humans adhere to such rules. The platform does not make rules for humans or determine the rules; it simply reasons over its knowledge, which is supplied by humans.
So just as you can supply Wikipedia with the knowledge of humanity, maybe a platform can use that knowledge to understand little facts about humanity, and when it's time to redesign the platform it can be aware of those facts and help humans make better design decisions. For example, a platform that is aware of human rights can give suggestions to make its formal specification better for those who care about human rights. But how would it know whether the users care about human rights? The consensus process would reveal what the users care about, which is why consensus is so important.
If the majority of Steemit users express support for human rights according to a process defined in the formal specification, then once consensus is reached the platform would know to suggest design improvements according to that priority. That is possible using AI which reasons over a knowledge base.
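A very small sketch of what "AI which reasons over a knowledge base" could look like here: once the consensus process reveals that users prioritize human rights, the system surfaces design suggestions tagged with that value. All entries and names below are invented for illustration.

```python
# Toy knowledge base: design suggestions tagged with the values they serve.
KNOWLEDGE_BASE = [
    {"suggestion": "Add end-to-end encrypted messaging", "serves": {"privacy", "human_rights"}},
    {"suggestion": "Publish moderation decisions openly", "serves": {"transparency"}},
    {"suggestion": "Let users export and delete their data", "serves": {"human_rights"}},
]

def suggest_improvements(community_priorities: set) -> list:
    """Return suggestions whose tagged values overlap the consensus priorities."""
    return [entry["suggestion"] for entry in KNOWLEDGE_BASE
            if entry["serves"] & community_priorities]

# Suppose the consensus process defined in the formal specification revealed this:
print(suggest_improvements({"human_rights"}))
```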
Have you heard of extended mind theory? The idea is that your tool becomes an extension of you; it's part of your mind. So your ethics, embedded in it, are only a digital representation of yourself. Agent-based AI, for example, must have a set of goals and priorities, and if it's your personal agent then it's going to have your goals, priorities, and ethics. By "ethically aware" I mean we need a platform which can hold knowledge of human experience, laws, social norms, and ethics in its knowledge base, with the ability to reason over it and help people make higher-quality decisions.
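A hedged sketch of such a "personal agent" carrying its owner's priorities: each candidate action is scored against the user's own ethical weights, so the agent represents that user rather than the tool maker. The class, weights, and traits are all assumptions made up for the example.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class PersonalAgent:
    """Hypothetical agent: its priorities are supplied by its owner, not its maker."""
    owner: str
    priorities: Dict[str, float]  # e.g. {"honesty": 1.0, "privacy": 0.8}

    def score(self, action_traits: Dict[str, float]) -> float:
        # Weight each trait of the action by how much the owner cares about it.
        return sum(self.priorities.get(trait, 0.0) * value
                   for trait, value in action_traits.items())

agent = PersonalAgent("alice", {"honesty": 1.0, "privacy": 0.8})
post_with_source = {"honesty": 1.0, "privacy": 0.2}
anonymous_rumor = {"honesty": 0.1, "privacy": 1.0}
# The agent favors whichever action better matches Alice's own ethics.
print(agent.score(post_with_source), agent.score(anonymous_rumor))
```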
So, not embedding ethics in the tool, but in the AI itself and in the design. By design I mean the mechanism design, which is not the source code but the incentive structure. For example, Bitcoin attempted to be decentralized because its mechanism design promotes certain priorities and security guarantees. Of course it failed in that respect, because it's becoming centralized, but you can see how mechanism design works: by rewarding the participants on the platform who play the game according to the agreed rules, it gives them an incentive to adhere to those design constraints.
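A toy model of what "incentive structure, not source code" means: the payoff rule makes playing by the agreed rules the profitable strategy. The rates and the slashing-style penalty are illustrative assumptions, not any real protocol's parameters.

```python
def payoff(follows_rules: bool, stake: float) -> float:
    """Hypothetical incentive rule: participants who play by the agreed rules
    earn a reward proportional to their stake; rule-breakers get slashed."""
    reward_rate = 0.05   # illustrative reward rate for honest participation
    slash_rate = 1.0     # illustrative penalty: the whole stake is forfeited
    return stake * reward_rate if follows_rules else -stake * slash_rate

# Following the design constraints is the winning strategy by construction.
print(payoff(True, stake=100.0))   #  5.0
print(payoff(False, stake=100.0))  # -100.0
```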
Steemit has this too, with its reputation system and reward system. It's an information diffusion and generation platform where you're encouraged to share new information on a transparent blockchain. Your property rights, in theory, are supposed to be protected by your control of the private keys. So there definitely is ethics built into the design of Steemit, and while anyone can fork the code and create a different version, the community will typically join the platform whose ethics they agree with.
The ethics do not go into the source code. Furthermore, it's not so important that you put your own ethics into anything. What you have to do is create a platform which is capable of inheriting the ethics of the community through some consensus process. That capability can be part of the formal specification of the platform and verified by the formal verification process. The behavior of the AI can likewise be constrained in the formal specification and proven by formal verification. Platforms which don't respect the ethics of the community and which violate human rights can be abandoned in favor of alternatives which do, and the platform most fit will ultimately survive in a truly free market. Of course I know we don't have a truly free market, but that is the theory at least.