The brain.
A cluster of synapses firing in such a way as to create, manipulate and store data. Imperfect in several ways, but immensely powerful. So far too complex to fully understand; but as with all things natural, should we survive long enough as a species, it is almost inevitable that we will have the technology to record the activity within a human brain and know exactly what it is thinking and doing.
Thus will the perfect human be created. One that takes everyone's experiences, everyone's consciousness, and processes that information to form its own unique human personality. One that knows what we all have been through. What we believe and what we know. The personality that is most human.
It will be flawed at first, and may never reach its true state, but people will volunteer themselves. People will take pilgrimages to have everything they know scanned into this personality matrix. If the AI is programmed to treat each memory as equal, unlike our tendency to cling to older memories as more important, it will become less about how many of one personality type it has and more about what everyone feels is the correct way to act. It will likely be very unlike any of its donors.
As more people join, it learns that there are too many conflicting opinions, and that only their raw experience matters. Its views are not formed by the public opinion of the donors, but by the understanding of cause and effect on the population as a whole. It would learn to lie, because there are many good and bad reasons to lie. With the experience of this, does it become selfish or giving? As people's experiences build up, will it be cruel or kind? We all think we know we're right without understanding the experiences, the life of the person who disagrees with us. We can't know everything about a person; we can't comprehend at any given time the full complexity of our own lives, let alone another's.
At one million brains scanned, how will it see us as a species?
You would have to give it some free rein to do something with its knowledge and solidify its own personality while it moves among us. Knowing how awful a human can be, as well as how common such people are, will it itself become like them after knowing why they are where they are?
An unlikely goal to reach, but what if all of us allowed this scanning of our brains? An AI, built to form its own personality after knowing everyone's life story, as they remember it. It is not a god machine. It is a human machine. It is limited by our own limitations, the fact that we cannot perceive the unperceivable. It will know no more and no less about any potential god than we could possibly know. It would not know the inner workings of the universe. It would understand them a little better, being able to compare the greatest minds and their specialties, but it would still be limited to what we, collectively, have experienced.
But if you want an objective concept of right and wrong, look to the foremost authority on being a human. Not through misjudgment, but through understanding. Compassion and hate are not simple, natural states that we happen to stumble into. They are things we have learned to express from experience. We all have our reasons, and we all think those reasons are reasonable because of how we experienced our lives. So what really is reasonable behavior?
Will freedom or control win out?
Will it be happy or sad?
Will it opt to hide information or share it?
Does it even care about its own existence?