AI can evaluate it using an expert system approach, or several approaches combined. AI already provides decision support in other domains. The AI would have to both learn from many examples of this kind of situation to determine the likely consequences, and rely on deductive reasoning. AI is good at both of these.
So if the AI has seen this set of circumstances 1000+ times, then it may be able to predict something from it. The problem isn't whether or not AI can evaluate it, because it certainly can. The bigger question is whether the AI should make the ultimate decision.
I think the owner/user should remain in control and merely get advice from the AI. I used the phrase "moral search engine" because the only thing I want the AI to do is return moral guidance in answer to a question.
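Roughly what I have in mind, as a toy sketch (the features, past cases, and `recommend` function are entirely invented for illustration): the system learns from many labelled past scenarios, hands back a recommendation with a confidence score, and the final decision stays with the user.

```python
# A minimal sketch of an "advisory" model: it learns from many labelled past
# scenarios and only recommends an action with a confidence score; the human
# user still makes the final call. All data and features here are made up.
from collections import Counter

# Each past case: (feature vector describing the situation, action a human judged correct)
# Hypothetical features: [pedestrians_at_risk, passengers_at_risk, speed_kmh]
past_cases = [
    ([3, 1, 50], "brake"),
    ([0, 1, 80], "swerve"),
    ([2, 2, 60], "brake"),
    ([1, 4, 90], "swerve"),
    # ... in practice this would be the "1000+ times" worth of examples
]

def recommend(situation, k=3):
    """Return (recommended_action, confidence) from the k most similar past cases."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(past_cases, key=lambda case: distance(case[0], situation))[:k]
    votes = Counter(action for _, action in nearest)
    action, count = votes.most_common(1)[0]
    return action, count / k

action, confidence = recommend([2, 1, 70])
print(f"Advice: {action} (confidence {confidence:.0%}) -- final decision stays with the user")
```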
This condition of needing many samples is the culprit here. Does that mean the first 1000 will die unfairly until the AI has learned something about every situation? No, the AI would need to have empathy and the ability to reason.
No, it doesn't have to learn from life-and-death scenarios. It can learn in the context of a game where there are infinite lives.
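Something like this toy Q-learning loop is what I mean (the states, actions, and reward numbers are all invented): the agent can "die" thousands of times in the simulation at zero real-world cost and still converge on the sensible action.

```python
# A rough sketch of the "learn in a game" idea: tabular Q-learning on a toy
# simulated scenario, where the agent can crash endlessly at no real cost.
import random

states = range(5)            # positions along a simulated road
actions = ["brake", "swerve", "continue"]
Q = {(s, a): 0.0 for s in states for a in actions}

def simulate(state, action):
    """Toy world model: returns (next_state, reward, done). Purely hypothetical."""
    if state == 3 and action != "brake":     # an obstacle appears at position 3
        return state, -100.0, True           # "death" in the game costs nothing real
    if state == 4:
        return state, +10.0, True            # reached the destination safely
    return state + 1, -1.0, False            # small cost per step

alpha, gamma, epsilon = 0.1, 0.9, 0.2
for episode in range(10_000):                # infinite lives: just run more episodes
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward, done = simulate(state, action)
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print("Learned action at the obstacle:", max(actions, key=lambda a: Q[(3, a)]))
```

After enough episodes this should print "brake", discovered purely by trial and error inside the simulation rather than from real casualties.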
It could learn from past events where people labelled what the correct action should have been. But then again, the "correct" actions will be biased by the people doing the teaching. Also, each situation has a different set of data that should be considered important; I have no idea how an AI could select, or even construct, the set of inputs to apply to each situation.
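To illustrate the bias point with a deliberately crude sketch (all events and labels are invented): whatever the teaching people said becomes the model's notion of "correct", so two different groups of labellers produce two different moralities from the same events.

```python
# Minimal sketch of label bias: the model's "correct" action is just whatever
# the labelling humans said. Train on two different teachers' labels for the
# same past events and you get two different answers. All data is invented.
from collections import Counter, defaultdict

# Two hypothetical groups of people labelling the same past events differently.
labels_group_a = {"child_runs_into_road": "brake", "animal_on_highway": "brake",    "debris_ahead": "swerve"}
labels_group_b = {"child_runs_into_road": "brake", "animal_on_highway": "continue", "debris_ahead": "continue"}

def train(labelled):
    """'Training' here is just tallying the teacher's answers per event type."""
    model = defaultdict(Counter)
    for event, action in labelled.items():
        model[event][action] += 1
    return {event: votes.most_common(1)[0][0] for event, votes in model.items()}

model_a = train(labels_group_a)
model_b = train(labels_group_b)
print("animal_on_highway ->", model_a["animal_on_highway"], "vs", model_b["animal_on_highway"])
# Same event, different teachers, different "correct" action -- the bias carries straight through.
```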