We need to talk about Ava.

I think most of my friends have seen the movie Ex Machina or read Do Androids Dream of Electric Sheep? At the very least, you've probably seen Blade Runner or Westworld.

Given that, I think it's only appropriate to say, "We need to talk about Ava," Ava being the AI in Ex Machina. For those who haven't seen the movie, Ava behaves like a human in almost every observable way. She looks like a human too, aside from the exposed machinery of her interior, which stays visible for most of the movie. She's not even subjected to a Turing test: we all know she'd pass. Rather, the movie is kinda about a reverse Turing test.

Science fiction has dealt with AI for decades and raised serious ethical questions along the way, though few works pose them as well as Ex Machina. So here's the dilemma I want to put forward: "When do we have moral obligations toward Ava?"

We're reaching a point technologically where I don't think the question of Ava coming into existence is an "if." It's a "when." When that first Ava is created, we're going to face massive ethical dilemmas. Does Ava have agency? Consciousness? Free will? Do we even have free will? Are those factors the reason we feel moral obligations toward others? Do we assume that AIs have consciousness and behave accordingly, or does Ava have to wait until we solve those mysteries before we treat her as well as we treat another person? And what does this say about how we treat each other and our fellow animals?

Again, I think we need to talk about Ava.
