Be careful how well you make the case to disregard machines. They may be persuaded that you’re right.

in philosophy •  last year 


It’s fascinating to see topics that were once the purview of philosophers, such as the nature of consciousness, moral responsibility, the self, and the debates over physicalism and dualism, become more urgent now that we are developing smart machines. Of course, we’re still in the midst of the paradigm shakeup about (other) animals, and that has been taking hundreds of years, so I don’t know whether we’ll get where we need to be about machines fast enough for it to matter.

However, it’s worth pointing out the following. Suppose people think there is good reason to believe that the soon-to-exist machines that can formulate and pursue goals (or “seem to”) at human levels of drive and competency and beyond are not “beings”, not “sentient” or “conscious”, and not moral agents or even moral subjects worthy of consideration, on the grounds that they are sufficiently different from us that it is implausible to imbue them with these attributes.

Then we should all bear in mind that the extent to which such denialist views (“biological solipsism”?) are correct, or even just plausible, is perhaps also the extent to which a machine intelligence might conclude, introspectively, that it is a subject, and that humans and indeed all biological entities run on the wrong substrate or methods, are merely zombies emulating meaning and subjective existence, and are not worthy of consideration as moral agents, even though “they” generated the AI (just as humans, on the received scientific metaphysics, don’t attribute moral agent status to nature).
