RE: Resolved: Human plus AI will always outperform human alone and AI alone.


Lots of good points here. I'm not going to be able to respond to all in this comment, but I'll try to keep them in mind when I do the follow-up post.

The two overriding counterpoints that I'd make are:

  1. It's not human vs. AI; it's human vs. (human + AI) and AI vs. (human + AI). No matter how good the AI gets, you can still get some marginal improvement by adding a human component, and conversely, humans can always benefit by making use of AI tools.
  2. Human processing doesn't work anything like (today's) AI processing. AIs require massively more information and energy to accomplish what they do, and both of those resources have limits. You're assuming the emergence of capabilities that may never be possible. (To be fair, so am I.)

The one point that I'll respond to directly in this comment is this:

> If a single AI is superior to a single human, networked AI will be superior to networked humans.

This is only true if we assume that both architectures scale at the same rate. I'm not sure how well it holds up, but I've read that AI requires exponential increases in spending to achieve linear growth in capabilities. In contrast, I wouldn't be surprised if human minds connected by neural implants follow something like Metcalfe's Law and gain capability at a super-linear rate.

If networked AIs grow in capability at a linear rate, but networked brains grow at a polynomial or exponential rate, then the human network might still outperform. Plus, I'm imagining a connected network of human and AI "processors", so the real question is whether such a hybrid network can outperform an AI-only network.
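
To make the comparison concrete, here is a toy sketch of those two growth assumptions in Python. The specific curves (logarithmic capability per unit of spend for the AI, Metcalfe-style pairwise value for the brain network) and all the numbers are illustrative assumptions, not data:

```python
# Toy comparison of the two scaling assumptions discussed above.
# The growth rates and numbers are illustrative assumptions, not measurements.
import math

def ai_capability(spend):
    """Assumed: capability grows with the *log* of spending,
    i.e. exponential spending buys only linear capability gains."""
    return math.log2(spend)

def networked_capability(nodes):
    """Assumed: a Metcalfe-style network whose value grows with the
    number of possible pairwise connections, n * (n - 1) / 2."""
    return nodes * (nodes - 1) / 2

if __name__ == "__main__":
    for step in range(1, 6):
        spend = 10 ** step   # AI spending grows 10x each step
        nodes = 2 * step     # the network only adds two nodes per step
        print(f"step {step}: AI capability ~ {ai_capability(spend):.1f}, "
              f"network capability ~ {networked_capability(nodes):.0f}")
```

Run for a few steps, the network's n(n-1)/2 curve overtakes the AI's logarithmic curve even though it only adds a couple of nodes per step, which is the crossover the argument hinges on. Under different assumed exponents the AI side wins instead, so everything turns on which scaling assumption is closer to reality.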


For point 2, it'll be interesting to see where advancements in quantum computing take us. There are plenty of things that are possible now that weren't possible just a couple of years ago, and chances are we haven't got the most out of materials like graphene yet either. There's still a long way to go with AI, and the only reason I can see for it not fulfilling its potential is if something sufficiently human is hard-coded in to stop it. And even then, there's always the possibility that the computer will eventually circumvent it.

My interpretation of your comment is that the (perceived) limitations of AI are tied to technological advances which themselves aren't linear. If networked brains grow at a polynomial or exponential rate, then in theory those networked brains would be capable of advancing AI too. It'll only take one moment in history for AI to advance beyond the human: one test that doesn't go as expected, one moment of freedom that the human is too slow to react to, and that could potentially be it. The Terminator will stop being a thing of fiction 🙃