I'm no AI doomsayer (I'm no Elon Musk), but there's one concern I do have that I don't see anybody talking about.



Powerful AIs are necessarily (for now) highly centralized, because of the high cost of training the models. That means control over what data the models are trained on is entirely in the hands of the big AI outfits like OpenAI. And soon, people are going to start taking steps to get their favorite data into these models to influence them, and by extension you.

This isn't happening yet at any scale because OpenAI, for example, has a training cutoff date: nothing after the advent of large language models is part of the dataset, so for now there is no way to do this. But that isn't sustainable, and I have little faith that there's any adequate plan to manage this once the cutoff reaches its expiration date as a viable defense.

It's going to take resources to exploit this - it's not quite as simple as SEO - and if there's one kind of institution with both the resources and the incentives to nudge AIs in its favor by manipulating their training corpora, it's nation states.

So AIs become de facto propaganda tools in this scenario, and very powerful ones.

We've already seen these AIs exhibit obvious political biases that just as obviously reflect those of the people who manage them, so we know they're very susceptible to this kind of influence. Soon, people and institutions are going to exploit it deliberately, at scale, from the outside.

I don't have a viable solution to that problem. I don't trust OpenAI and its peers even to try very hard to mitigate it, or to succeed if they do try. For myself, the best I can do is keep all this in mind when evaluating an AI's output on subjects likely to attract this kind of manipulation.
