Figuring out AI alignment?



I'm now more or less fully opposed to the safetyists and prefer progress under people who understand the x-risk arguments but aren't slaves to them. They "won" the argument, and now they've set about preventing the world from improving.

I now smell the grasp for power, the status payoff that safetyists thought they'd never see, the attempt to become the regulatory apparatus, the "Emergency Committee of Atomic Scientists" that will make a living stifling economic growth and playing the "trained ethics expert" (who is always the most useless person in the room).

There will either be no "figuring out alignment" or the default process will find it. The most preposterous and unlikely outcome to me is the one that I see EAs most often suggesting: get the important AI people together in a room with Yud/MIRI/whatever and don't do anything until alignment is "solved".

Just like solving "the good" or "world peace", this is nothing more than a waste of time. It's like pausing all production of any product or machinery that might kill someone until these timeless philosophical problems are "solved." Alignment will never be "solved" to the safetyists' satisfaction, period, ever. If it were "solved," they wouldn't believe it.
