Former OpenAI and Anthropic employees are advocating for the right to warn about AI risks.

in hive-166405 •  5 months ago 

Former employees of OpenAI, Anthropic, and DeepMind are urging frontier AI companies to strengthen whistleblower protections so that staff can publicly raise concerns about advanced AI systems, citing worries that safety is being deprioritized.

On June 4, 13 current and former employees of OpenAI, Anthropic, and DeepMind, joined by the "Godfathers of AI" Yoshua Bengio and Geoffrey Hinton and renowned AI scientist Stuart Russell, launched the "Right to Warn AI" petition.

The statement aims to establish a commitment from frontier AI companies to enable employees to raise AI-related concerns both internally and with the public.

William Saunders, a former OpenAI employee and supporter, emphasized the importance of sharing information about potential risks with independent experts, governments, and the public.

The people with the deepest knowledge of frontier AI systems and their deployment risks are often unable to speak out, deterred by the threat of retaliation and by broad confidentiality agreements.

The petition centers on a set of right-to-warn principles.

It puts forward four main asks of AI developers. The first is to eliminate non-disparagement clauses covering risk-related speech, so that companies can no longer silence employees through agreements that bar them from raising AI-related concerns or punish them for doing so.

The remaining asks are to create anonymous channels for reporting AI risks, to foster a culture of open criticism, and to protect whistleblowers by committing not to retaliate against employees who disclose information about AI risks.

AI safety concerns are mounting as labs race to ship ever more capable models, particularly in the pursuit of artificial general intelligence (AGI), with safety work often taking a back seat. Former OpenAI employee Daniel Kokotajlo said he left the company because its "move fast and break things" approach is unsuited to technology this powerful and poorly understood. Former board member Helen Toner has claimed that OpenAI CEO Sam Altman was dismissed for withholding information from the board.


I see you have published your posts in the Steem POD community several times. I should mention that this community is specifically for reporting.

Hopefully you will be wise in choosing a community.
