OpenAI employees are leading efforts to protect whistleblowers who raise concerns about AI risks. They are asking for stronger protections so employees can speak out without retaliation.

OpenAI Employees Advocate for Whistleblower Protection

Employees of OpenAI are leading an effort to safeguard whistleblowers who raise concerns about the risks associated with artificial intelligence (AI).

The group, consisting of both current and former OpenAI workers, is urging the ChatGPT maker and other AI firms to implement more robust protections for employees who identify safety issues related to AI technology.

The Associated Press reported that their plea, articulated in an open letter released on Tuesday, emphasized the necessity of granting researchers the freedom to raise alarms about the dangers of AI without fear of retaliation.

Daniel Ziegler, a former engineer at OpenAI and one of the organizers of the open letter, highlighted the rapid advancement of AI technologies and the significant incentives to progress quickly, underscoring the need for caution.

Ziegler shared that during his tenure at OpenAI from 2018 to 2021, he felt comfortable voicing concerns internally as he contributed to the development of techniques pivotal to ChatGPT's success.

However, he expressed apprehension that the rapid push to commercialize the technology is prompting OpenAI and its rivals to overlook potential risks.

Daniel Kokotajlo, another co-organizer who departed from OpenAI earlier this year, cited a loss of faith in the organization's commitment to responsible action. His concerns are primarily directed at OpenAI's pursuit of artificial general intelligence, aiming to develop AI systems surpassing human capabilities.

Kokotajlo emphasized that OpenAI and others in the industry have adopted the "move fast and break things" mindset, the opposite of what is needed for technology this powerful and poorly understood.

Read Also: Elon Musk Shares 'Disturbing' Letter About Sam Altman, Greg Brockman from Alleged Former OpenAI Employees

A photo taken on November 23, 2023 shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, western Germany. (Photo by KIRILL KUDRYAVTSEV/AFP via Getty Images)

OpenAI's Response

In response to the letter, OpenAI said that it currently offers channels for employees to voice their concerns, including an anonymous integrity hotline. The company emphasized its commitment to providing the most capable and safest AI systems, citing its scientific approach to managing risks. 

OpenAI also acknowledged the importance of rigorous debate surrounding AI technology and expressed its intention to collaborate with governments, civil society, and other global communities.

Of the letter's 13 signatories, four are current OpenAI employees who chose to remain anonymous. The others are former OpenAI employees and two from Google's DeepMind.

The letter urged companies to cease enforcing "non-disparagement" agreements, which penalize departing employees by stripping them of their equity if they criticize the company after leaving.

Social media backlash regarding language in OpenAI's departing worker documents prompted the company to waive those agreements for all former employees. Notably, the open letter is supported by three pioneering AI researchers: Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.

Their collective advocacy highlights concerns regarding potential risks posed by future AI systems to humanity's survival. They noted that without additional safeguards, AI could pose a threat of "human extinction." 

Related Article: Former OpenAI Board Member Reveals Board Was Unaware of ChatGPT Launch Until Public Release

Written by Inno Flores

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.