
Current and former OpenAI employees warn of AI’s ‘serious risks’ and lack of oversight


A group of current and former OpenAI employees published an open letter on Tuesday, expressing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and whistleblower protections for those who want to speak up.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the staff members wrote.


OpenAI, Google, Microsoft, Meta, and other companies are leading a generative AI arms race, in a market expected to reach $1 trillion in revenue within a decade, as businesses in seemingly every industry scramble to add AI-powered chatbots and agents to avoid falling behind competitors.

Current and former employees wrote that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve implemented, and the technology’s risk levels for various types of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “presently have only weak obligations to share some of this information with governments, and none with civil society. We do not believe they can all be counted on to share it voluntarily.”

The letter also expresses current and former employees’ concerns about insufficient whistleblower protections in the AI industry, stating that without effective government oversight, current and former employees are among the few people positioned to hold these companies accountable.

“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the individuals who signed the document wrote. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The letter asks AI companies to commit to not entering into or enforcing non-disparagement agreements; to establish anonymous processes for current and former employees to raise concerns with a company’s board, regulators, and others; to foster a culture of open criticism; and to not retaliate against public whistleblowing if internal reporting processes fail.

Four anonymous OpenAI employees and seven former employees signed the letter, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright, and Daniel Ziegler. Signatories included Ramana Kumar, who previously worked at Google DeepMind, and Neel Nanda, who now works at Google DeepMind after previously working at Anthropic. Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, three well-known computer scientists who have advanced the field of artificial intelligence, also endorsed the letter.

A spokesperson for OpenAI told CNBC: “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.” The Microsoft-backed company also has an anonymous integrity hotline and a Safety and Security Committee made up of board members and OpenAI leaders, the spokesperson said.

Microsoft declined to comment.

Increasing controversy for OpenAI
In May, OpenAI reversed its controversial decision to force former employees to choose between signing a non-disparagement agreement that would never expire and keeping their vested equity in the company. CNBC obtained an internal memo, which was sent to former employees and shared with current ones.

The memo, addressed to each former employee, stated that upon leaving OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”

“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” a spokesperson for OpenAI told CNBC.

Tuesday’s open letter follows OpenAI’s decision in May to disband its team focused on AI’s long-term risks, just one year after announcing the group, a person familiar with the situation told CNBC.

The individual, who spoke on the condition of anonymity, said that some of the team members were being reassigned to other teams within the company.

The team was disbanded after its leaders, OpenAI co-founders Ilya Sutskever and Jan Leike, announced their departure from the company in May. Leike stated in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

On X, CEO Sam Altman expressed sadness over Leike’s departure and stated that the company still had work to do. Soon after, OpenAI co-founder Greg Brockman shared a statement attributed to Brockman and Altman on X, claiming that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike posted to X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike said he believes far more of the company’s resources should be directed toward security, monitoring, preparedness, safety, and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he stated. “For the past few months, my team has been sailing against the wind. We sometimes struggled for [computing resources], and it became increasingly difficult to complete this critical research.”

Leike added that OpenAI should become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he stated. “OpenAI bears enormous responsibility on behalf of all humanity. However, in recent years, safety culture and processes have taken a back seat to shiny products.”

The high-profile departures came months after OpenAI weathered a leadership crisis involving Altman.

In November, OpenAI’s board fired Altman, claiming in a statement that he had not been “consistently candid in his communications with the board.”

The issue appeared to become more complex by the day, with The Wall Street Journal and other media outlets reporting that Sutskever had trained his focus on ensuring that artificial intelligence would not harm humans, whereas others, including Altman, were more eager to move forward with delivering new technology.

Altman’s dismissal sparked resignations and threats of resignation, including an open letter signed by nearly all of OpenAI’s employees, as well as outrage among investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley, and Sutskever, who had voted to remove Altman, were out. Sutskever remained on staff at the time, but no longer in his role as a board member. Adam D’Angelo, who had also voted to remove Altman, stayed on the board.

In May, OpenAI released a new AI model and desktop version of ChatGPT, as well as an updated user interface and audio capabilities, as part of its ongoing effort to expand the use of its popular chatbot. One week after OpenAI debuted its range of audio voices, the company announced that it would remove one of the viral chatbot’s voices, “Sky.”

“Sky” sparked outrage because it resembled actress Scarlett Johansson’s voice in “Her,” a film about artificial intelligence. The Hollywood star claims that OpenAI stole her voice despite her refusal to let the company use it.



