
blowing the whistle —

Ex-OpenAI staff call for right to warn about AI risks without retaliation

Open letter argues for AI whistleblower provisions due to lack of government oversight.

Benj Edwards – Jun 4, 2024 9:52 pm UTC

On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling for AI companies to commit to principles allowing employees to raise concerns about AI risks without fear of retaliation. The letter, titled “A Right to Warn about Advanced Artificial Intelligence,” has so far been signed by 13 individuals, including some who chose to remain anonymous due to concerns about potential repercussions.

The signatories argue that while AI has the potential to deliver benefits to humanity, it also poses serious risks, ranging from the “further entrenchment of existing inequalities” to “manipulation and misinformation” to “the loss of control of autonomous AI systems potentially resulting in human extinction.”

They also assert that AI companies possess substantial non-public information about their systems’ capabilities, limitations, and risk levels, but currently have only weak obligations to share this information with governments and none with civil society.

Non-anonymous signatories to the letter include former OpenAI employees Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright, and Daniel Ziegler, as well as former Google DeepMind employees Ramana Kumar and Neel Nanda.

The group calls upon AI companies to commit to four key principles: not enforcing agreements that prohibit criticism of the company for risk-related concerns, facilitating an anonymous process for employees to raise concerns, supporting a culture of open criticism, and not retaliating against employees who publicly share risk-related confidential information after other processes have failed.

In May, a Vox article by Kelsey Piper raised concerns about OpenAI’s use of restrictive non-disclosure agreements for departing employees, which threatened to revoke vested equity if former employees criticized the company. OpenAI CEO Sam Altman responded to the allegations, stating that the company had never clawed back vested equity and would not do so if employees declined to sign the separation agreement or non-disparagement clause.

But critics remained unsatisfied, and OpenAI soon did a public about-face on the issue, saying it would remove the non-disparagement clause and equity clawback provisions from its separation agreements, acknowledging that such terms were inappropriate and contrary to the company’s stated values of transparency and accountability. That move from OpenAI is likely what made the current open letter possible.

Dr. Margaret Mitchell, an AI ethics researcher at Hugging Face who was fired from Google in 2021 after raising concerns about diversity and censorship within the company, spoke with Ars Technica about the challenges faced by whistleblowers in the tech industry. “Theoretically, you cannot be legally retaliated against for whistleblowing. In practice, it seems that you can,” Mitchell stated. “Laws support the goals of large companies at the expense of workers. They are not in workers’ favor.”

Mitchell highlighted the psychological toll of pursuing justice against a large corporation, saying, “You essentially have to give up your career and your psychological health to pursue justice against an organization that, by virtue of being a company, does not have feelings and does have the resources to destroy you.” She added, “Remember that it is incumbent upon you, the fired person, to make the case that you were retaliated against – a single person, with no source of income after being fired – against a trillion-dollar corporation with an army of lawyers who specialize in harming workers in exactly this way.”

The open letter has garnered support from prominent figures in the AI community, including Yoshua Bengio, Geoffrey Hinton (who has publicly warned about AI risks), and Stuart J. Russell. It’s worth noting that AI experts like Meta’s Yann LeCun have taken issue with claims that AI poses an existential risk to humanity, and other experts argue that the “AI takeover” talking point is a distraction from current AI harms like bias and dangerous hallucinations.

Even with the disagreement over what precise harms may come from AI, Mitchell feels the concerns raised by the letter underscore the urgent need for greater transparency, oversight, and protection for employees who speak out about potential risks: “While I appreciate and agree with this letter,” she says, “there needs to be significant changes in the laws that disproportionately support unjust practices from large corporations at the expense of workers doing the right thing.”