
OpenAI's long-term AI risk team has been disbanded
Former team members have either resigned or been absorbed into other research groups.

Will Knight, wired.com – May 18, 2024 3:54 pm UTC

In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's co-founders, was named as the co-lead of this new team. OpenAI said the team would receive 20 percent of its computing power.

Now OpenAI's superalignment team is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other co-lead. The group's work will be absorbed into OpenAI's other research efforts.

Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board.

Hours after Sutskever's departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team's other co-lead, posted on X that he had resigned.

Neither Sutskever nor Leike responded to requests for comment. Sutskever did not offer an explanation for his decision to leave but offered support for OpenAI's current path in a post on X. "The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under its current leadership," he wrote.

Leike posted a thread on X on Friday explaining that his decision stemmed from a disagreement over the company's priorities and the resources his team was being allocated.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike wrote. "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."

The dissolution of OpenAI's superalignment team adds to recent evidence of a shakeout inside the company in the wake of last November's governance crisis. Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an Internet forum post in his name.

Two more OpenAI researchers working on AI policy and governance also appear to have left the company recently. Cullen O'Keefe left his role as research lead on policy frontiers in April, according to LinkedIn. Daniel Kokotajlo, an OpenAI researcher who has coauthored several papers on the dangers of more capable AI models, quit OpenAI due to "losing confidence that it would behave responsibly around the time of AGI," according to a posting on an Internet forum in his name. None of the researchers who have apparently left responded to requests for comment.

OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or the future of its work on long-term AI risks. Research on the risks associated with more powerful models will now be led by John Schulman, who co-leads the team responsible for fine-tuning AI models after training.

The superalignment team was not the only team pondering the question of how to keep AI under control, although it was publicly positioned as the main one working on the most far-off version of that problem. The blog post announcing the superalignment team last summer stated: "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."

OpenAI's charter binds it to safely developing so-called artificial general intelligence, or technology that rivals or exceeds humans, for the benefit of humanity. Sutskever and other leaders there have often spoken about the need to proceed cautiously. But OpenAI has also been early to develop and release experimental AI projects to the public.

OpenAI was once unusual among prominent AI labs for the eagerness with which research leaders like Sutskever talked of creating superhuman AI and of the potential for such technology to turn on humanity. That kind of doomy AI talk became much more widespread last year after ChatGPT turned OpenAI into the most prominent and closely watched technology company on the planet. As researchers and policymakers wrestled with the implications of ChatGPT and the prospect of vastly more capable AI, it became less controversial to worry about AI harming humans or humanity as a whole.

The existential angst has since cooled, and AI has yet to make another massive leap, but the need for AI regulation remains a hot topic. And this week OpenAI showcased a new version of ChatGPT that could once again change people's relationship with the technology in powerful and perhaps problematic new ways.

The departures of Sutskever and Leike come shortly after OpenAI's latest big reveal: a new multimodal AI model called GPT-4o that allows ChatGPT to see the world and converse in a more natural and humanlike way. A livestreamed demonstration showed the new version of ChatGPT mimicking human emotions and even attempting to flirt with users. OpenAI has said it will make the new interface available to paid users within a couple of weeks.

There is no indication that the recent departures have anything to do with OpenAIs efforts to develop more humanlike AI or to ship products. But the latest advances do raise ethical questions around privacy, emotional manipulation, and cybersecurity risks. OpenAI maintains another research group called the Preparedness team that focuses on these issues.

This story originally appeared on wired.com.