Confronting doom — Feds appoint AI doomer to run AI safety at US institute

Former OpenAI researcher once predicted a 50 percent chance of AI killing all of us.

Ashley Belanger – Apr 17, 2024 8:30 pm UTC

The US AI Safety Institute, part of the National Institute of Standards and Technology (NIST), has finally announced its leadership team after much speculation.

Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but who is also known for predicting that “there’s a 50 percent chance AI development could end in ‘doom.’” While Christiano’s research background is impressive, some fear that by appointing a so-called “AI doomer,” NIST risks encouraging non-scientific thinking that many critics view as sheer speculation.
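For context on the technique Christiano helped pioneer: RLHF, in broad strokes, collects human judgments comparing pairs of model outputs, fits a reward model to those preferences, and then optimizes the model against that learned reward. The short Python sketch below is a toy illustration of that three-stage loop under heavily simplified assumptions (a “policy” over four canned responses, simulated preference labels, a Bradley-Terry reward model, and a plain REINFORCE update); it is not Christiano’s or OpenAI’s actual implementation, and every name and number in it is illustrative.

    # Toy sketch of the RLHF loop; all names and numbers are illustrative.
    import math, random

    random.seed(0)

    RESPONSES = ["rude", "neutral", "helpful", "very helpful"]
    TRUE_PREFERENCE = {r: i for i, r in enumerate(RESPONSES)}  # hidden human taste

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        return [e / z for e in exps]

    # Step 1: collect pairwise human preference labels (simulated here).
    comparisons = []
    for _ in range(200):
        a, b = random.sample(RESPONSES, 2)
        winner, loser = (a, b) if TRUE_PREFERENCE[a] > TRUE_PREFERENCE[b] else (b, a)
        comparisons.append((winner, loser))

    # Step 2: fit a Bradley-Terry reward model r(x) so that
    # P(winner beats loser) = sigmoid(r(winner) - r(loser)).
    reward = {r: 0.0 for r in RESPONSES}
    lr_rm = 0.1
    for _ in range(100):
        for winner, loser in comparisons:
            p_win = 1 / (1 + math.exp(reward[loser] - reward[winner]))
            grad = 1 - p_win  # gradient of the log-likelihood
            reward[winner] += lr_rm * grad
            reward[loser] -= lr_rm * grad

    # Step 3: REINFORCE - push the policy's logits toward responses
    # the learned reward model scores highly.
    logits = {r: 0.0 for r in RESPONSES}
    lr_pi = 0.05
    for _ in range(500):
        probs = softmax([logits[r] for r in RESPONSES])
        choice = random.choices(RESPONSES, weights=probs)[0]
        baseline = sum(p * reward[r] for r, p in zip(RESPONSES, probs))
        advantage = reward[choice] - baseline
        for r, p in zip(RESPONSES, probs):
            indicator = 1.0 if r == choice else 0.0
            logits[r] += lr_pi * advantage * (indicator - p)

    final = softmax([logits[r] for r in RESPONSES])
    print({r: round(p, 3) for r, p in zip(RESPONSES, final)})

In real RLHF the responses come from a language model, the reward model is a neural network, and the optimizer is typically PPO with a KL penalty keeping the tuned model close to the original; the toy version above only preserves the shape of the loop.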

There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano’s so-called “AI doomer” views, NIST staffers were “revolting.” Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing “that Christiano’s association” with effective altruism and “longtermism could compromise the institute’s objectivity and integrity.”

NIST’s mission is rooted in advancing science by working to “promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” Effective altruists believe in “using evidence and reason to figure out how to benefit others as much as possible,” and longtermists believe that “we should be doing much more to protect future generations”; both positions are more subjective and opinion-based.

On the Bankless podcast, Christiano shared his opinions last year that “there’s something like a 10–20 percent chance of AI takeover” that results in humans dying, and “overall, maybe you’re getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level.”

“The most likely way we die involves – not AI comes out of the blue and kills everyone – but involves we have deployed a lot of AI everywhere… [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us,” Christiano said.

Critics of so-called “AI doomers” have warned that focusing on any potentially overblown talk of hypothetical killer AI systems or existential AI risks may stop humanity from focusing on current perceived harms from AI, including environmental, privacy, ethics, and bias issues. Emily Bender, a University of Washington professor of computational linguistics who has warned about AI doomers thwarting important ethical work in the field, told Ars that because “weird AI doomer discourse” was included in Joe Biden’s AI executive order, “NIST has been directed to worry about these fantasy scenarios” and “that’s the underlying problem” leading to Christiano’s appointment.

“I think that NIST probably had the opportunity to take it a different direction,” Bender told Ars. “And it’s unfortunate that they didn’t.”

As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will “design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern,” steer processes for evaluations, and implement “risk mitigations to enhance frontier model safety and security,” the Department of Commerce’s press release said.

Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as “a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research.” Part of ARC’s mission is to test if AI systems are evolving to manipulate or deceive humans, ARC’s website said. ARC also conducts research to help AI systems scale “gracefully.”

Because of Christiano’s research background, some people, such as Divyansh Kaushik, an associate director for emerging technologies and national security at the Federation of American Scientists, think he is a good choice to helm the safety institute. On X (formerly Twitter), Kaushik wrote that the safety institute is designed to mitigate chemical, biological, radiological, and nuclear risks from AI, and Christiano is extremely qualified for testing those AI models. Kaushik cautioned, however, that “if there’s truth to NIST scientists threatening to quit” over Christiano’s appointment, “obviously that would be serious if true.”

The Commerce Department does not comment on its staffing, so it’s unclear if anyone actually resigned or plans to resign over Christiano’s appointment. Since the appointment was announced, Ars has not found any public statements from NIST staffers suggesting they are considering stepping down.

In addition to Christiano, the safety institute’s leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff. Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden’s AI executive order, will be head of international engagement.

“To safeguard our global leadership on responsible AI and ensure we’re equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer,” Gina Raimondo, US Secretary of Commerce, said in the press release. “That is precisely why we’ve selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team.”

VentureBeat’s report claimed that Raimondo directly appointed Christiano.

Bender told Ars that there’s no advantage to NIST including “doomsday scenarios” in its research on “how government and non-government agencies are using automation.”

“The fundamental problem with the AI safety narrative is that it takes people out of the picture,” Bender told Ars. “But the things we need to be worrying about are what people do with technology, not what technology autonomously does.”