Miles Brundage, a longtime policy researcher at OpenAI and senior adviser to the company’s AGI readiness team, has left.
In a post on X on Wednesday and in an essay in his newsletter, Brundage said that he thinks he’ll have more impact as a researcher and advocate in the nonprofit sector, where he’ll have “more of an ability to publish freely.”
“Part of what made this a hard decision is that working at OpenAI is an incredibly high-impact opportunity, now more than ever,” Brundage said. “OpenAI needs employees who care deeply about the mission and who are committed to sustaining a culture of rigorous decision-making about development and deployment (including internal deployment, which will become increasingly important over time).”
With Brundage’s departure, OpenAI’s economic research division, which until recently was a sub-team of AGI readiness, will move under OpenAI’s new chief economist, Ronnie Chatterji. The remainder of the AGI readiness team — which is winding down — will be distributed among other OpenAI divisions, Brundage says. Joshua Achiam, head of mission alignment, will take on some of AGI readiness’ projects.
An OpenAI spokesperson told TechCrunch that the company “fully supports” Brundage’s decision to pursue his policy research outside industry and is “deeply grateful” for his contributions.
“Brundage’s plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact,” the spokesperson said in a statement. “We’re confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government.”
Brundage joined OpenAI in 2018 as a research scientist and later became the company’s head of policy research. Prior to OpenAI, Brundage was a research fellow at the University of Oxford’s Future of Humanity Institute.
On the AGI readiness team, Brundage had a particular focus on the responsible deployment of language generation systems such as ChatGPT. He also led other initiatives at the company, including OpenAI’s external red teaming program and its first “system card” reports documenting AI model capabilities and limitations.
In recent years, OpenAI has been accused by several former employees — and board members — of prioritizing commercial products at the expense of AI safety. In his post on X, Brundage urged OpenAI employees to “speak their minds” about how the company can do better.
“Some people have said to me that they are sad that I’m leaving and appreciated that I have often been willing to raise concerns or questions while I’m here … OpenAI has a lot of difficult decisions ahead, and won’t make the right decisions if we succumb to groupthink,” he wrote.
OpenAI has been shedding high-profile execs in recent weeks amid reported disagreements over the company’s direction. CTO Mira Murati, chief research officer Bob McGrew, and research VP Barret Zoph announced their resignations in late September. Prominent research scientist Andrej Karpathy left OpenAI in February; months later, OpenAI co-founder and former chief scientist Ilya Sutskever quit, along with ex-safety leader Jan Leike. In August, co-founder John Schulman said he would leave OpenAI. And Greg Brockman, the company’s president, is on extended leave.
It’s been a rather unflattering day for OpenAI.
Wednesday morning, the company was the subject of a New York Times profile of former OpenAI researcher Suchir Balaji, who said he left the company because he no longer wanted to contribute to technologies he believed would bring society more harm than good. Balaji also accused OpenAI of violating copyright by training its models on IP-protected data without permission — an allegation others have made against the organization in class action lawsuits.