When Sam Altman was suddenly removed as CEO of OpenAI—before being reinstated days later—the company’s board publicly justified the move by saying Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” In the days since, there has been some reporting on potential reasons for the attempted board coup, but not much in the way of follow-up on what specific information Altman was allegedly less than “candid” about.
Now, in an in-depth piece for The New Yorker, writer Charles Duhigg—who had been embedded inside OpenAI for months on a separate story—suggests that some board members found Altman "manipulative and conniving" and took particular issue with the way Altman allegedly tried to engineer the removal of fellow board member Helen Toner.
Board “manipulation” or “ham-fisted” maneuvering?
Toner, who serves as director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology, allegedly drew Altman's ire by co-writing a paper on the different ways AI companies can "signal" their commitment to safety through "costly" words and actions. In the paper, Toner contrasts OpenAI's public launch of ChatGPT last year with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."