A former OpenAI board member provided the most detailed account yet about Sam Altman’s shocking removal as CEO last November – alleging in a new interview that Altman repeatedly lied to the board about everything from AI safety to the launch of ChatGPT.
Helen Toner – who left OpenAI as part of negotiations that paved the way for Altman’s return – revealed that the board only learned about ChatGPT’s launch after it had already occurred.
“When ChatGPT came out, November 2022, the board was not informed in advance about that,” Toner told host Bilawal Sidhu on “The TED AI Show” in an episode airing Tuesday. “We learned about ChatGPT on Twitter.”
“On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change,” Toner added.
Toner also revealed that Altman had failed to inform the board that he “owned the OpenAI Startup Fund, even though he was constantly claiming to be an independent board member with no financial interest in the company.”
“There’s more individual examples and for any individual case, Sam could always come up with some kind of innocuous sounding explanation of why it wasn’t a big deal or misinterpreted or whatever,” Toner said.
“But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us,” she added.
The Post reached out to OpenAI for comment.
OpenAI board director Bret Taylor pushed back on Toner’s claims in a statement shared with the podcast.
The statement noted that the law firm WilmerHale had conducted a review of the circumstances surrounding Altman’s firing and cleared him of wrongdoing.
“We are disappointed that Ms. Toner continues to revisit these issues,” the statement said.
“Over 95% of employees, including senior leadership, asked for Sam’s reinstatement as CEO and the resignation of the prior board,” the statement added. “Our focus remains on moving forward and pursuing OpenAI’s mission to ensure AGI benefits all of humanity.”
The previous iteration of OpenAI’s board provided few specifics when it shocked the business world by firing Altman just before Thanksgiving last year, stating only that he was ousted for not being “consistently candid in his communications.”
Altman later returned as CEO following an employee revolt and the reported intervention of key OpenAI investors, including Microsoft.
OpenAI eventually unveiled a reconstituted board that included former Twitter executive Bret Taylor, who took over as chairman, as well as ex-Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo.
Three board members involved in Altman’s firing – Toner, Tasha McCauley and Ilya Sutskever – exited when he returned to the CEO seat.
Sutskever later resumed work at OpenAI, but left the company for good earlier this month as it dissolved the “Superalignment” safety team he co-led.
Jan Leike, who co-led the safety team with Sutskever and has also resigned, claimed in an X thread that safety had “taken a backseat to shiny products” at OpenAI.
OpenAI has since announced a new safety oversight unit that includes Altman.
WilmerHale investigators found in March that a “breakdown in trust between the prior Board and Mr. Altman” had caused the turmoil.
The firm determined the ousted board members had acted in good faith.
The investigators said Altman’s firing was not related to concerns about the safety or security of OpenAI’s advanced AI research, nor was it a fight over “the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”