In a statement, OpenAI announced that it was creating a new “safety and security committee,” an offshoot of its now-infamous board of directors that, along with Altman, includes the board’s chair, Bret Taylor, Quora CEO and cofounder Adam D’Angelo, and corporate attorney Nicole Seligman.
The original Superalignment team, which had been announced less than a year ago, was meant to “steer and control AI systems much smarter than us,” but was dissolved earlier this month.
This new team’s creation also comes after the exits of several prominent OpenAI executives, including Superalignment chief Jan Leike — who just announced that he’s joining his fellow company expats at Anthropic — and team cofounder Ilya Sutskever.
In a series of posts earlier this month, Leike had accused OpenAI of abandoning its safety responsibilities, writing that safety had taken a “backseat to shiny products.”
In the latest announcement, meanwhile, OpenAI teased that it has “recently begun training its next frontier model,” though there’s no word yet on whether that model is GPT-5, the much-anticipated update to the large language model that undergirds ChatGPT.
Unlike with last fall’s debacle that saw Altman fired and promptly reinstated — affectionately referred to as the “turkey-shoot clusterfuck” — we don’t know what’s going on behind the scenes at OpenAI this time.
We still can’t say why the Superalignment team was dissolved and replaced in this fashion. Was it a cost-cutting measure, or did the dissolution stem from internal disagreements? Or both?
Given that Altman is leading the new committee and the person who ran its predecessor is now at a rival firm, we can be fairly certain of one thing: there’s plenty of drama.