Here’s a recap of the drama that’s been brewing at OpenAI over the past year. Ilya Sutskever, OpenAI’s co-founder and former chief scientist, was part of the board that fired CEO Sam Altman last year, citing safety concerns about the technology being developed at the company.
Then what? Sutskever eventually reversed course and supported Altman’s return, but he remained notably quiet after the episode, fueling intense speculation about his role at the company. That speculation ended last month when Sutskever announced his departure from OpenAI.
All of the above would be a remarkable story in itself. But what makes it truly astounding is that Sutskever’s new company, Safe Superintelligence Inc. (SSI), has no plans to build any commercial AI products or services along the way. Sutskever says the company’s entire product roadmap is to develop safe superintelligence.
Wait, but why? The decision not to build a commercial product sparked intense speculation across social platforms last night, with many questioning the reasoning behind the move. Among the explanations put forward, one struck us as particularly interesting: that the SSI team knows superintelligence is within reach, and that it believes it can get there safely before anyone else.
This, of course, is speculative. Ilya Sutskever is a near-mythical figure in the field of AI, and his absence gave rise to the popular “Where’s Ilya?” meme. Now that Sutskever’s whereabouts are known, the AI community has a new favorite guessing game: “What will Ilya do next?”