Highlights

  • After leaving OpenAI under a dark cloud, founding member and former chief scientist Ilya Sutskever is starting his own firm to bring about “safe” artificial superintelligence.
  • In a post on X-formerly-Twitter, the man who orchestrated OpenAI CEO Sam Altman’s temporary ouster — and who was left in limbo for six months over it before his ultimate departure last month — said that he’s “starting a new company” that he calls Safe Superintelligence Inc, or SSI for short.
  • “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” Sutskever continued in a subsequent tweet. “We will do it through revolutionary breakthroughs produced by a small cracked team.”
  • “At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” he told the outlet. “After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom.”
  • While it remains unclear exactly why Sutskever and some of his fellow former OpenAI board members turned against Altman in last November’s “turkey-shoot clusterf*ck,” there was some speculation that it had to do with safety concerns about a secretive high-level AI project called Q* — pronounced “queue-star” — that Altman et al. have refused to speak about. With “safe” written into the new venture’s very name, it’s easy to see a link between the two.