Highlights

  • As we reported five days ago, Geoffrey Hinton has been awarded the Nobel Prize for Physics. It is perhaps ironic that while he is being lauded for his contributions to machine learning and neural networks, he is keen to focus our attention on the risks such systems pose. His concern that his work might result in an artificial intelligence superior to human intelligence led him to quit his position as Vice President of Google in order to speak more freely about the dangers he perceives. At that time he was interviewed by Will Douglas Heaven as part of the MIT EmTech Digital Conference; see Hinton Explains His New Fear of AI.
  • The first interview came at 3 o’clock in the morning, just an hour after he’d taken the call from the Nobel Prize committee, which he initially suspected could be a spoof - were it not for the strong Swedish accent - as he wasn’t even aware of being nominated. But, knowing about the First Reactions Interviews conducted for the Nobel Prize website, he was ready to answer questions put by Adam Smith, including:
  • “I wish I had a sort of simple recipe that if you do this, everything’s going to be okay. But I don’t. In particular with respect to the existential threat of these things getting out of control and taking over, I think we’re at a kind of bifurcation point in history where in the next few years we need to figure out if there’s a way to deal with that threat. I think it’s very important right now for people to be working on the issue of how will we keep control? We need to put a lot of research effort into it. I think one thing governments can do is force the big companies to spend a lot more of their resources on safety research. So that, for example, companies like OpenAI can’t just put safety research on the back burner.”
  • At the beginning of this interview Hinton states that he is pleased that the world is beginning to take seriously the existential threat that “these things”, referring to large language models like OpenAI’s ChatGPT, will get smarter than us and want to take control away from us. Asked what triggered this concern, he said it was down to two things. Firstly, his own experience of “playing” with the large chatbots, both Google’s Bard and ChatGPT, and discovering that they clearly understand a lot: “They have a lot more knowledge than any person - they’re like a not very good expert at more or less everything”.
  • The second was coming to understand the way in which they’re a superior form of intelligence: “because you can make many copies of the same neural network, each copy can look at a different bit of data and then they can all share what they learned”. Hinton asks us to imagine having the knowledge of 10,000 degrees shared efficiently. The worry is that with this superior knowledge, the AI might want to take control.
  • “My guess is in between five and 20 years from now there’s a probability of about a half that we’ll have to confront the problem of them trying to take over.”