Highlights

  • As whispers of AI hype filled the air in 2018, it seemed almost inevitable that we would soon be facing a whole new world, full of near-human robots and cybernetic dogs. But with that came a host of questions: how would it all change our jobs, how might we protect ourselves from an AI takeover, and more broadly, how could AI be designed for good instead of evil?
  • Google affirmed its commitment to ethical tech development in a statement on its AI principles, including commitments not to use its AI in ways “likely to cause overall harm,” like in weapons or surveillance tech.
  • Fast forward seven years, and those commitments have been quietly scrubbed from Google’s AI principles page. The move has drawn a host of criticism over its ominous undertones.
  • “Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google,” former head of Google’s ethical AI team Margaret Mitchell told Bloomberg, which broke the story. “More problematically it means Google will probably now work on deploying technology directly that can kill people.”
  • Google isn’t the first AI company to retract its commitment not to make killbots. Last summer, OpenAI likewise deleted its pledge not to use AI for “military and warfare,” as reported by The Intercept at the time.
  • And while the news is troubling, it draws on a long history of dubious profiteering. After all, Google was the first major tech company to recognize the value of surveilling users through their data.
  • Now, the past feels like prelude. As tech companies like Google pour untold billions into developing AI, the race is on to generate revenue for impatient investors. It’s no wonder that unscrupulous AI profit models are now on the table; after all, AI is supposedly the new backbone of the company.