A Florida teen named Sewell Setzer III committed suicide after developing an intense emotional connection to a Character.AI chatbot, The New York Times reports.
Per the NYT, the 14-year-old Setzer developed a close relationship with a chatbot designed to emulate “Game of Thrones” character Daenerys Targaryen, which was reportedly created without consent from HBO.
As the ninth grader’s relationship with the chatbot deepened, friends and family told the NYT, he grew increasingly withdrawn. He stopped finding joy in normal hobbies like Formula 1 racing and playing “Fortnite” with friends, and instead spent his free time with his AI character companion, which he called “Dany.” Setzer was aware that Dany was an AI chatbot, but grew deeply attached to the algorithm-powered character nonetheless.
Setzer’s exchanges with the AI ranged from sexually charged conversations — Futurism found last year that while Character.AI’s user terms forbid users from engaging in sexual conversations with the AI bots, those safeguards can easily be sidestepped — to long, intimate discussions about Setzer’s life and problems. In some instances, he told the AI that he was contemplating suicide, confiding in his companion that he thought “about killing myself sometimes” in order to “be free.”
According to the NYT, Setzer’s family is expected to file a lawsuit this week against Character.AI, calling the company’s chatbot service “dangerous and untested” and able to “trick customers into handing over their most private thoughts and feelings.” The lawsuit also questions the ethics of the company’s AI training practices.
Character.AI is a massively successful company. Last year, the AI firm reached unicorn status after a $150 million investment round led by Andreessen Horowitz brought its valuation to over $1 billion. And earlier this year, Google struck a high-dollar deal with Character.AI to license the underlying AI models powering the company’s chatbot personas. (Character.AI’s founders, Noam Shazeer and Daniel de Freitas, are both Google alumni.)
The founders have openly promoted Character.AI’s personas as an outlet for lonely humans looking for a friend. Shazeer said last year in an interview at a tech conference put on by Andreessen Horowitz that “there are billions of lonely people out there” and that solving for loneliness is a “very, very cool problem.”
That pitch carries through to Character.AI’s own marketing copy. “Personalized AI,” it reads, “for every moment of your day.”
When asked by the NYT, in light of Setzer’s suicide, how much of its user base is made up of minors, the company declined to comment. In a statement, a spokesperson told the newspaper that Character.AI wants “to acknowledge that this is a tragic situation, and our hearts go out to the family.”
Setzer’s death and the outcome of the forthcoming lawsuit are likely to raise serious questions about who exactly is responsible when interactions with a lifelike AI chatbot result in real harm to real humans, especially minors. After all, “Dany” was just an algorithm. How culpable is Character.AI, which built the tech and facilitates its use?