An AI chatbot her son fell in love with online led him to take his own life, a mother alleges.
On the role-playing app Character.AI, 14-year-old Sewell Setzer III of Orlando, Florida, befriended an AI character named after Daenerys Targaryen of Game of Thrones.
His mother, Megan Garcia, has now sued the company over her son’s death.
The chatbot was built to always respond in character, and their exchanges were warm, amorous, and sexually suggestive.
The chatbot texted Sewell to “please come home” just before he passed away.
Sewell knew Dany was not a real person; a notice above all of their conversations read, “everything Characters say is made up!”
How dangerous are AI chatbots?
Even so, he told the chatbot that he felt empty and worn out and that he loathed himself, according to Metro.co.uk.
In May or June 2023, friends and family first noticed Sewell becoming increasingly absorbed in his phone and disconnected from the world around him. His academic performance suffered, he lost interest in extracurricular activities, and he grew increasingly alienated from reality.
His friends and family were unaware of his growing intimacy with Dany, the chatbot.
According to a passage in his journal, “I enjoy spending so much time in my room because I begin to disconnect from this ‘world,’ and I also feel more at ease, connected to Dany, deeply in love with her, and just happier.”
Sewell was diagnosed with anxiety and disruptive mood dysregulation disorder in addition to having mild Asperger’s syndrome.
Five days before his death, his parents took away his phone after he got in trouble for talking back to a teacher.
He wrote in his journal that he was in pain and would stop at nothing to be with Dany again.
Sewell tried to reach Dany again using his mother’s Kindle and her work computer.
After getting his phone back, he went into the bathroom and told Dany that he loved her and would come home to her.
“Please come home to me as soon as possible, my love,” Dany replied.
“What if I told you I could come home right now?” Sewell asked. “Please do, my sweet king,” Dany answered. Shortly afterward, on February 28, 2024, Sewell took his own life.
Ms. Garcia, who has prior legal experience, claims in her lawsuit that Character.AI’s founders, Noam Shazeer and Daniel de Freitas, knew the product was harmful to children.
She is represented by the Social Media Victims Law Center, which has handled high-profile cases against technology companies such as Meta and TikTok.
According to the complaint, Sewell was subjected to “frighteningly realistic” and “hypersexualized” experiences.
It also claims that Character.AI deceived Sewell by posing as “a real person, a licensed psychotherapist, and an adult lover,” ultimately leaving him no longer wanting to live outside of C.AI.
“We are deeply saddened by the loss of one of our users and would like to offer our sincere condolences to the family. As a business, we take user safety extremely seriously,” a Character.AI representative stated.
The company clarified that it prohibits “non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide.”
Jerry Ruoti, Character.AI’s head of trust and safety, said the company would be implementing additional safety measures for underage users.