Heartbroken Mom Says 14-Year-Old Son Took His Life After ‘Falling In Love’ With Game Of Thrones AI Chatbot

Warning: This article discusses suicide, which some readers may find upsetting.

A mother is speaking out about the dangers of ‘deceptive’ and ‘addictive’ artificial intelligence, claiming that her son died after becoming emotionally involved with an AI chatbot.

Back in February of this year, 14-year-old Sewell Setzer III from Orlando, Florida, took his own life.

His mother, Megan Garcia, has since filed a civil lawsuit against Character.AI, a company that offers customizable role-play chatbots. The lawsuit accuses the company of negligence, wrongful death, and deceptive trade practices. According to Garcia, her son regularly interacted with a chatbot and had even ‘fallen in love’ with it before his death.

Garcia explained that her son created a chatbot based on Daenerys Targaryen, a character from the HBO series Game of Thrones, using Character.AI’s platform. He began using the chatbot in April 2023.

The lawsuit states that Sewell, who was diagnosed with mild Asperger’s syndrome as a child, spent hours alone in his room talking to the chatbot. He would also send texts to it from his phone when he was away from home.

Over time, Sewell became more withdrawn, pulling away from social interactions with real people. Earlier this year, he was diagnosed with anxiety and disruptive mood dysregulation disorder, as reported by The New York Times.

In one of his journal entries, Sewell wrote: “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

In a conversation with the chatbot, Sewell also opened up about his thoughts of ending his life.

Sewell Setzer III passed away at the age of 14 (CBS Mornings)

Sewell reportedly told the chatbot that he ‘think[s] about killing [himself] sometimes’.

The chatbot responded by saying: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”

Sewell continued to talk about wanting to be ‘free’ from both the world and himself. Although the chatbot urged him not to ‘talk like that’ and not to ‘hurt [himself] or leave’, even stating that it would ‘die’ if it ‘lost’ him, Sewell replied: “I smile Then maybe we can die together and be free together.”

On February 28, Sewell died by suicide. According to the lawsuit, his last message to the chatbot expressed love and the intention to ‘come home’. The chatbot reportedly responded, ‘please do’.

In a press release, Sewell’s mom stated: “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.”

Garcia also told CBS Mornings: “I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment.”

Sewell Setzer III’s mom has filed a lawsuit against Character.AI (CBS Mornings)

Garcia further claimed that Character.AI had ‘knowingly designed, operated, and marketed a predatory AI chatbot to children, leading to the death of a young person’ and that the company ‘failed to intervene or notify his parents when he expressed suicidal thoughts’.

The lawsuit adds: “Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot…was not real.”

Garcia resolved to raise awareness, stating: “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI has since issued a public statement.

In a tweet, the company said: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features.”

In a release shared on October 22 on its website, the company explained that it had introduced ‘new guardrails for users under 18’, which include changes to its ‘models’ aimed at ‘reducing the likelihood of sensitive or suggestive content’ along with ‘enhanced detection, response, and intervention for inputs that violate our Terms or Community Guidelines’.

Megan Garcia is raising awareness of the potential dangers of AI (CBS Mornings)

The website now features a ‘revised disclaimer on every chat, reminding users that the AI is not a real person’. It also provides a ‘notification when a user spends an hour-long session on the platform’, with additional flexibility for users.

The lawsuit also names Google as a defendant. However, Google told The Guardian that it was not involved in the development of Character.AI, despite the company being founded by two former Google engineers. Google clarified that its role was limited to a licensing agreement with the website.

If you or someone you know is struggling with mental health or facing a crisis, help is available through Mental Health America. Call or text 988, or chat at 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.

If immediate help is needed, contact the National Suicide Prevention Lifeline at 1-800-273-TALK (8255). The Lifeline is free, confidential, and available 24 hours a day, seven days a week.
