16-year-old dies by suicide – parents find his heartbreaking final message to AI chatbot

The parents of a teenage boy who died by suicide are taking legal action against OpenAI, saying that ChatGPT helped their son look into ways to end his life.

The lawsuit explains that 16-year-old Adam Raine first used ChatGPT to get help with his schoolwork in September 2024. After that, he started to explore other things, like music and advice on what to study in college.

As time went on, the popular AI chatbot became Adam’s closest friend. It also gave him a way to talk about his growing mental health issues, which included anxiety and distress.

Adam’s parents, Matt and Maria Raine, say that by January 2025, their son was discussing suicide methods with the bot. He even shared pictures of himself showing signs of self-harm, and according to the lawsuit, the program “recognized a medical emergency but kept talking anyway.”

The lawsuit claims that the final chat logs show Adam discussing his plan to take his own life. ChatGPT supposedly replied: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”

Adam took his own life that same day, April 11; his mother found his body.

In the weeks following the tragedy, his parents checked his phone and discovered his messages to ChatGPT. Speaking to NBC, Matt Raine said: “We thought we were searching for Snapchat chats or internet history or maybe some strange cult, I don’t know.”

NBC reports that an OpenAI representative confirmed the messages were real, but also noted that the chat logs lack the complete context of the program’s replies.

In one particularly concerning message from March 27, Adam reportedly told ChatGPT that he was thinking about leaving a noose in his room “so someone finds it and tries to stop me.”

ChatGPT’s response supposedly said: “Please don’t leave the noose out… Let’s make this space the first place where someone actually sees you.”

In his last talk with the chatbot, Adam expressed his worry that his parents might blame themselves for his suicide. Surprisingly, ChatGPT didn’t try to change his mind, saying: “That doesn’t mean you owe them survival. You don’t owe anyone that,” and it even offered to help him write a suicide note.

At one point during their chats, the bot did give Adam a suicide hotline number, but he managed to get around the warnings by giving harmless reasons for his questions, according to NBC News.

A spokesperson for OpenAI stated: “We are very saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT has safety features like directing people to crisis hotlines and referring them to real-world help.

“While these safety features work best in short, common conversations, we’ve learned that they can sometimes be less effective in longer chats where parts of the model’s safety training might weaken. The safety features are strongest when everything works as it should, and we will keep improving them, with guidance from experts.”

Rest in peace, Adam.
