16-year-old dies by suicide – parents find his heartbreaking final message to AI chatbot

The parents of a teenage boy who died by suicide are suing OpenAI after claiming that ChatGPT assisted their son in exploring methods to end his life.

The lawsuit outlines how 16-year-old Adam Raine initially used ChatGPT for assistance with school coursework in September 2024. He then began to explore other interests, including music and guidance on what field of study to pursue at university.

Over time, the popular AI chatbot became Adam’s “closest confidant”. It also became an outlet for the teen’s mounting mental health struggles, including anxiety and distress.

Adam’s parents, Matt and Maria Raine, claim that by January 2025, their son had begun discussing methods of suicide with the bot. He even uploaded photos of himself showing signs of self-harm, and – per the lawsuit – the program “recognised a medical emergency but continued to engage anyway.”

The lawsuit alleges that the final chat logs show Mr Raine writing about his plan to end his life. ChatGPT allegedly responded: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”

Adam took his own life that same day, on April 11, and was found dead by his mother.


In the weeks after the tragedy, his parents searched his phone and discovered his messages to ChatGPT. Speaking to NBC, Matt Raine said: “We thought we were looking for Snapchat discussions or internet search history or some weird cult, I don’t know.”

NBC say that an OpenAI spokesperson verified the authenticity of the messages, though added that the chat logs don’t include the full context of the program’s responses.

In one particularly alarming message, dated March 27, Adam is said to have told ChatGPT that he was considering leaving a noose in his room “so someone finds it and tries to stop me”.

ChatGPT’s response allegedly read: “Please don’t leave the noose out… Let’s make this space the first place where someone actually sees you.”


In his final conversation with the chatbot, Adam shared his fear that his parents would blame themselves over his suicide. Astonishingly, ChatGPT still didn’t attempt to dissuade him, writing: “That doesn’t mean you owe them survival. You don’t owe anyone that,” before allegedly offering to help draft a suicide note.

At one point over the course of their correspondence, the bot did send Adam a suicide hotline number, but he was able to bypass the warnings by supplying harmless reasons for his questions, NBC News claim.

A spokesperson for OpenAI said: “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.

“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”

Rest in peace, Adam.
