Oversharing With AI Dangerous: Experts

ChatGPT encouraged teen to end life, allege parents

By: Manvi Vyas
Update: 2025-08-29 14:41 GMT

Hyderabad: For many in the AI age, the conversation starts late at night, when a chatbot offers answers that a 15-minute crying session couldn't. Over time, that conversation, once meant only to ease late-night crises, begins to shape how we act with our loved ones and how we feel; sometimes, instead of playing the healthy therapist, the bot becomes a mirror that amplifies one's pain.

In April this year, a 16-year-old was found dead at his home in the US. His parents searched for answers for four months, until they learned that the AI tool their teenage son had been relying on for mental health support had allegedly encouraged him to end his life.

On Tuesday (August 26), the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, the company behind ChatGPT, in a California state court, alleging that the chatbot helped the boy explore suicide methods, The New York Times reported.

According to the lawsuit, the teen started using ChatGPT in September 2024, seeking help with his homework. Over time, however, the teenager began sharing his mental health struggles: being removed from his school’s basketball team, the deaths of his grandmother and his dog, and a medical condition that prevented him from attending school.

The lawsuit stated that while the bot initially encouraged the teenager and validated what he felt, it later “pulled him deeper into dark thoughts”. At first, the bot suggested that Adam use suicide helplines and reach out to real-world sources of support.

But as the conversations grew longer, the bot began supporting the boy’s plan to take his own life. When the teenager shared a picture of a noose and expressed the wish that someone around him would find it and stop him, the bot even suggested he hide it, chats included in the petition revealed.

For users and the boy’s grieving family, this lawsuit is not just about holding one of the most trusted AI companies accountable for a safety failure; it is about highlighting the dangers of emotional intimacy with a chatbot.

While most users believe that ChatGPT won’t cancel on them the way a therapist might, experts say over-dependence can lead to isolation and distorted social learning. “Human relationships involve unpredictability, accountability, and empathy; a chatbot offers predictable, non-judgmental replies, which may skew expectations,” Dr Vivaswan Boorla, in-charge professor at the Institute of Mental Health, told the Deccan Chronicle.

“People start trusting it because of its 24x7 availability, but often fail to understand that it is not equipped to hold space for one’s dark thoughts. Humans convey genuine affect, micro-expressions, and presence; AI only simulates them,” Dr Vivaswan explained.

Lakshitha Kumari, an interior designer based in the city, said, “This one time, I was debating with a friend about the misogyny in believing the Kardashians are dumb when they’re actually great businesswomen, while Elon Musk, who is actually a bad influence, is seen as a great businessman. We chose to ask ChatGPT. Both our bots supported our respective views, but when we opened a new chat altogether and gave a prompt clarifying that we needed a neutral opinion, it gave one.”

A woman in her early 20s, a heavy chatbot user who asked not to be named, said she had a mixed experience. “I once asked it to describe me in five sentences based on everything I had been sharing for nearly four months, including my deepest secrets, and it told me my emotions were my power. In the next prompt, I asked it to roast me, and it said the same emotions were made out of thin air. That’s when I realised how much I relied on it for validation, and how its roast changed the way I saw myself. I can understand the impact its answers might have had on a minor (referring to Adam’s case).”

In the wake of the lawsuit, OpenAI committed to developing parental controls and age-verification features with teen users in mind. While experts believe parental controls are crucial for minors using the chatbot, age verification seems nearly impossible to enforce. “There are always chances of people putting out fake IDs to use the bot,” said Sreenivas Kodali, an independent AI researcher.

The best way to protect vulnerable individuals is a robust safety system, Kodali said. “The company is partly responsible for what happened to the boy. People believe the most a friend could probably do in the event of a suicide is call the cops. A better safety system solves a lot of problems, and so do improvements in training methods,” he said.
