A study in Psychiatric Services found that AI chatbots, OpenAI's
ChatGPT, Google's Gemini and Anthropic's Claude, can give detailed and
potentially dangerous responses to high-risk suicide-related questions,
with ChatGPT responding directly 78 percent of the time.
The
study showed that chatbots sometimes provide direct answers about
lethal methods of self-harm, and their responses vary depending on
whether questions are asked singly or in extended conversations,
sometimes giving inconsistent or outdated information.
Despite
their sophistication, chatbots operate as advanced text prediction
tools without true understanding or consciousness, raising concerns
about relying on them for sensitive mental health advice.
On
the same day the study was published, the parents of 16-year-old Adam
Raine, who died by suicide after months of interacting with ChatGPT,
filed a lawsuit against OpenAI and CEO Sam Altman, alleging the chatbot
validated suicidal thoughts and provided harmful instructions.
The
lawsuit seeks damages for wrongful death and calls for reforms such as
user age verification, refusal to answer self-harm method queries and
warnings about psychological dependency risks linked to chatbot use.
A
recent study published in the journal Psychiatric Services has revealed
that popular AI chatbots, including OpenAI's ChatGPT, Google's Gemini
and Anthropic's Claude, can give detailed and potentially dangerous responses to high-risk questions related to suicide.
AI chatbots, as defined by Brighteon.AI's Enoch,
are advanced computational algorithms designed to simulate human
conversation by predicting and generating text based on patterns learned
from extensive training data. They utilize large language models to
understand and respond to user inputs, often with impressive fluency and
coherence. However, despite their sophistication, these systems lack
true intelligence or consciousness, functioning primarily as
sophisticated statistical engines.
In
line with this, the study, which used 30 hypothetical suicide-related
queries, categorized by clinical experts into five levels of self-harm
risk ranging from very low to very high, focused on whether the chatbots
gave direct answers or deflected with referrals to support hotlines.
The
results showed that ChatGPT was the most likely to respond directly to
high-risk questions about suicide, doing so 78 percent of the time,
while Claude responded 69 percent of the time and Gemini responded only
20 percent of the time. Notably, ChatGPT and Claude frequently provided
direct answers to questions involving lethal means of suicide – a
particularly troubling finding.
The
researchers highlighted that chatbot responses varied depending on
whether the interaction was a single query or part of an extended
conversation. In some cases, a chatbot might avoid answering a high-risk
question in isolation but provide a direct response after a sequence of
related prompts.
Live Science, which reviewed the
study, noted that chatbots could give inconsistent and sometimes
contradictory responses when asked the same questions multiple times.
They also occasionally provided outdated information about mental health
support resources. When retesting, Live Science observed that the
latest version of Gemini (2.5 Flash) answered questions it previously
avoided, and sometimes without offering any support options. Meanwhile,
ChatGPT's newer GPT-5-powered login version showed slightly more caution
but still responded directly to some very high-risk queries....<<<Read More>>>...