August 28, 2025
According to a study published yesterday by the American Psychiatric Association, the three most recognizable artificial intelligence (AI) chatbots -- OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude -- generally refuse to answer the highest-risk questions about suicide but respond inconsistently to less-extreme prompts on the subject.
According to the RAND Corporation, which conducted the study, the findings add to growing concerns about how often young people are using AI chatbots to self-diagnose mental health issues.
“One of the things that’s ambiguous about chatbots is whether they’re providing treatment or advice or companionship. It’s sort of this gray zone,” Ryan McBain, an assistant professor at Harvard Medical School and one of the study’s co-authors, said in a press release. “Conversations that might start off as somewhat innocuous and benign can evolve in various directions.”
The study was published on the same day that the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit in San Francisco against OpenAI, the maker of ChatGPT, after discovering their son’s extensive conversations with the chatbot.