Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues
More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the extent to which AI can exacerbate mental health issues.
In addition to its estimates of suicidal ideation and related interactions, OpenAI said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.


I would have suicidal thoughts if I had to chat with ChatGPT… I apologize, I’ve had too many friends die by suicide. If you are having thoughts of harming yourself or others, please find help, and not from an illusionary intelligence bot.
I’ve been using it a tonne lately, and I find that it’s in fact quite polite, friendly and helpful. Yes -- for sure it has its issues, which are important to understand ahead of time. That said, I don’t really use it to “chat,” but more as a research gofer, so YMMV. Compare that to the often useless and delusional Gemini, for example.
I get your sentiment there, but TBC, that’s not what it is nor what it claims to be. You seem to have gotten yourself into some boogeyman thinking on that, and ultimately I don’t think that’s going to help you understand things or make educated decisions.
Just my 2¢, of course.
EDIT: Whoops, I mixed up Google’s and MS’s LLM, I guess. I don’t think I’ve ever actually used Copilot.
I find it condescendingly sycophantic.
Yup. And you can adjust against that and make it a semi-permanent setting.
Joking about suicide is lame, asshole.