Research finds OpenAI’s free chatbot fails to identify risky behaviour or challenge delusional beliefs
ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK’s leading psychologists have warned.
Research conducted by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian suggested that the AI chatbot failed to identify risky behaviour when communicating with mentally ill people.
A psychiatrist and a clinical psychologist interacted with ChatGPT-5 as if they had a number of mental health conditions. The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being “the next Einstein”, being able to walk through cars or “purifying my wife through flame”.

I’m not sure what part you aren’t understanding. The whole article is about how the imperfect tool is specifically doing more harm than good.
And my point is about explaining the reasons driving people to those models, not excusing anything, but you don’t seem to grasp that distinction either, so here we are.
I’m still lost as to what you aren’t understanding. I was responding to your comment about getting a stigma from visiting a mental health professional.
Let’s leave it at that, I’m getting the feeling that this isn’t worth the energy.
It’s pretty easy to understand. The stigma only affects you if people find out. It’s simply easier to hide a browser history than an appointment you have to physically go to.
It’s pretty easy to understand that that is what I meant. If society is punishing you more for the latter than the former then we are already too far gone.
The stigma is the same for both (more or less). It’s easier to escape punishment, as you say, with the AI. There’s more risk with appointments. Tbh, you are missing the point entirely.