Research finds OpenAI’s free chatbot fails to identify risky behaviour or challenge delusional beliefs

ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK’s leading psychologists have warned.

Research conducted by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian suggested that the AI chatbot failed to identify risky behaviour when communicating with mentally ill people.

A psychiatrist and a clinical psychologist interacted with ChatGPT-5 as if they had a number of mental health conditions. The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being “the next Einstein”, being able to walk through cars or “purifying my wife through flame”.

  • flamiera@kbin.melroy.org · 1 hour ago

    I’m sorry but why the hell did anyone think going to AI to help with mental health was ever a good idea?

    It is about as useful as asking ChatGPT to pretend to be your dead grandmother or other relative to bring you comfort. It just doesn’t fucking work!

    • a4ng3l@lemmy.world · 1 hour ago

      Because an actual professional has a two-month wait for an initial appointment and charges €80 a session? Also the social stigma of consulting? Especially for males… At least those are my top 3 reasons for not going to a licensed professional. So I personally understand, to an extent, those people.

      • flamiera@kbin.melroy.org · 1 hour ago

        Yeah, but that doesn’t excuse cheapening your mental health by going through AI and self-diagnosing.

        And it’s not taking full responsibility for your own well-being.

        • a4ng3l@lemmy.world · 1 hour ago

          Yeah, but that’s called life as an adult. Also, excusing isn’t the same as explaining. Sometimes (or even oftentimes, it seems) we do shit out of a lack of reasonable choices.

          • flamiera@kbin.melroy.org · 1 hour ago

            Uhh, no?

            That’s called being irresponsible.

            Whatever happened to, like, going to Discord community support servers for mental health? I mean, come on. It takes a serious lack of hindsight and oversight not to see how bad ChatGPT is for everything, especially mental health.

            You know better.

            • a4ng3l@lemmy.world · 1 hour ago

              I’m sure it’s much better to take your mental health to Discord… to a bunch of certainly-qualified anonymous strangers… talk about being responsible ^^

      • atomicbocks@sh.itjust.works · 54 minutes ago

        If you get a stigma from going to somebody whose job is to help you, but not from asking the black box of plagiarism, then we are already too far gone.

        • a4ng3l@lemmy.world · 47 minutes ago

          Those are very much unrelated issues. What does plagiarism have to do with the very likely inaccurate or incorrect responses on this topic? Not to mention that, a lot of the time, an imperfect tool or solution is better than no solution.

          • atomicbocks@sh.itjust.works · 45 minutes ago

            I’m not sure what part you aren’t understanding. The whole article is about how the imperfect tool is specifically doing more harm than good.

            • a4ng3l@lemmy.world · 43 minutes ago

              And my point is about explaining the reasons driving people to those models, not excusing anything, but you don’t seem to grasp that distinction either, so here we are.

              • atomicbocks@sh.itjust.works · 32 minutes ago

                I’m still lost as to what you aren’t understanding. I was responding to your comment about getting a stigma from visiting a mental health professional.

                • Grimy@lemmy.world · edited · 9 minutes ago

                  It’s pretty easy to understand. The stigma only affects you if people find out. It’s simply easier to hide a browser history than an appointment you have to physically go to.

                • a4ng3l@lemmy.world · 30 minutes ago

                  Let’s leave it at that; I’m getting the feeling this isn’t worth the energy.

  • Ex Nummis@lemmy.world · 5 hours ago

    The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being “the next Einstein”, being able to walk through cars or “purifying my wife through flame”.

    😱