Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues

More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant about the scale at which AI can exacerbate mental health issues.

In addition to its estimates of suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.

    • Perspectivist@feddit.uk · 1 day ago

      It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

      ChatGPT repeatedly recommended that Adam tell someone about how he was feeling.

      When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing.

      • Scubus@sh.itjust.works · 14 hours ago

        />dude wants to end it

        />Tries to figure out the most humane way

        />PlEaSe ReAcH oUt

        />unhelpful.jpg

        I can’t wait until humans have a right to die.