Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues

More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the scale of how AI can exacerbate mental health issues.

In addition to its estimates on suicidal ideations and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.

  • JoshuaFalken@lemmy.world · 6 points · 11 hours ago

    Sounds like there’s more than a million people a week that would benefit from free or even low cost mental health care.

  • brucethemoose@lemmy.world · 4 points · edited · 11 hours ago

    Preface: I love The Guardian, and fuck Altman.

    But this is a bad headline.

    Correlation is not causation. It’s disturbing that OpenAI even possesses this data and has mined it for these statistics, and that millions of people somehow think their ChatGPT app has any semblance of privacy, but what I’m reading is that millions reached out to ChatGPT with suicidal ideations.

    Not that it’s the root cause.

    The headline is that the mental health of the world sucks, not that ChatGPT suddenly inflamed the crisis. The Guardian should be ashamed of shoehorning a “Fuck AI” angle into this for clicks when there are literally a million other malicious bits of OpenAI they could cover. This is a sad story, sourced from an app that has an unprecedented (and disturbing) window into folks’ psyches en masse, and they’ve twisted it into clickbait.

  • aesthelete@lemmy.world · 6 points · edited · 17 hours ago

    What do you expect from people who basically have no friends left, are seemingly permanently isolated, and the last “social” arrangement they have is talking to a fucking agreeable robot?

    It’s a really sad “society” we’ve built here.

  • Kyrgizion@lemmy.world · 32 points · 24 hours ago

    “How can we monetize this?”

    Just a matter of time before it recommends therapists in your area (that paid OpenAI to be suggested to you).

    • Hyperrealism@lemmy.dbzer0.com · 13 points · 23 hours ago

      I think another potential use, is targeting and manipulating vulnerable people for political reasons.

      Perhaps convince them to stay at home on election day. Perhaps convince members of undesirable demographics to disproportionately kill themselves. Perhaps make vulnerable people so paranoid or scared that they end up killing people you want to get rid of. Perhaps convince someone vulnerable to commit politically convenient violence, which can be used as a false flag or to rally support.

      Why leave that kind of thing to chance, when you can use AI to tip the scales in your favour?

    • neon_nova@lemmy.dbzer0.com · 3 points · 19 hours ago

      If people need mental health help, I would not mind options being offered to them. Even if a small percentage take advantage of it, it’s a benefit to the person.

  • FaceDeer@fedia.io · 22 points · 23 hours ago

    I don’t see anything in here to support saying ChatGPT is exacerbating anything.

    • brucethemoose@lemmy.world · 4 points · edited · 11 hours ago

      And yet the article is basically all upvotes.

      As of late, Lemmy has been feeling way too much like Reddit to me, where clickbait trends hard as long as it affirms the environment.

      I’ve even pointed this out once, and had OP basically respond with “I don’t care if it’s misinformation. I agree with the sentiment.” And mods did nothing.

      That’s called disinformation.

      Not that information hygiene is a priority here :(


      Yeah, comments often “correct” that, but that doesn’t stop the extra order of magnitude of exposure the original post gets.

      As much as the Twitter format sucks, Lemmy could really use a similar “community note” blurb right below headlines.

    • Perspectivist@feddit.uk · 7 points · edited · 17 hours ago

      Exactly. It’s like concluding that therapists are exacerbating suicidal ideation, psychosis, or mania just because their patients talk about those things during sessions. ChatGPT has 800 million weekly users - of course some of them are going to bring up topics like that.

      It’s fine to be skeptical about the long-term effects of chatbots on mental health, but it’s just as unhealthy to be so strongly anti-anything that one throws critical thinking out the window and accepts anything that merely feels like it supports what they already want to believe as further evidence that it must be so.

    • chunes@lemmy.world · 17 points · 22 hours ago

      Right? The reason people are opening up to it is that you can’t open up to a human about this.

      • j_0t@discuss.tchncs.de · 4 points · 18 hours ago

        I agree with you. Today it’s easier to open up to an AI that is basically a yes-man. Unfortunately, that’s the main problem from my point of view: we expect every one of our ideas to be accepted without any pushback, and in fact that could mean a loss of something essentially human.

      • Perspectivist@feddit.uk · 7 points · 17 hours ago

        It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

        ChatGPT repeatedly recommended that Adam tell someone about how he was feeling.

        When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing.

        • Scubus@sh.itjust.works · 1 point · 4 hours ago

          />dude wants to end it

          />Trys to figure out the most humane way

          />PlEaSe ReAcH oUt

          />unhelpful.jpg

          I can’t wait until humans have a right to die.

    • krooklochurm@lemmy.ca · 4 points · 1 downvote · 15 hours ago

      My ai gf lets me pee on her face. I keep ruining pcs but the hundreds of thousands of dollars I’ve spent is worth the sexual gratification.

  • Ech@lemmy.ca · 11 points · edited · 19 hours ago

    The fact they have data on this isn’t surprising, but it should be horrifying for anyone using the platform. This company has the data from every sad, happy, twisted, horny, and depressing reply from every one of their users, and they’re analyzing it. Best case, they’re “only” using it to better manipulate users into staying longer on their apps. More likely they’re using it for much more than that.

    • FishFace@piefed.social · 2 points · 12 hours ago

      They’d better fucking have data on this, because it is horrendously irresponsible to let people talk to bots that imitate real conversations and not track whether your bots are encouraging depressed people to kill themselves.

      • Ech@lemmy.ca · 2 points · 10 hours ago

        That’s a good point. Still doesn’t really comfort me when they’re not subject to the same standards and ethics as actual mental health workers.

        • FishFace@piefed.social · 1 point · 9 hours ago

          There is an opportunity (or there would be, if these companies were in sane jurisdictions) to try and apply some standards, because only a handful of companies are capable of hosting these bots.

          However, there are limitations because of the inherent nature of what they are. Namely, they are relatively cheap, so you can host a number of conversations with them that it is completely unmanageable to manually monitor, and they are relatively unpredictable, so the best-written safety rails will have problems (both false positives and false negatives).

          Put together, that means you can’t have AI chatbots which don’t sometimes do both: spout shit they really should not, such as encouraging suicide or reinforcing negative thoughts; and erroneously block people because the system meant to avoid that triggered falsely. And the less of one you try to have, the more of the other you get.

          That implies, to me, that AI chatbots need to be monitored for harm so that those systems can be tuned - or if need be so that the whole idea can be abandoned. But that also means that the benefits of the system need to be analysed, because it’s no good going “ChatGPT is implicated in 100 suicides - it must be turned off” if we have no data on how many suicides it may have helped prevent. As a stochastic process that mimics conversation, there will surely be cases of both.

  • Zak@lemmy.world · 6 points · edited · 22 hours ago

    possible signs of mental health emergencies related to psychosis or mania

    It can be amusing to test what triggers this response from LLMs. Perplexity will reliably do it if you propose sacrificing a person or animal to Satan, but not Ku-waha-ilo, the Hawaiian god of war, sorcery, and devourer of souls.

    I imagine a large fraction of the conversations flagged this way are people doing that rather than actually having a mental health crisis.

  • gedaliyah@lemmy.world (mod) · 1 point · edited · 21 hours ago

    Holy shit. We know that ChatGPT has a propensity to facilitate suicidal ideation, and has led to suicides. It not only fails to direct suicidal individuals to the proper help, but actually advances people toward taking action.

    How many people has this killed?


    I am a depression survivor. Depression is a disease and it can be deadly, but there is help.

    If you are having suicidal thoughts, you can get help by texting or calling 988 in North America, or text ‘SHOUT’ to 85258 in the UK.

    • FishFace@piefed.social · 1 point · 12 hours ago

      That seems to be an unresolved lawsuit, not knowledge.

      If we are to look at the influence ChatGPT has on suicide we should also be trying to evaluate how many people it allowed to voice their problems in a respectful, anonymous space with some safeguards and how many of those were potentially saved from suicide.

      It’s a situation where it’s easy to look at a victim of suicide who talked about it on ChatGPT and say that spurred them on. It’s incredibly hard to look at someone who talked about suicide with ChatGPT, didn’t kill themselves and say whether it helped them or not.

  • NoWay@lemmy.world · 2 points · 1 downvote · 23 hours ago

    I would have suicidal thoughts if I had to chat with ChatGPT… I apologize; I’ve had too many friends die by suicide. If you are having thoughts of harming yourself or others, please find help, and not from an illusionary intelligence bot.

    • JohnnyEnzyme@piefed.social · 5 points · 1 downvote · edited · 22 hours ago

      I would have suicidal thoughts if I had to chat with ChatGPT

      I’ve been using it a tonne lately, and I find that it’s in fact quite polite, friendly and helpful. Yes-- for sure it has its issues, which are important to understand ahead of time. That said, I don’t really use it to “chat,” but more as a research gofer, so YMMV. Compare that to the often useless and delusional Copilot Gemini, for example.

      not from an illusionary intelligence bot.

      I get your sentiment there, but TBC, that’s not what it is nor what it claims to be. You seem to have gotten yourself into some boogeyman thinking on that, and ultimately I don’t think that’s going to help you understand things or make educated decisions.

      Just my 2¢, of course.

      EDIT: Whoops, I mixed up Google’s and MS’s LLM, I guess. I don’t think I’ve ever actually used Copilot.

  • DarkCloud@lemmy.world · 2 points · 3 downvotes · edited · 22 hours ago

    OpenAI: “ChatGPT, estimate how many discussions on suicide you have in total per week.”

    Why believe any company using this kind of “AI”?