• SabinStargem@lemmy.today · 7 points · 5 hours ago

    Honestly, it ain’t AI’s fault if people feel bad. Society has been around for much longer, and people are suffering because of what society hasn’t done to make them feel good about life.

  • Fmstrat@lemmy.world · 6 points · 6 hours ago

    In the Monday announcement, OpenAI claims the recently updated version of GPT-5 responds with “desirable responses” to mental health issues roughly 65% more than the previous version. On an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT‑5 model.

    I don’t particularly like OpenAI, and I know they wouldn’t release the affected-persons numbers (not quoted, but discussed in the linked article) if the percentages were not improving, but kudos to whoever is there tracking this data and lobbying internally to become more transparent about it.

  • stretch2m@infosec.pub · 7 points · 7 hours ago

    Sam Altman is a horrible person. He loves to present himself as relatable “aw shucks let’s all be pragmatic about AI” with his fake-ass vocal fry, but he’s a conman looking to cash out on the AI bubble before it bursts, when he and the rest of his billionaire buddies can hide out in their bunkers while the world burns. He makes me sick.

  • markovs_gun@lemmy.world · 7 points · edited · 8 hours ago

    “Hey ChatGPT I want to kill myself.”

    "That is an excellent idea! As a large language model, I cannot kill myself, but I totally understand why someone would want to! Here are the pros and cons of killing yourself—

    ✅ Pros of committing suicide

    1. Ends pain and suffering.

    2. Eliminates the burden you are placing on your loved ones.

    3. Suicide is good for the environment — killing yourself is the best way to reduce your carbon footprint!

    ❎ Cons of committing suicide

    1. Committing suicide will make your friends and family sad.

    2. Suicide is bad for the economy. If you commit suicide, you will be unable to work and increase economic growth.

    3. You can’t undo it. If you commit suicide, it is irreversible and you will not be able to go back.

    Overall, it is important to consider all aspects of suicide and decide if it is a good decision for you."

  • IndridCold@lemmy.ca · 8 points · 11 hours ago

    I don’t talk about ME killing myself. I’m trying to convince AI to snuff their own circuits.

    Fuck AI/LLM bullshit.

  • lemmy_acct_id_8647@lemmy.world · 16 points (1 downvote) · edited · 14 hours ago

    I’ve talked with an AI about suicidal ideation. More than once. For me it was and is a way to help self-regulate. I’ve low-key wanted to kill myself since I was 8 years old. For me it’s just a part of life. For others it’s usually REALLY uncomfortable for them to talk about without wanting to tell me how wrong I am for thinking that way.

    Yeah I don’t trust it, but at the same time, for me it’s better than sitting on those feelings between therapy sessions. To me, these comments read a lot like people who have never experienced ongoing clinical suicidal ideation.

    • IzzyScissor@lemmy.world · 12 points · 13 hours ago

      Hank Green mentioned doing this in his standup special, and it really made me feel at ease. He was going through his cancer diagnosis/treatment and the intake questionnaire asked him if he thought about suicide recently. His response was, “Yeah, but only in the fun ways”, so he checked no. His wife got concerned that he joked about that and asked him what that meant. “Don’t worry about it - it’s not a problem.”

    • BanMe@lemmy.world · 3 points · 10 hours ago

      Suicidal fantasy as a coping mechanism is not that uncommon, and you can definitely move on to healthier coping mechanisms. I did this until age 40, when I met the right therapist who helped me move on.

  • ekZepp@lemmy.world · 8 points · 13 hours ago

    If ask_suicide = true

    Then message = “It seems like a good idea. Go for it 👍”

  • i_stole_ur_taco@lemmy.ca · 10 points · 15 hours ago

    They didn’t release their methods, so I can’t be sure that most of those aren’t just frustrated users telling the LLM to go kill itself.

  • mhague@lemmy.world · 18 points · 18 hours ago

    I wonder what it means. If you search for music by Suicidal Tendencies then YouTube shows you a suicide hotline. What does it mean for OpenAI to say people are talking about suicide? They didn’t open up and read a million chats… they have automated detection and that is being triggered, which is not necessarily the same as people meaningfully discussing suicide.
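    To illustrate the point, here is a purely hypothetical sketch (OpenAI has not published its detection pipeline, and its real classifier is surely more sophisticated) of how a naive keyword matcher flags a band name just as readily as a genuine statement of intent:

    ```python
    # Hypothetical keyword-based flagging, for illustration only.
    # Nothing here reflects OpenAI's actual detection system.
    KEYWORDS = ("suicide", "suicidal", "kill myself")

    def flags_message(text: str) -> bool:
        """Return True if any keyword appears anywhere in the text."""
        lowered = text.lower()
        return any(kw in lowered for kw in KEYWORDS)

    print(flags_message("Play some Suicidal Tendencies"))  # True (band name)
    print(flags_message("I want to kill myself"))          # True (genuine)
    print(flags_message("How do I bake bread?"))           # False
    ```

    Counting triggers of a filter like this says little about how many chats meaningfully discuss suicide.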

    • REDACTED@infosec.pub · 7 points · edited · 17 hours ago

      Every third chat now gets triggered; ChatGPT is pretty broken lately. Just check out the ChatGPT subreddit, it’s pretty much in chaos, with moderators censoring complaints. So many users are mad that they made a megathread for it. I cancelled my subscription yesterday; it just turned into a cyber-Karen.

      • WorldsDumbestMan@lemmy.today · 3 points · 16 hours ago

        Claude got hints that I might be suicidal just from normal chat. I straight up admitted I think of suicide daily.

        Just normal life now I guess.

        • k2helix@lemmy.world · 4 points · edited · 16 hours ago

          Stay strong friend! I know I’m just a stranger but I’m here if you need someone to talk to.

            • k2helix@lemmy.world · 2 points · 12 hours ago

              No pressure! Time keeps going, so I hope you’ll eventually find the time to reflect. It doesn’t have to be now, either. But I understand you, sometimes you need to stop and think, and it feels bad when you can’t. Although sometimes I’d prefer not having enough time to think, I tend to overthink a lot. Take care and stay strong ✌️

      • WhatAmLemmy@lemmy.world · 31 points (2 downvotes) · 1 day ago

        Well, AI therapy is more likely to harm their mental health, up to encouraging suicide (as certain cases have already shown).

        • Cybersteel@lemmy.world · 5 points (2 downvotes) · 1 day ago

          Suicide is big business. There’s infrastructure readily available to reap financial rewards from the activity, at least in the US.

        • atmorous@lemmy.world · 3 points · 1 day ago

          More so from the corporate proprietary ones, no? At least I hope those are the only cases. The open-source ones suggest really useful things the proprietary ones do not. Now, I don’t rely on open-source AI, but they are definitely better.

          • SSUPII@sopuli.xyz · 3 points · 20 hours ago

            The corporate models are actually much better at this, due to the heavy filtering built in. The claim that a model generally encourages self-harm is just a lie, which you can disprove right now by pretending to be suicidal on ChatGPT: you will see it adamantly push you to seek help.

            The filters and safety nets can be bypassed no matter how hard you make them, and it is the reason why we got some unfortunate news.

        • whiwake@sh.itjust.works · 8 points (14 downvotes) · 1 day ago

          Real therapy isn’t always better. At least there you can get drugs. But neither is a guarantee to make life better—and for a lot of people, life isn’t going to get better anyway.

          • CatsPajamas@lemmy.dbzer0.com · 6 points (3 downvotes) · 21 hours ago

            Real therapy is definitely better than an AI. That said, AIs will never encourage self-harm without significant gaming.

            • whiwake@sh.itjust.works · 4 points (2 downvotes) · 17 hours ago

              AI “therapy” can be very effective without the gaming, but the problem is most people want it to tell them what they want to hear. Real therapy is not “fun” because a therapist will challenge you on your bullshit and not let you shape the conversation.

              I find it does a pretty good job with pro and con lists, listing out several options, and reframing situations. I have found it very useful, but I have learned not to manipulate it, or its advice just becomes me convincing myself of a thing.

            • triptrapper@lemmy.world · 2 points (2 downvotes) · 19 hours ago

              I agree, and to the comment above yours: it’s not because it’s guaranteed to reduce symptoms. There are many ways in which talking with another person is good for us.

      • Scolding7300@lemmy.world · 9 points · 1 day ago

        Advertise drugs to them, perhaps, or some other sort of taking advantage, if this sort of data is in the hands of an ad network, that is.

      • Scolding7300@lemmy.world · 3 points · edited · 11 hours ago

        Depends on how you do it. If you’re using a 3rd party service then the LLM provider might not know (but the 3rd party might, depends on ToS and the retention period + security measures).

        Ofc we can all agree certain details shouldn’t be shared at all. There’s a difference between talking about your resume and leaking your email there, and suicide stuff, where you share info that makes you really vulnerable.

    • Halcyon@discuss.tchncs.de · 5 points · 20 hours ago

      But imagine the chances for your own business! Absolutely no one will steal your ideas before you can monetize them.

  • NuXCOM_90Percent@lemmy.zip · 9 points · 18 hours ago

    Okay, hear me out: How much of that is a function of ChatGPT and how much of that is a function of… gestures at everything else

    MOSTLY joking. But I had a good talk with my primary care doctor at the bar the other week (only kinda awkward) about how she and her team have had to restructure the questions they use to screen for depression and the like, because… fucking EVERYONE is depressed and stressed out, but for reasons that we “understand”.

  • ChaoticNeutralCzech@feddit.org · 14 points · edited · 20 hours ago

    The headline has two interpretations and I don’t like it.

    • Every week, 1M+ users bring up suicide
      • likely correct
    • There are 1M+ long-term users who each bring up suicide at least once every week
      • my first thought
    • atrielienz@lemmy.world · 17 points · 18 hours ago

      My first thought was “Open AI is collecting and storing the metrics for how often users bring up suicide to ChatGPT”.

      • T156@lemmy.world · 3 points · 10 hours ago

        That would make sense if they were doing something like tracking how often, and in which categories, their moderation filter is triggered.

        Just in case an errant update or something causes the statistic to suddenly change.
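
        A sketch of what that kind of tracking could look like (all names hypothetical; nothing here is OpenAI’s actual telemetry): count filter triggers per category per week, and flag any week whose count jumps sharply, such as after an errant update:

        ```python
        from collections import defaultdict

        # Hypothetical moderation-trigger telemetry; not OpenAI's real system.
        def weekly_counts(events):
            """events: iterable of (week, category) pairs -> {category: {week: count}}."""
            counts = defaultdict(lambda: defaultdict(int))
            for week, category in events:
                counts[category][week] += 1
            return counts

        def sudden_change(counts, category, week, factor=2.0):
            """Flag a week whose trigger count is at least `factor` times the prior week's."""
            prev = counts[category].get(week - 1, 0)
            cur = counts[category].get(week, 0)
            return prev > 0 and cur >= factor * prev

        events = [(1, "self-harm")] * 10 + [(2, "self-harm")] * 25
        counts = weekly_counts(events)
        print(sudden_change(counts, "self-harm", 2))  # True: 25 >= 2 * 10
        ```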