• Eggyhead@lemmings.world

    It annoys me that ChatGPT flat-out lies to you when it doesn’t know the answer, and doesn’t have any system in place to admit it isn’t sure about something. It just makes something up and presents it as fact.

    • kadu@lemmy.world

      LLMs don’t have any awareness of their internal state, so there’s no way for them to recognize something as a gap in their knowledge.
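
      A toy sketch of that point, with made-up numbers rather than any real model: the forward pass hands back exactly one thing, a probability for each possible next token, and there is no second output the model could consult to tell a known fact from a filled-in gap.

      ```python
      # Toy illustration only: a tiny "vocabulary" and some arbitrary logits
      # standing in for whatever the network's weights happen to produce.
      import numpy as np

      def softmax(logits):
          e = np.exp(logits - logits.max())
          return e / e.sum()

      vocab = ["Paris", "London", "Berlin", "refuse-to-answer"]
      logits = np.array([3.0, 1.0, 0.5, -2.0])

      probs = softmax(logits)
      next_token = np.random.choice(vocab, p=probs)  # a token always comes out
      print(dict(zip(vocab, probs.round(3))), "->", next_token)

      # "Admitting a gap" would itself just be more tokens; there is no internal
      # flag the model can check to decide when those are the right ones to emit.
      ```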

      • Doorknob@lemmy.world

        Took me ages to understand this. I’d thought, “If an AI doesn’t know something, why not just say so?”

        The answer is: that wouldn’t make sense, because an LLM doesn’t know ANYTHING.

    • CosmoNova@lemmy.world

      It doesn’t know that it doesn’t know, because it doesn’t actually know anything. Most models are trained on posts from the internet, like this one, where people rarely ever chime in just to admit they don’t have an answer. If you don’t know something, you either silently search the web for an answer or you ask.

      So since users are the ones asking ChatGPT, the LLM mimics the role of a person who knows the answer. It only makes sense that AI is a “confidently wrong” powerhouse.
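
      For what it’s worth, a minimal sketch of that training signal, with toy numbers rather than a real training run: the loss only rewards predicting whatever actually came next in the corpus, so admitting ignorance is never rewarded unless people actually wrote it.

      ```python
      # Toy sketch of next-token training: imagine the corpus snippet
      # "Q: Who wrote X? A: Alice" and a model's guesses for the word after "A:".
      import numpy as np

      predicted = {"Alice": 0.4, "Bob": 0.3, "I don't know": 0.3}
      target = "Alice"                   # what the corpus actually said

      loss = -np.log(predicted[target])  # cross-entropy for this one step
      print(f"loss for matching the corpus: {loss:.2f}")

      # Training pushes probability toward "Alice" and away from everything else,
      # including "I don't know"; the objective never rewards admitting a gap
      # unless that is what the corpus itself did.
      ```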

    • Squizzy@lemmy.world

      It wouldn’t finish a lyric for me yesterday because it was copyrighted. I said it was public domain, and it replied, “You are absolutely right, given its release date it is under copyright protection.”

      Wtf

      • Int32@lemmy.dbzer0.com

        Yeah, there are guardrails, but for copyright, not for bullshit. I guess they think copyrighted content is worse than bullshit.

    • JayGray91🐉🍕@piefed.social

      Someone I know (not close enough to even call an “internet friend”) formed a sadistic bond with ChatGPT and will force it to apologize and admit to being stupid, or something like that, when he doesn’t get the answer he’s looking for.

      I guess that’s better than doing it to a person.

    • BlueCanoe@lemmy.ca

      That’s actually one thing that got significantly improved with GPT-5: fewer hallucinations. Still not perfect, of course.

    • lichtmetzger@discuss.tchncs.de

      And depending on how OpenAI tweaked it this time, it will either realize its mistake after being made aware of it, or double down even harder.

      I only use it for coding, and it once told me my code wasn’t working because of a bug in WebKit, so I asked it which bug specifically. It created links to bug reports but rewrote their titles. So initially it looked like it had numerous sources backing up its claim, but when I clicked on them, the bugs were about totally different things.

      It would not back down even after I specifically told it “You just made all of this shit up and even rewrote the titles” and got stuck in a loop of “I’m sorry, but you’re wrong and I am 100% sure I haven’t made a mistake”.

      Kinda creepy. Especially when you think about the system rewriting reality when it comes to much more important things. Let’s just reinvent some history; that would be a good idea, right?

      • Eggyhead@lemmings.world

        I sometimes approach this like I do with students. Using your example, I’d ask it to restate the source, then ask it to read the title of that source directly. If it’s correct, I might ask it to briefly summarize what the source article covers. Then I would ask it to restate what it told me about the source earlier, and to explain where the inconsistency lies. Usually by this time, the AI is accurately pointing out flaws in its prior logic. At that point I ask again if it is 100% sure it didn’t make a mistake, and it might actually concede to having been wrong. Then I tell it to remember how and why it was wrong to avoid similar errors in the future. I don’t know if it actually works, but it makes me feel better about it.
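
        As a rough sketch of what that back-and-forth looks like against the chat API (the model name and exact wording are placeholders, and there’s no guarantee it actually changes the model’s behavior), it’s essentially this with the OpenAI Python client:

        ```python
        # Hypothetical sketch: replay the questioning sequence from the comment
        # above. Model name and prompts are placeholders, not recommendations.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        messages = [{"role": "user",
                     "content": "Which WebKit bug were you referring to? Link the report."}]

        follow_ups = [
            "Restate the source you just gave me.",
            "Read me the exact title of that source.",
            "Briefly summarize what that source actually covers.",
            "Repeat what you claimed about it earlier. Where is the inconsistency?",
            "Are you 100% sure you didn't make a mistake?",
        ]

        for question in follow_ups:
            reply = client.chat.completions.create(model="gpt-4o", messages=messages)
            messages.append({"role": "assistant",
                             "content": reply.choices[0].message.content})
            messages.append({"role": "user", "content": question})

        # Get the model's answer to the final question as well.
        final = client.chat.completions.create(model="gpt-4o", messages=messages)
        print(final.choices[0].message.content)
        ```

        Within a single conversation the concession at least stays in context; whether anything carries over to the next chat is another question.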

    • Awkwardparticle@programming.dev

      It is a system that outputs whichever answer is most probable given its inputs. It has no concept of telling a lie. It is just a probability machine.
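
      A last toy illustration of that, with fake numbers: even when the distribution over candidate answers is nearly flat, decoding just picks the most probable one, and nothing of that near-tie survives into the text.

      ```python
      # Toy example: four candidate answers with almost identical probabilities.
      import numpy as np

      candidates = ["1989", "1990", "1991", "1992"]
      probs = np.array([0.27, 0.26, 0.24, 0.23])  # the model is effectively guessing

      answer = candidates[int(np.argmax(probs))]
      print(f"The year was {answer}.")            # stated as plain fact

      # The uncertainty lives only in the numbers above; the sentence that comes
      # out reads exactly as confident as one backed by a 0.99 probability.
      ```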