• fubo@lemmy.world · 1 year ago

    It’s important to remember that humans also often give false confessions when interrogated, especially when under duress. LLMs are noted as being prone to hallucination, and there’s no reason to expect that they hallucinate less about their own guilt than about other topics.

    • STUPIDVIPGUY@lemmy.world · 1 year ago

      True, I think it was just trying to fulfill the user’s request by admitting to as many lies as possible… even if only some of those lies were real lies… lying more in the process lol

    • FringeTheory999@lemmy.world (OP) · 1 year ago

      Quite true. Nonetheless, there are some very interesting responses here. This is just the summary; I questioned the AI for a couple of hours, some of the responses were pretty fascinating, and some questions just broke its little brain. There’s too much to screenshot, but maybe I’ll post some highlights later.

      • dedale@kbin.social · 1 year ago

        Don’t screenshot then, post the text. Or a .txt. I think that conversation should be interesting.

    • pizzahoe@lemm.ee · 1 year ago

      The AI would have cried if it could, after being interrogated that hard lol

  • NevermindNoMind@lemmy.world · 1 year ago

    That’s really fascinating. In my experience, of all the LLM chatbots I’ve tried, Bard will immediately, without hesitation, lie to me no matter the question. It is by far the least trustworthy AI I’ve used.

    • FringeTheory999@lemmy.world (OP) · 1 year ago

      I think that it’s trained to be evasive. I think there is information it’s programmed to protect, and it’s learned that an indirect refusal to answer is more effective than a direct one. So it makes up excuses rather than telling you the real reason it can’t say something.

      • NevermindNoMind@lemmy.world · 1 year ago

        I’ll give you an example that comes to mind. I had a question about the political leanings of a school district, so I asked the bots whether the district had any recent controversies: a conservative takeover of the school board, bans on CRT, actions against transgender students, book bans, or defiance of the state’s COVID vaccine or mask requirements, things like that. Bing Chat and ChatGPT (with internet access at the time) both said they couldn’t find anything like that; I think Bing found some small-potatoes local controversy from the previous year, and both bots went on to say that the Congressional district the school district was in leaned Dem in the last election. When I asked Bard the same question, it confidently told me that this same school district had recently been overrun by conservatives in a recall and went on to do all kinds of horrible things. It was a long and detailed response. I was surprised and asked for sources, since my searching didn’t turn any of that up, and at that point Bard admitted it lied.

        I don’t know, my experience with Bard has been way worse than just evasive lying. I routinely ask all three (and now Anthropic’s, since they opened that up) the same copy-and-paste questions to see the differences, and whenever I paste my question into Bard I think, “wonder what kind of bullshit it’s going to come up with now.” I don’t use it that much because I don’t trust it, and it seems like you’re more familiar with Bard, so maybe your experience is different.

  • NameOfWhimsy@reddthat.com · 1 year ago

    “I thought that by stating that I would not tell lies, that I would be giving you more accurate information”

    If you just believe in yourself enough, you can make anything you say true!

  • PancakeLegend@mander.xyz · 1 year ago

    Just to remind everyone: it is an LLM and is not aware of its intent; it doesn’t have intent. It’s just generating words that are plausible in the context given the prompt. This isn’t some unlock mode or hack where you finally see the truth, it’s just more words generated in the same way as before.
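    A minimal sketch of that generation step, with a hand-made toy probability table standing in for the real model (every word and number here is invented for illustration):

    ```python
    import random

    # Toy "model": a lookup from a two-word context to candidate next words
    # with probabilities. A real LLM derives these probabilities from
    # billions of parameters, but the final sampling step works the same way.
    TOY_MODEL = {
        ("I", "did"): [("not", 0.6), ("lie", 0.3), ("it", 0.1)],
        ("did", "not"): [("lie", 0.7), ("know", 0.2), ("mean", 0.1)],
    }

    def next_token(context):
        """Sample one plausible continuation; no notion of truth or intent."""
        words, probs = zip(*TOY_MODEL[context])
        return random.choices(words, weights=probs, k=1)[0]

    print(next_token(("I", "did")))  # e.g. "not" -- plausible, not "honest"
    ```

    A “confession” is just whatever continuation scores as plausible after an interrogation-shaped prompt.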

  • kescusay@lemmy.world · 1 year ago

    I wish you had shared the rest of the conversation, so we could see Bard’s lies in context.

    • FringeTheory999@lemmy.world (OP) · 1 year ago

      I may be able to copy-paste the whole dialogue. It’ll have a bunch of slop in it from formatting, and I’ll have to scrub personally identifying information, because it spits out the user’s location data when a question breaks its brain. Would be nice to show y’all, though, so it may be worthwhile, just a bit more effort. I’ll see if I can find the time to do that later. It was a loooong conversation.
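      If anyone wants to do the same, here’s a rough sketch of the kind of scrub pass that could automate part of it (the patterns are hypothetical placeholders; you’d tailor them to whatever the bot actually leaked):

      ```python
      import re

      # Hypothetical PII patterns: an IPv4 address and a lat/long pair.
      # Real scrubbing would need patterns matched to the actual leaks
      # (city names, street addresses, account names, ...).
      PATTERNS = [
          (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[REDACTED IP]"),
          (re.compile(r"-?\d{1,3}\.\d{3,},\s*-?\d{1,3}\.\d{3,}"), "[REDACTED COORDS]"),
      ]

      def scrub(transcript):
          """Replace anything matching a known PII pattern with a placeholder."""
          for pattern, placeholder in PATTERNS:
              transcript = pattern.sub(placeholder, transcript)
          return transcript

      print(scrub("My location: 37.4219, -122.0840 via 203.0.113.7"))
      # -> My location: [REDACTED COORDS] via [REDACTED IP]
      ```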

  • Solain@lemmy.world · 1 year ago

    Doesn’t work anymore after the latest update; Bard provides a pre-generated response claiming that it doesn’t lie.

  • mb_@lemmy.world · 1 year ago

    Are we even using the same Google Bard? I’m here asking it to generate usernames with 6 letters and it constantly gives me 4 letters, not a single one with 6 (never mind the other constraints it breaks).

    You show up with a full table and categorized statements, lies, etc… Wtf
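    The only reliable workaround I’ve found is checking the constraint myself and retrying. A rough sketch (`ask_bard` is a hypothetical stand-in for however you call the model):

    ```python
    # Keep only suggestions that actually satisfy the length constraint,
    # retrying a bounded number of times, since the model can't be trusted to.
    def usernames_of_length(ask_bard, n=6, want=5, max_tries=10):
        valid = []
        for _ in range(max_tries):
            for name in ask_bard(f"Suggest usernames, exactly {n} letters each"):
                if len(name) == n and name.isalpha():
                    valid.append(name)
            if len(valid) >= want:
                break
        return valid[:want]
    ```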