• MentalEdge@sopuli.xyz

    Obviously.

    And like hallucinations, it’s undesired behavior that proponents of LLMs will need to “fix” (a practical impossibility as far as I’m concerned, like unbaking a cake).

    But how would you use words to explain the phenomenon?

    “LLMs hallucinate and lie” is probably the shortest description that most people will be able to grasp.

    • zarkanian@sh.itjust.works

      Except that “hallucinate” is a terrible term. A hallucination is when you perceive something that doesn’t exist. What AI is doing is making things up; i.e. lying.

      • MentalEdge@sopuli.xyz

        Yes.

        Who are you trying to convince?

        “What AI is doing is making things up.”

        This language also credits LLMs with an implied capacity for thought that they don’t have.

        My point is that we literally can’t describe their behaviour without using language that makes it seem like they do more than they do.

        So we’re just going to have to accept that discussing it will come with a bunch of asterisks that a lot of people are going to ignore, and which many will actively try to hide in an effort to hype up the possibility that this tech is a stepping stone to AGI.