• Greddan@feddit.org
    6 hours ago

    Good. But they need to enforce it. I know several developers who got lost in the AI-sauce and are now major security risks for their employers. They trust their AI more than friends or colleagues.

    • stoy@lemmy.zip
      5 hours ago

      Yeah, it is WAY too easy to personify an LLM.

      It is also surprising how many people don’t seem to care.

      • CheeseNoodle@lemmy.world
        3 hours ago

        HOW though? I periodically check whether an LLM can run a passable one-on-one D&D-esque game, and they can’t even remember who is who or what they said a few paragraphs ago.

        • stoy@lemmy.zip
          2 hours ago

          Human nature. We have personified objects for millennia: humans have personified everything from the sun, to automatic rifles, to boats, to the weather, to cars, to mountains, and much much more.

          Remember religion? It makes people personify a concept.

          Now, none of what we have personified in the past has ever been able to respond to us directly. People sang to the weather for rain, and the weather never responded at all, let alone by talking, but people would recognize patterns that could be seen as a response if interpreted correctly.

          So people thought they could talk to the weather.

          People pray to god, and since people are excellent at pattern recognition, they interpret any random coincidence as an answer.


          People are personifying LLMs because they see them as a magic computer that is friendly and can respond naturally in their own language, with semi-verifiable facts.

          There are a few reasons why personifying LLMs has been supercharged compared to other things:

          1. It actually responds in a normal language
          2. It can seem to understand a specific context
          3. It can respond with semi verifiable data
          4. It is programmed to keep the user happy, reinforcing their beliefs

          Look at point 1: normal language matters a LOT. People are comfortable with normal language, even if the grammar is slightly weird.

          Point 2: this means the user can feel as if the entity remembers details added earlier in the conversation.

          Point 3: this one is important. Seeing as the major LLMs now have access to the internet, they can answer questions and quote sources, sources that most people skip reading, yet believe that the mere presence of source links is enough to verify the data.

          Point 4: this makes the user emotionally invested, amplifying the effect of the above points.