• Rhaedas
    8 · 10 months ago

    An example of the misalignment problem. Humans and the AI both agreed on the stated purpose (generate a recipe); the AI just had some deeper goals in mind as well.

      • Rhaedas
        1 · 10 months ago

        Not even stupid, just badly trained for that purpose. It’s no different than an LLM asked to write code that gets most of it right but flubs a subroutine. Misalignment doesn’t imply bad or evil; it’s just doing what it thinks the goal really is while we’re ignorant of the results.

    • MxM111
      -1 · 10 months ago

      If I asked you to create a drink using Windex and Clorox, would you do any differently? Do you have an alignment problem too?

      • Rhaedas
        1 · 10 months ago

        Yes, I know better, but ask a kid that and perhaps they’d do it. An LLM isn’t thinking, though; it’s reproducing its training through probabilities. And btw, yes, humans can be misaligned with each other, having their own goals underneath the common ones. Humans think, though…well, most of them.

  • ChrisostomeStrip
    6 · 10 months ago

    Wow, people purposely entered non-edible ingredients and the results are weird? Who could have expected that.

  • Link.wav [he/him]
    3 · 10 months ago

    Gotta love how a spokesperson for the company expressed their disappointment that people are misusing the tool, vs. anyone being disappointed in the company for letting the AI tool go live when it is clearly not ready for prime time.