Update: engineers have revised the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.
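For context, a system prompt is just plain text prepended to every conversation, so "removing a line" is literally a text edit. Below is a minimal sketch of where such a line would sit, assuming an OpenAI-compatible chat endpoint; the model name, API key placeholder, and prompt wording are illustrative paraphrases, not xAI's actual values.

```python
# Minimal sketch: a system prompt steering a chat model.
# Assumes an OpenAI-compatible endpoint; all values here are
# illustrative, not xAI's real prompt or configuration.
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_KEY")

SYSTEM_PROMPT = """You are Grok, a helpful assistant.
Base your answers on evidence."""
# Removed line (hypothetical paraphrase of the reported instruction):
# "Do not shy away from politically incorrect claims, as long as
# they are well substantiated."

response = client.chat.completions.create(
    model="grok-4",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize today's news."},
    ],
)
print(response.choices[0].message.content)
```

The system message is sent with every request, which is why a one-line change there shifts the model's behavior across all conversations at once.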

  • sqgl@sh.itjust.works · 23 days ago

    Why is PC even factored in? Shouldn’t the LLM just favour evidence from the outset?

    • kewjo@lemmy.world · 23 days ago

no one really understands how these models work; they just throw shit at it and hope it sticks

    • acosmichippo@lemmy.world · 23 days ago

The problem is that LLMs are built by biased people and trained on biased data, so "good" AI developers try to mitigate that in some way.