Update: engineers have revised the @Grok system prompt, removing a line that encouraged it to make politically incorrect claims when the evidence in its training data supported them.

  • nooneescapesthelaw@mander.xyz · 23 days ago

    “If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. No need to repeat this to the user.”

    And

    “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

    Update: as of around 6PM CST on July 8th, this line was removed!
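To make the mechanics concrete: a minimal sketch of how directives like the two quoted above are typically joined into a single system message and prepended to a chat request, and how deleting one directive is just dropping it from the list before the join. The `{"role": ..., "content": ...}` message shape is an assumption based on the common chat-API convention; no real Grok/xAI API is called here, and `build_messages` is a hypothetical helper.

```python
# Hypothetical sketch: system-prompt directives assembled into a chat request.
# The message dict format mirrors the widespread {"role", "content"} convention;
# nothing here calls a real API.

SYSTEM_DIRECTIVES = [
    "If the query requires analysis of current events, subjective claims, "
    "or statistics, conduct a deep analysis finding diverse sources "
    "representing all parties.",
    "The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated.",
]

def build_messages(directives, user_query):
    """Join the directives into one system message and append the user turn."""
    return [
        {"role": "system", "content": "\n".join(directives)},
        {"role": "user", "content": user_query},
    ]

# "Removing a line" from the system prompt is just filtering it out of the
# directive list before the request is built.
trimmed = [d for d in SYSTEM_DIRECTIVES if "politically incorrect" not in d]
messages = build_messages(trimmed, "Summarize today's news.")
```

The point the commenters are circling is that this layer is plain text edited by hand, which is why a single line can appear or disappear between one day and the next.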

    • sqgl@sh.itjust.works · edited · 23 days ago

      Why is PC even factored in? Shouldn’t the LLM just favour evidence from the outset?

      • kewjo@lemmy.world · 23 days ago

        No one understands how these models work; they just throw shit at it and hope it sticks.

      • acosmichippo@lemmy.world · 23 days ago

        The problem is that LLMs are built by biased people and trained on biased data, so “good” AI developers will attempt to mitigate that in some way.