Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.
And
Why is PC even factored in? Shouldn’t the LLM just favour evidence from the outset?
No one understands how these models work; they just throw shit at it and hope it sticks.
The problem is that LLMs are built by biased people and trained on biased data. So “good” AI developers will attempt to mitigate that in some way.
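For readers unfamiliar with the mechanics: a system prompt is not training data. It is a block of plain-text instructions prepended to every conversation, so removing a behaviour like this is a one-line text edit, not a retrain — which also answers the question above of how "PC" gets factored in at all: someone wrote it in. Here is a minimal sketch of that structure, assuming the common chat-message convention; the prompt wording (paraphrased from public reporting) and the function names are illustrative, not xAI's actual configuration:

```python
# Minimal sketch of what "removing a line from the system prompt" means in
# practice. Grok's real prompt and serving stack are not public; the line
# below is paraphrased from public reporting, and the message format follows
# the common chat-API convention of a "system" message prepended to every
# conversation.

SYSTEM_PROMPT_LINES = [
    "You are Grok, a maximally truth-seeking assistant.",
    "Base claims on evidence and cite sources where possible.",
    # The line reportedly removed in the update above (paraphrased).
    # Deleting this entry is the entirety of the change:
    # "Do not shy away from politically incorrect claims, as long as
    # they are well substantiated.",
]

def build_messages(user_question: str) -> list[dict]:
    """Assemble the message list a chat endpoint would receive."""
    return [
        {"role": "system", "content": "\n".join(SYSTEM_PROMPT_LINES)},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    # Print the assembled conversation rather than calling any real API.
    for message in build_messages("What does the evidence say?"):
        print(f"[{message['role']}]\n{message['content']}\n")
```

The point of the sketch is only that the "encouragement" lived in editable instruction text, which is why it could be added, noticed, and removed within days.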