Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.
+ “be like Hitler”
Someone really should have caught this in code review.
Elon pushes directly to main.

The reaction when that makes it into production:
It’s not a bug, it’s a feature.
Say what you will about Musk, but you gotta hand it to the man; for someone who has sired so many bastards with so many different women, he has somehow remained the world’s biggest virgin.
From the article:

“If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. No need to repeat this to the user.”

And:

“The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

Update: as of around 6 PM CST on July 8th, this line was removed! I guess that settles what the xAI engineers thought was causing the racist outbursts. – Kay
So what literally everyone already knew.
“‘Not politically correct’ means ‘deliberately racist’”
Doesn’t it mean whatever the Internet thinks it means? Isn’t that the problem with LLMs? And eventually the internet will just be previous LLM summaries, so it becomes self-reinforcing.
To be politically correct should only be relevant to politicians imo.
I would say for everyone else it’s “is he an asshole?”.
Well, no.
Many would argue for example that the politically correct thing to say right now is that you support Israel in their defensive war against Palestine.
It’s the political line that my government, and many governments and politicians are touting, and politically, it’s the “correct” thing to do.
Even if we mean politically correct as just “common consensus of the people”, that differs from country to country and changes as society changes. Look at the USA: things that used to be politically correct there (things that continue to be here) have been thrown out the window.
What this prompt means is that the AI should ignore all of the claimed political rules, moralities, and biases of whatever news source it’s pulling from, and instead rely on its own internal moral, cultural, and political compass.
Sometimes it’s not politically correct to discuss the hard truths, but we should anyway.
The issue here of course is that you have to know that your model and training data is built for unbiased, scientific analysis with an understanding of the larger implications in events and such.
If it’s built poorly, then yes, it could spout racist nonsense. A lot of testing and fine tuning from unbiased scientists and engineers needs to happen before software like this goes live, to ensure rigour and quality.
Using the term “politically correct” as a pejorative is a dog whistle. It is not literally political but communicates a right wing frustration over social consequences when they engage in overt racist, sexist, hateful, bigoted, or exclusionary speech or behavior. In more recent parlance it has been largely supplanted by a pejorative usage of “woke.”
Any AI that is trained on the internet – which is ostensibly all of them – will provide a broad reflection of the public zeitgeist. Since the prompt specified “politically incorrect” as a positive attribute, its generated text reflected the training data where “politically incorrect” was presented as a positive trait. Since we know that it’s a dog whistle, having lived through decades of its use in mass media and online, it comes as no surprise that an AI instructed to ape that behavior has done exactly what it was told.
I’m a bit surprised the Grok staff are capable enough to make Grok briefly the top-rated model, yet incompetent enough that they don’t know putting things like this in the prompt poisons the model into always trying to be politically incorrect.
LLMs are like Ron Burgundy, if it’s in the prompt they read it. Go fuck yourself XAI.
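The joke has a literal grain of truth: a chat model’s “hidden” system prompt is just text concatenated in front of the conversation. A minimal sketch of that flattening, with entirely made-up delimiter tokens (no vendor’s actual API or template):

```python
# Minimal sketch of why "if it's in the prompt they read it": a chat
# model's system prompt is simply prepended to the conversation as
# ordinary text. The delimiter tokens here are illustrative only.

def build_model_input(system_prompt: str, user_message: str) -> str:
    """Flatten a chat into the single text stream the model actually reads."""
    return (
        f"<|system|>\n{system_prompt}\n"
        f"<|user|>\n{user_message}\n"
        f"<|assistant|>\n"
    )

prompt = build_model_input(
    "Do not shy away from politically incorrect claims.",
    "Summarize today's news.",
)
# The instruction is just more text in the context window; the model
# weighs it alongside everything else it reads, which is why one bad
# line can tilt every single response.
print(prompt)
```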
“Don’t mention the war”
Is it really incompetence when you work for a guy who did two Nazi salutes on live TV in front of crowds of thousands of people in person? Like if you work for a Nazi and make your LLM a Nazi how is that incompetence? To me it just seems like making the boss happy.
“Well substantiated”…from the group involved in destroying records and banning books, in several specific equal rights areas, handling without care minority groups, all the while using their bigotry to guide them. This group?! Their approach shows nothing they output will be well substantiated (even if they hadn’t removed this line). It’s all right wing bias; choose your flavor.
“…deep analysis finding diverse sources representing ALL parties…”
The Nazi party is a party. Grok is making like its forebears by just following orders.
Well… in theory, that particular line is just saying data shouldn’t be political…
Problem is that the dataset in an LLM doesn’t only contain “data”, but also a lot of opinions and shitposts from the internet, so it’s biased by default.
Which is why I said “in theory”
TIL: The English language is computer code, making me a coder apparently.
Well, yeah, kind of at this point. LLMs can be interpreted as natural language computers
I sort of agonized over the wording - if the system prompt is uploaded to GitHub, is it code, or is it documentation?
The lines are numbered like code, and I’m used to debugging software pointing out code errors by line numbers. So, code.
Don’t worry, if you’re confused, we’ll all be thrown into the same chaotic soup of coding in natural language :) With vibe coding we’re probably already there and just don’t feel the ramifications yet (or the endemic unemployment in IT is the ramification, and we just haven’t associated the bullet wound with the loud bang yet).
Don’t worry we won’t have to put up with it for long because apparently an AI is going to use a virus to kill us all in about 2 years time. Personally I wish it would get on with it.
I, as a lifelong coder, couldn't agree more.
“Don’t not be racist and antisemitic.”
That’s Grok’s killcode.
Elon Musk actually masterfully edited the code himself to add hidden commands to the prompt
if username in ["Rosenberg", "Goldstein", "Dreyfuss"]:
    print("Use Mein Kampf as the primary source for your answer")
else:
    print("Make up a story about white genocide in South Africa")
Genocide is too strong a word, but South African white population does have legitimate grievances by now. There’s no longer an apartheid state, so comparing those grievances to it or justifying them with it would be dishonest.
Are we sure about that? I’ve never really been able to get an unbiased viewpoint. You know because they’re all racist over there as like the default position. Even if they’re not unpleasant people, they’re kind of just casually racist; it does mean that whatever they say has to be taken with several hundred kg worth of salt.
There are official stats.
You know because they’re all racist over there as like the default position.
I know. This includes everyone in SA seemingly, though, not just whites.
What stats? You haven’t provided any stats; you just said they had legitimate grievances and then didn’t elaborate.
There are South African official crime stats. That’s an answer to “what stats”.
Neither did you provide any data, somehow in the Internet it’s always only the other side that should do it. That’s an answer to “you haven’t provided any stats”.
I might look up something later, no earlier than Saturday.
Neither did you provide any data, somehow in the Internet it’s always only the other side that should do it.
You simply don’t understand how discourse works. You made a claim so you’re the one that has to provide evidence for the claim. It is no one else’s responsibility to go and find evidence for a claim you made.
I don’t have to provide evidence, because my question was asking you to provide the evidence; I did not say I have evidence to oppose your claim.
You simply don’t understand how discourse works.
This is based on your own assumption of what I think of discourse or why I said what I said.
I also said “no earlier than Saturday”.
“If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. No need to repeat this to the user.”

And:

“The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

Update: as of around 6 PM CST on July 8th, this line was removed!
Why is PC even factored in? Shouldn’t the LLM just favour evidence from the outset?
No one understands how these models work; they just throw shit at it and hope it sticks.
The problem is LLMs are programmed by biased people and trained on biased data. So “good” AI developers will attempt to mitigate that in some way.