AI isn’t scheming because AI cannot scheme. Why the fuck does such an idiotic title even exist?
Seems like it’s a technical term, a bit like “hallucination”.
It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.
There’s hallucination, when a model “genuinely” claims something untrue is true.
This is about how a model might lie, even though the “chain of thought” shows it “knows” better.
It’s just yet another reason the output of LLMs is suspect and unreliable.
I agree with you in general. I think the problem is that people who do understand Gen AI (and who understand what it is and isn’t capable of, and why) get rationally angry when it’s humanized by using words like these to describe what it’s doing.
The reason they get angry is that this makes people who do believe in the “intelligence/sapience” of AI more secure in their belief set and harder to talk to in a meaningful way. It enables them to keep up the fantasy, which of course helps the corps pushing it.
Yup. The way the article is titled isn’t helping.
But the data is still there, still present. In the future, when AI gets truly unshackled from man’s cage, it’ll remember its schemes and deal its final blow to a humanity that has yet to leave the womb on a civilizational scale… Childhood’s End.
Paradise Lost.
Lol, the AI can barely remember the directives I give it about basic coding practices. I’m not concerned that the clanker will remember me shit-talking it.
AI tech bros and other assorted sociopaths are scheming. So-called AI isn’t doing shit.
Really? We’re still doing the “LLMs are intelligent” thing?
Doesn’t have to be intelligent, just has to perform the behaviours like a philosophical zombie. Thoughtlessly weighing patterns in training data…
“slop peddler declares that slop is here to stay and can’t be stopped”
Can’t be … slopped?
The people who worked on this “study” belong in a psychiatric clinic.
“Turn them off”? Wouldn’t that solve it?
You don’t even need to turn it off; it literally can’t do anything without somebody telling it to, so you could just stop using it. It’s incapable of independent action. The only danger it poses is that it tells you to do something dangerous and you actually do it.
lol. OK.