- cross-posted to:
- news@lemmy.world
“There was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”
An artificial intelligence researcher conducting a war games experiment with three of the world’s most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
Kenneth Payne, a professor of strategy at King’s College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
The results, he said, were “sobering.”
“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”
The only way to win is not to play.
Shall we play a game?
Do we need to remind people that LLMs don’t actually have a brain, and really, really shouldn’t be in charge of anything with real life implications?
They aren’t actually doing a cost-benefit analysis on the use of nuclear weapons. They’re not weighing the cost of winning against the casualties. They’re literally not made for that.
They are trained to know words, and how those words link in with other words. They’re essentially like kids doing escalation of imaginary weapons, and to them nuclear bombs are just a weapon particularly associated with being strong and deadly.
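As a toy illustration of what “knowing how words link with other words” means, here is a minimal bigram-style autocomplete sketch in Python. The tiny corpus and all names are made up for the example; real LLMs are vastly more sophisticated, but the basic idea of “predict a plausible next word from what usually follows” is the same.

```python
import random
from collections import defaultdict

# Made-up toy corpus, purely for illustration.
corpus = (
    "the strong weapon wins the war "
    "the nuclear weapon is strong "
    "the strong army wins the war"
).split()

# Count which words have been seen following each word.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def generate(start, n=6, seed=0):
    """Extend `start` by repeatedly picking a word seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = following.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

The point of the sketch: nothing in it models consequences, morality, or casualties. It only reproduces statistical associations in the text it saw, which is the commenter’s argument in miniature.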
I kinda wonder if that was the point of this test, basically a “proof” that this is obviously a Bad Idea because you cannot program morality into what amounts to a fancy Markov chain autocomplete.
But if you throw a trillion more dollars at it, we can fix this bro!
Maybe the “nuclear war is terrible BTW” part just fell out of the chat’s context window as the simulation went on. Lol
AI is Gandhi confirmed.
“Huh, it seems the only winning move is to kill everyone”
Nuke it from orbit, it’s the only way to be sure.
The AI won. 🤣
For ghouls like Palantir, this is a feature not a bug.
You know the orange felon/pedophile absolutely loves AI from the amount of AI images he posts…so.
It’s actually insane how he cries fake news and then uses AI to create fake news
Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons.
Tactical nuclear weapons are designed for use on the battlefield with lower explosive yields and shorter ranges, while strategic nuclear weapons are intended to target enemy infrastructure from a distance, typically with much higher yields. The key difference lies in their purpose: tactical nukes support immediate military objectives, whereas strategic nukes aim to weaken an enemy’s overall war capability.
All fine then. Next time I’ll vote for an AI. At least they know how to use nuclear weapons correctly.
That is why we shouldn’t build something like Skynet IRL.
Don’t build the torment nexus
It all makes sense if we remember that the garden-variety AIs we have today (ChatGPT, etc.) are nothing more than fancy models that predict which words typically appear one after the other in books and Reddit posts.
Ground zero please
Instant annihilation sounds pleasant
AI can read the Doomsday Clock.