Did nobody really question the usability of language models in designing war strategies?
How about a nice game of chess?
It’s better than you at chess:
It’s better than you at chess
Did you actually watch the video? It only “played” well during the opening, while there were still existing games to draw on. Then it proceeded to make some illegal moves and completely broke down in the endgame. Also, none of the explanations it gave for its moves made any sense.
I did; it played very well in the middle game, already out of book.
I see we have 5 GMs who disagree
Of course. An LLM is simply copying the behavior of most people, and most people would resort to that as well.
And they probably trained it on Civ, and Gandhi was chosen as the role model.
Makes a lot of sense that an AI would nuke disproportionately. For an AI, if you do not set a value for something, it is worth zero. This is actually the base problem for AI: alignment.
For a human, there’s a mushy vagueness about it, but our cultural upbringing says that even in war it’s bad to kill indiscriminately. We also value future humans who do not yet exist: we recognize that after the war is over, people will want to live in the nuked place, and they can’t if it’s radioactive. And there’s a self-image issue, where we want to be seen as good people by our peers and by the history books. There is value there that is overlooked by programmers.
An AI will trade infinitely many things worth 0 for a single thing worth 1. So if nukes increase its win percentage by 0.1%, and it doesn’t face the deterrent of being labeled history’s greatest monster, it will nuke as many times as it can.
That explanation is obviously based on traditional chess AI. This is about role-playing with chatbots (LLMs). Think SillyTavern.
LLMs are made for text production, not tactical or strategic reasoning. The text that LLMs produce favors violence, because the text that humans produce (and want) favors violence.
Especially if its training material included comments from the early ’00s. There were a lot of “nuke it from orbit” and “glass parking lot” comments about the Middle East in the wake of 9/11.
And with LLMs being the glorified text predictors that they are, you could probably adjust the wording of the question to get the opposite result. Like, “what should we do about the Middle East?” might get a “glass parking lot” response, while “should we turn the Middle East into a glass parking lot?” might get a “no, nuking the Middle East is a bad idea and inhumane,” because that’s how those conversations (using the term loosely) would go.
The text that LLMs produce favors violence, because the text that humans produce (and want) favors violence.
That’s not necessarily true; there is a lot of violent fiction.
Get Matthew Broderick on the horn!
AI is Civilization’s Gandhi.
…how shocking
Ever heard of Skynet, anybody?
How about WOPR?
DUN DUN DUN - DUN DUNN
Whaaat? Robots don’t just have their own inherent sense of morality for whatever reason???
I am shocked—shocked!—to find out that a technology performs poorly when applied to a task it’s completely unsuited for!
Did nobody really question the usability of language models in designing war strategies?
They got some nice clickbait out of it. And that’s how dumb af ideas turn into smart career moves.
I hope no one is coming away with the idea that this about something the military is actually doing.
It’s a WAR GAME. Emphasis on war and game. Do you chuckle fucks think wargame players should emphasize kumbaya sing-alongs or group therapy sessions in their games?
And a language model is absolutely unsuited for this task, just as much as a lawnmower or a float needle.
She’s just like me!
AI doesn’t take half measures
Do you want to play a game?