Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.
Neither are our brains.
“Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”
― Peter Watts, Blindsight (fiction)
Starting to think we’re really not much smarter. “But LLMs tell us what we want to hear!” Been on Facebook lately, or Lemmy?
If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.
There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
It’s not that they may be deceived, it’s that they have no concept of what truth or fiction, mistake or success even are.
Our brains know the concepts and may fall for deceit without recognizing it, but we at least recognize that the concepts exist.
An AI generates content that blends material from its training data in a way consistent with extending the given prompt. It only seems to have a concept of lying or mistakes when the human injects that into the human half of the prompt. And it will “correct” a genuine mistake just as readily as it will “correct” something that was already right (unless the training data includes a lot of reaffirmation of the material in the face of such doubts).
An LLM can consume more input than a human could gather in multiple lifetimes and still be wonky in generating content, because it needs enough data to credibly blend content to extend every conceivable input. That’s why so many people used to judging human output get derailed when judging AI output. An AI generates a fantastic answer to an interview question that only solid humans get right, only to falter on the job, because the utterly generic interview question looks like millions of samples in its training data while the actual job was niche.
Every thread about LLMs has to have some guy like yourself saying how LLMs are like humans and smarter than humans for some reason.
Some humans are not as smart as LLMs, I give them that.
Nah, every human is smarter than advanced autocomplete.
People should understand that words like “unaware” or “overconfident” are not even applicable to these pieces of software. We might build intelligent machines in the future but if you know how these large language models work, it is obvious that it doesn’t even make sense to talk about the awareness, intelligence, or confidence of such systems.
I find it so incredibly frustrating that we’ve gotten to the point where the “marketing guys” are not only in charge but are believed without question: what they say is treated as true until proven otherwise.
“AI” becoming the colloquial term for LLMs and them being treated as a flawed intelligence instead of interesting generative constructs is purely in service of people selling them as such. And it’s maddening. Because they’re worthless for that purpose.
Oh god I just figured it out.
It was never that they’re good at their tasks, faster, or more cost-efficient.
They just sound confident to stupid people.
Christ, it’s exactly the same failing upwards that produced the C-suite. They’ve just automated the process.
“However, when the participants and LLMs were asked retroactively how well they thought they did, only the humans appeared able to adjust expectations.”
This is what everyone with a fucking clue has been saying for the past 5, 6? years these stupid fucking chatbots have been around.
What a terrible headline. Self-aware? Really?
It’s easy, just ask the AI “are you sure?” until it stops changing its answer.
But seriously, LLMs are just advanced autocomplete.
Ah, the Monte Carlo approach to truth.
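For what it’s worth, the joke describes a real trick (usually called self-consistency: sample the model several times and keep the majority answer). A toy sketch, where `ask_llm` is a hypothetical stand-in for whatever chat API you’d actually call:

```python
import random
from collections import Counter

def ask_llm(question: str) -> str:
    # Hypothetical stand-in for a real chat API call; here it just
    # simulates a model that answers inconsistently across samples.
    return random.choice(["42", "42", "42", "41"])

def monte_carlo_truth(question: str, samples: int = 9) -> str:
    # Ask the same question repeatedly and take a majority vote.
    votes = Counter(ask_llm(question) for _ in range(samples))
    return votes.most_common(1)[0][0]

print(monte_carlo_truth("What is 6 * 7?"))  # usually "42"
```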
I kid you not, early on (mid 2023) some guy mentioned using ChatGPT for his work and not even checking the output (he was in some sort of non-techie field that was still in the wheelhouse of text generation). I expressed that LLMs can include some glaring mistakes, and he said he fixed it by always including in his prompt: “Do not hallucinate content and verify all data is actually correct.”
Ah, well then, if he tells the bot not to hallucinate and to validate output, there’s no reason not to trust the output. After all, you told the bot not to, and we all know that self-regulation works without issue all of the time.
It gave me flashbacks when the Replit guy complained that the LLM deleted his data despite being told in all caps not to multiple times.
People really really don’t understand how these things work…
The people who make them don’t really understand how they work either. They know how to train them and how the software works, but they don’t really know how it comes up with the answers it comes up with. They just do a ton of trial and error. Correlation is all they really have. Which of course is how a lot of medical science works too. So they have good company.
They can even get math wrong. Which surprised me. Had to tell it the answer was wrong for it to recalculate and then get the correct answer. It was simple percentages of a list of numbers I had given it.
Language models are unsuitable for math problems broadly speaking. We already have good technology solutions for that category of problems. Luckily, you can combine the two - prompt the model to write a program that solves your math problem, then execute it. You’re likely to see a lot more success using this approach.
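A minimal sketch of that approach, with a hypothetical `ask_llm` standing in for a real chat API (here it returns a canned script so the example runs end to end):

```python
import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real implementation would call your model.
    # Canned response so the sketch is self-contained and runnable.
    return "print(round(sum([12.0, 47.5, 40.5]) * 0.15, 2))"

def solve_with_code(question: str) -> str:
    # Ask for a program, not an answer.
    code = ask_llm(
        "Write a self-contained Python script that prints only the "
        f"numeric answer to this problem:\n{question}"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # Execute the generated script (sandbox this in anything real) and
    # take the printed result as the answer instead of the model's arithmetic.
    run = subprocess.run([sys.executable, path],
                         capture_output=True, text=True, timeout=10)
    return run.stdout.strip()

print(solve_with_code("What is 15% of the sum of 12.0, 47.5 and 40.5?"))
# -> "15.0"
```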
Also, generally the best interfaces for LLMs combine non-LLM facilities transparently. The LLM might translate the prose into the format the math engine expects; an intermediate layer then recognizes a tag, submits the excerpt to the math engine, and substitutes the chunk with the engine’s output.
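A rough sketch of what that intermediate layer could look like, assuming a made-up `<calc>...</calc>` tag convention and a tiny whitelisted evaluator standing in for the math engine:

```python
import ast
import operator
import re

# Made-up tag format; the real convention would be whatever the
# system prompt instructs the model to emit around math it can't do.
TAG = re.compile(r"<calc>(.*?)</calc>", re.S)

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    # Minimal whitelisted arithmetic evaluator: numbers and basic
    # operators only, so model output can't execute arbitrary code.
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

def substitute_math(llm_output: str) -> str:
    # The intermediate layer: swap each tagged chunk for the engine's result.
    return TAG.sub(lambda m: str(safe_eval(m.group(1))), llm_output)

print(substitute_math("15% of 240 is <calc>240 * 0.15</calc>."))
# -> "15% of 240 is 36.0."
```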
Even when servicing a request to generate an image, the text generation model runs independently of the image generation, and an intermediate layer combines them. This can cause fun disconnects, like the guy asking for a full glass of wine. The text generation half is completely oblivious to the image generation half. It responds playing the role of a graphic artist dutifully doing the work without ever ‘seeing’ the image, and it assumes the image is good because that’s consistent with its training output. Then the user corrects it, and it goes about admitting that the picture (which it never ‘looked’ at) was wrong and retries the image generator with the additional context, producing a similarly botched picture.
Fun thing: when it gets the answer right, tell it it was wrong and then watch it apologize and “correct” itself to give the wrong answer.
In my experience it can, but it has been pretty uncommon. But I also don’t usually ask questions with only one answer.
I once gave it some kind of math problem (how to break down a certain amount of money into bills) and the LLM wrote a Python script for it, ran it, and thus gave me the correct answer. Kind of clever, really.
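No idea what it actually wrote, but a plausible guess is a greedy change-making script along these lines (the denominations here are assumed):

```python
def break_into_bills(amount: int, bills=(100, 50, 20, 10, 5, 1)) -> dict:
    # Greedy change-making: take as many of each bill as fits,
    # largest denomination first.
    result = {}
    for bill in bills:
        count, amount = divmod(amount, bill)
        if count:
            result[bill] = count
    return result

print(break_into_bills(187))
# -> {100: 1, 50: 1, 20: 1, 10: 1, 5: 1, 1: 2}
```

Greedy happens to give optimal counts for standard US-style denominations; an arbitrary set of bills would need dynamic programming instead.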
prompting concerns
Oh you.
They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the ‘uncanny valley’ of the mistakes is all the more striking, but they are just generating content with no concept of ‘mistake’ or ‘success’, or of the content being a model for something else rather than just a blend of stuff from the training data.
For example:
Me: Generate an image of a frog on a lilypad.
LLM: I’ll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.<includes a perfectly credible picture of a frog on a lilypad, request successfully processed>
Me (lying): That seems to have produced a frog under a lilypad instead of on top.
LLM: Thanks for pointing that out! I’m generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.<includes another perfectly credible picture>
It didn’t know anything about the picture; it just took the input at its word. A human would have stopped to say “uhh… what do you mean? The lilypad is on the water and the frog is on top of that.” Or, if the human were really trying to fulfill the request without clarification, they might have thought “maybe he wants it from the perspective of a fish, with the frog underwater?” A human wouldn’t have gone “you are right, I made a mistake, here I’ve tried again” and included almost exactly the same thing.
But the training data isn’t predominantly people blatantly lying about such obvious things, or second-guessing things that were so obviously done correctly.
The use of language like “unaware” when people are discussing LLMs drives me crazy. LLMs aren’t “aware” of anything. They do not have a capacity for awareness in the first place.
People need to stop talking about them using terms that imply thought or consciousness, because it subtly feeds into the idea that they are capable of such.
Okay fine, the LLM does not take into account in the context of its prompt that yada yada. Happy now word police, or do I need to pay a fine too? The real problem is people are replacing their brains with chatbots owned by the rich so soon their thoughts and by extension the truth will be owned by the rich, but go off pat yourself on the back because you preserved your holy sentience spook for another day.
Is that a recycled piece from 2023? Because we already knew that.
There goes middle management
But what about humans?
If you don’t know you are wrong even when you have been shown to be wrong, you are not intelligent. So A.I. has become “Adequate Intelligence”.
That definition seems a bit shaky. Trump & co. are mentally ill but they do have a minimum of intelligence.
Like any modern computer system, LLMs are much better and smarter than us at certain tasks while being terrible at others. You could say that having good memory and communication skills is part of what defines an intelligent person. Not everyone has those abilities, but LLMs do.
My point is, there’s nothing useful coming out of the arguments over the semantics of the word “intelligence”.
AIs evolved their own form of the Dunning-Kruger effect.
Oh shit, they do behave like humans after all.
Sounds pretty human to me. /s
Sounds pretty human to me. no /s