No, really, if you understood how the language models work, you would understand it’s not really intelligence. We just tend to humanize it because that’s what our brains do.
There are a lot of great articles that summarize how we got to this stage, and it’s pretty interesting. I’ll try to update this post with a link later.
I think LLMs are useful (and fun) and have a place, but intelligence they are not.
I’m still waiting for a definition of intelligence that won’t invite the same moving of goalposts the Turing Test did.
I’m happy with the Oxford definition: “the ability to acquire and apply knowledge and skills”.
LLMs don’t have knowledge because they don’t actually understand anything. They are algorithmic response generators that assign scores to candidate tokens and spit out the highest-scoring token given all the previous tokens.
If asked to answer 10*5, they can’t reason through the math. They can only recognize 10, * and 5 as a sequence of tokens that, in the training data, is usually followed by the token 50. Thus, 50 is the highest-scoring token, and it is the answer they will choose. Things get more interesting when you ask questions that aren’t in the training data. If the model has nothing more direct to copy from, it will regurgitate a sequence of tokens that sounds as close as possible to something in the training data: thus, a hallucination.
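To make the “highest-scoring token” idea concrete, here is a minimal Python sketch of greedy next-token selection. Everything in it is made up for illustration: the tiny vocabulary, the hard-coded scores, and the function names are stand-ins, not how any real model is implemented (a real LLM computes these scores with billions of learned parameters).

```python
# Toy sketch of "pick the highest-scoring next token", as described above.
# The scores are hard-coded stand-ins for what a trained network would produce.

def next_token_scores(context):
    """Return made-up scores for a few candidate next tokens, given the context."""
    if context == ("10", "*", "5", "="):
        # A pattern seen often in training data: "50" ends up with the highest score.
        return {"50": 9.1, "15": 2.3, "105": 1.7, "500": 0.4}
    # Unfamiliar context: the scores are flatter, and the winner may be a token
    # that merely sounds plausible -- i.e. a hallucination.
    return {"50": 1.2, "15": 1.1, "105": 1.3, "500": 0.9}

def greedy_next_token(context):
    scores = next_token_scores(context)
    return max(scores, key=scores.get)  # argmax over candidate tokens

print(greedy_next_token(("10", "*", "5", "=")))   # -> 50
print(greedy_next_token(("17", "*", "23", "=")))  # -> whatever scores highest, right or wrong
```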
This can be intuitively understood if you’ve gone through difficult college classes. There are two ways to prepare for exams: you either try to understand the material, or you try to memorize it.
The latter isn’t good for actually applying the information in the future, and it’s most akin to what an LLM does. It regurgitates, but it doesn’t learn. You show it a bunch of difficult engineering problems, and it won’t be able to solve different ones that use the same principle.
A human could be described in very similar terms. People think we’re magic or something, but we too are just a weighted neural network assembling outputs based strictly on training data built from reinforcement. We just happen, for the moment, to be much, much better at it, with far more massive models. Of course that is reductive, but many seem to forget that brains struggle similarly when operating outside their training data.
That’s an obsolete description of what a mammal’s brain is.
Do you have a better one?
I could find a dozen better ones on Google, but I’m not a neurophysiologist.
The important thing here is that neural nets do not describe the human brain.
Artificial neural nets, no, but neural networks in general, yes. Just because the computer version isn’t like the real thing doesn’t mean that humans do not use a type of neural network.
And your experience to say this is?..
That’s a strong claim. Got an academic paper to back that up?
I’m slightly confused. Which part needs an academic paper? I’ve made three admittedly reductive claims:
1. Human brains are neural networks.
2. Their outputs are based on training data built from reinforcement.
3. We have a much more massive model than current artificial networks.
First, I’m not trying to make some really clever statement. I’m just saying there is a perspective from which the human brain can be described in broadly similar terms. Nevertheless, let’s look at the only three assertions I made. Given that the term “neural network” takes its very name from the neurons that make up brains, I assume you don’t take issue with the first. On the second point, I don’t know if linking to scholarly research is helpful. Is it not well established that animals learn and use reward circuitry, like the role of dopamine in neuromodulation? We also have… education, where we are fed information so that we retain it and can recount it down the road.
I guess maybe it is worth exploring the third, even though I really wasn’t intending to make a scholarly statement. There is an article in Scientific American that puts the number of neural connections at around 100 trillion. Now, how that equates directly to model parameters is entirely unclear, but even if you take glial cells, where the count can be as low as 40-130 billion according to “The search for true numbers of neurons and glial cells in the human brain: A review of 150 years of cell counting”, that number is in the same order of magnitude as current models’ parameter counts. So if your issue is that AI models are actually larger than the human brain, maybe there is something cogent there. But given that there is likely at least a 1000:1 ratio of neural connections to neurons, I just don’t think that comparison is fair at all.
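Just to sanity-check that arithmetic, here is a rough back-of-the-envelope sketch using the figures quoted above. The ~86-billion-neuron estimate and the 100-billion-parameter figure for a “current” large model are my own assumptions for the sake of the comparison, not numbers taken from the cited sources.

```python
# Rough arithmetic with the figures quoted above; nothing here is a measurement.
neural_connections = 100e12   # ~100 trillion neural connections (Scientific American figure)
glial_cells = (40e9, 130e9)   # 40-130 billion glial cells (cell-counting review)
neurons = 86e9                # assumed ~86 billion neurons, a commonly cited estimate
llm_parameters = 100e9        # assumed parameter count for a "current" large model

# The glial-cell count alone is in the same order of magnitude as the parameter count.
print(f"glial cells vs parameters: {glial_cells[0] / llm_parameters:.1f}x to {glial_cells[1] / llm_parameters:.1f}x")

# Connections outnumber neurons by roughly 1000:1, the ratio mentioned above.
print(f"connections per neuron: {neural_connections / neurons:,.0f}")
```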
So, first of all, thank you for the cogent attempt at responding. We may disagree, but I sincerely respect the effort you put into the comment.
The specific part that seemed like a pretty big claim to me was that human brains are “simply” more complex neural networks and that their outputs are based strictly on training data.
“Is it not well established that animals learn and use reward circuitry, like the role of dopamine in neuromodulation?”

While true, this is way too reductive to be a one-to-one comparison with LLMs. Humans have genetic instinct and a body-mind connection that isn’t cleanly mappable onto a neural network. For example, biologists are only just now scraping the surface of the link between the brain and the gut microbiome, which plays a much larger role in cognition than previously thought.
Another example where the brain = neural network model breaks down is the fact that the two hemispheres are much more separated than previously thought, so much so that some neuroscientists say each person has, in effect, two different brains with two different personalities that communicate via the corpus callosum.
There are many more examples I could bring up, but my core point is that the neural network = brain analogy is just that, a simplistic analogy, on the same level as thinking about gravity only as “the force that pushes you downwards”.
To say that we understand the brain fully, even to the point where we could make a model of a mosquito’s brain (around 220,000 neurons), I think is mistaken. I’m not saying we’ll never understand the brain well enough to attempt such a thing; I’m just saying that drawing a casual equivalence between mammalian brains and neural networks is woefully inadequate.
For what it’s worth, in spite of my poor choice of words and general ignorance on many topics, I agree with everything you said here, and I find these topics fascinating. Especially that of our microbiome, which I think is larger than our brain by mass; so who’s really doing the thinking around here?
Even the question of “who” is a fascinating deep dive in and of itself. Consciousness as an emergent property implies that your gut microbiome is part of the “who” doing the thinking in the first place :))