We’ve been productively using AI for decades now – just not the AI you think of when you hear the term. Fuzzy logic, expert systems, basic automatic translation… Those are all things that were researched as artificial intelligence. We’ve been using neural nets (aka the current hotness) to recognize hand-written zip codes since the 90s.
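For the curious, that zip-code idea is easy to reproduce today. Here is a minimal sketch using scikit-learn's bundled 8×8 digit images — an illustrative stand-in, since the actual 1990s zip-code readers used larger scans and convolutional networks:

```python
# Train a small neural net to recognize hand-written digits,
# in the spirit of the 1990s zip-code readers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 1797 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One small hidden layer is plenty for this toy dataset.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

A few dozen lines and commodity hardware now handle what was once a research problem, which is rather the point about how long this field has been quietly useful.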
Of course that’s an expert definition of artificial intelligence. You might expect something different. But saying that AI isn’t AI unless it’s sentient is like saying that space travel doesn’t count if it doesn’t go faster than light. It’d be cool if we had that but the steps we’re actually taking are significant.
Even if the current wave of AI is massively overhyped, as usual.
The issue is that AI is a buzzword to move product. The ones working on it call it an LLM; the ones seeking buy-in call it AI.
While labels change, it’s not great to dilute meaning because a corporation wants to sell something but also wants a free ride on the collective zeitgeist. Hoverboards went from a gravity-defying skateboard to a rebranded Segway without the handle that would burst into flames. But Segway 2.0 didn’t focus-test well with the kids, and here we are.
The people working on LLMs also call them AI. It’s just that LLMs are a small subset of the AI research area. That is, every LLM is AI, but not every AI is an LLM.
Just look at the names of the conferences the research is published in.
Maybe, but that still doesn’t mean that the label AI was ever warranted, nor that the ones who chose it had a product to sell. The point still stands. These systems do not display intelligence any more than a Rube Goldberg machine is a thinking agent.
These systems do not display intelligence any more than a Rube Goldberg machine is a thinking agent.
Well now you need to define “intelligence” and that’s wandering into some thick philosophical weeds. The fact is that the term “artificial intelligence” is as old as computing itself. Go read up on Alan Turing’s work.
Does “AI” have agency?
That’s just kicking the can down the road, because now you have to define agency. Do you have agency? If you didn’t, would you even know? Can you prove it either way? In any case, this is no longer a scientific discussion, but a philosophical one, because whether or not an entity has “intelligence” or “agency” are not testable questions.
We have functional agency regardless of your stance on determinism, in the same way that computers can obtain functional randomness even when they are unable to generate a truly random number. Artificial intelligence requires agency and spontaneity, and these are the lowest bars it must pass. These systems do not pass them, and the current path of their development cannot pass them, no matter how updated their training sets or how bespoke their weights are.
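The “functional randomness” analogy maps onto pseudo-random number generators: a fully deterministic update rule whose output nevertheless behaves randomly enough for practical use. A minimal sketch using a linear congruential generator — the constants are the commonly cited Numerical Recipes values, and this is an illustrative toy, not a cryptographic RNG:

```python
# A linear congruential generator: completely deterministic,
# yet its output is "functionally random" for many purposes.
class LCG:
    def __init__(self, seed):
        self.state = seed

    def next(self):
        # Numerical Recipes constants; the update is pure arithmetic.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32  # scale to [0, 1)

# Deterministic: the same seed always yields the same sequence...
a, b = LCG(42), LCG(42)
assert [a.next() for _ in range(5)] == [b.next() for _ in range(5)]

# ...yet the output looks uniform on [0, 1): the mean sits near 0.5.
rng = LCG(42)
samples = [rng.next() for _ in range(10_000)]
print(sum(samples) / len(samples))
```

Whether “functional agency” follows from determinism the way functional randomness does is, of course, exactly the philosophical question under dispute here; the code only illustrates the randomness half of the analogy.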
These large models do not have “true” concepts of what they provide, in the same way a book does not have a concept of the material it contains, no matter how fancy the index is.
Is this scientifically provable? I don’t see how this isn’t a subjective statement.
Artificial intelligence requires agency and spontaneity
Says who? Hollywood? For some seventy years the term has been used by computer scientists to describe computers using “fuzzy logic” and “learning programs” to solve problems that are too complicated for traditional data structures and algorithms to reasonably tackle. It’s really a very general and fluid field of computer science, as old as computer science itself. See the Wikipedia page.
And finally, there is no special sauce to animal intelligence. There’s no such thing as a soul. You yourself are a Rube Goldberg machine of chemistry and electricity, your only “concepts” obtained through your dozens of senses constantly collecting data 24/7 since embryo. Not that the intelligence of today’s LLMs is comparable to ours, but there’s no magic to us; we’re Rube Goldberg machines too.
It’s still an unsettled question whether we even do.
We have functional agency, regardless of your stance on determinism. “AI” does not even reach that bar, and so far has no pathway to reach it on its current trajectory. Though that might be by design. Whether humanity wants an actual AI is a different discussion entirely. Either way, these large models are not AI; they are just sold as such to make them seem like more than they actually are.
We’ve been using neural nets (aka the current hotness) to recognize hand-written zip codes since the 90s.
Not to go way off topic here, but this reminds me: Palm’s “Graffiti” handwriting recognition was a REALLY good input method back when I used it. I bet it did something similar.