I’m not talking about a precise definition of consciousness, I’m talking about a consistent one. Without a definition, you can’t argue that an AI, a human, a dog, or a squid has consciousness. You can proclaim it, but you can’t back it up.
The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch and I know that we model perceptrons after neurons.
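For context on how simple the artificial "neuron" is, here is a minimal perceptron sketch (function name, weights, and the AND-gate example are illustrative, not from the original post): a weighted sum followed by a hard threshold, a few lines of arithmetic.

```python
# Minimal perceptron: the artificial "neuron" is just a weighted sum
# plus a threshold -- the loose analogy to biological neurons the
# post is talking about.
def perceptron(inputs, weights, bias):
    # weighted sum of inputs, loosely analogous to dendritic integration
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    # hard threshold, loosely analogous to a neuron "firing"
    return 1 if activation > 0 else 0

# Hand-picked weights make it compute a logical AND
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0
```

That is the whole unit; everything else in an NN is stacking and training millions of these.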
Researchers know that there are differences between the two. We can generally eliminate any of those differences (and much research does exactly that). No researcher, scientist, or philosopher can tell you what critical property neurons may have that enables consciousness. Nobody actually knows, and people who claim to know are just making stuff up.
> I’m not talking about a precise definition of consciousness, I’m talking about a consistent one.
It does not matter. Any way you try to spin it, whatever imprecise or “inconsistent” definition anybody wants to use, literally EVERYBODY with half a brain will agree that humans DO have consciousness and a rock does not. A squid could be arguable. But LLMs are just a millimeter above rocks, and light-years below squids, on the ladder toward consciousness.
> The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch and I know that we model perceptrons after neurons.
Yea. The same way Bburago models real cars. They look somewhat similar, if you close one eye, squint the other, and don’t know how far away each of them is. But apart from looks, they have NOTHING in common and in NO way offer the same functionality. We don’t even know how many different types of neurons there are, let alone come close to replicating each of their functions and operations:
https://alleninstitute.org/news/why-is-the-human-brain-so-difficult-to-understand-we-asked-4-neuroscientists/
So no, AI/LLMs are absolutely and categorically nowhere near the point where we could be debating whether they are conscious or not. Anyone questioning this is a victim of the Dunning-Kruger effect: they have zero clue how complex brains and neurons are, and how basic, simple, and function-poor current NN technology is in comparison.