The people who warn about AI risk aren’t worried about GenAI - they’re worried about AGI.
We’re raising a tiger puppy. Right now it’s small and cute, but it won’t stay that way forever.
AGI isn’t real
Yeah no only people who don’t understand the tech are worried about AGI. There is zero evidence to suggest that we’re anywhere on the right path to develop it. The chatbots are not intelligent, they are just a big bag of all the data the trainers could scrape and an algorithm to pull things out of that bag in a way that humans like.
Actual AGI would require us to understand how consciousness works. We don’t at all.
Where does it say that AGI needs to be conscious?
The general definition.
No, it doesn’t. It’s a reasonably safe assumption that something that intelligent is probably also conscious - but it doesn’t have to be.
We also don’t need to understand consciousness in order to create it in our systems. If consciousness is just an emergent feature of a high enough level of information processing, then it would automatically show up once we build such a system whether we intend it or not.
Hell, in the worst case we might create something we assume isn’t conscious - but it is - and it could be suffering immensely.
Whole lotta ifs and assumptions. “A high enough level of information processing” is meaningless if we don’t have any idea what sort of information processing could lead to consciousness, because it clearly isn’t just raw throughput.
AGI definitionally improves itself, which implies awareness of itself and intention. Those are a large part of how we define consciousness.
In neuroscience and philosophy, when people talk about consciousness, they’re typically referring to the fact of experience - that it feels like something to be. That experience has qualia.
Nowhere is it written that this is a requirement for general intelligence. It’s perfectly conceivable that a system could be more intelligent than any human while it doesn’t feel like anything to be that system. It could even appear conscious without actually being so. A philosophical zombie, so to speak.
AGI is fake
I don’t think AGI is fake, conceptually. Humans are just meat-based computers. Eventually we will build something of comparable power and efficiency.
However, LLMs don’t seem like a viable path to AGI imo.
We disagree about genies being real (they are not), so don’t worry about expressing or defending your points further.
Nobody’s saying AGI is here right now - it’s a concept, like worrying about an asteroid wiping us out before it actually shows up. Dismissing it as “fake” just ignores the trajectory we’re on with AI development. If we wait until it’s real to start thinking about risks, it might be too late.
nope, it’s fake bruh
just like genies, jesus, and NFTs