The technological struggles are in some ways beside the point. The financial bet on artificial general intelligence is so big that failure could cause a depression.
I can think of only two ways that we don’t reach AGI eventually.
1. General intelligence is substrate dependent, meaning that it’s inherently tied to biological wetware and cannot be replicated in silicon.
2. We destroy ourselves before we get there.
Other than that, we’ll keep incrementally improving our technology and we’ll get there eventually. Might take us 5 years or 200 but it’s coming.
If it’s substrate dependent then that just means we’ll build new kinds of hardware that include whatever mysterious function biological wetware is performing.
Discovering that this is indeed required would involve some world-shaking discoveries in information theory, though, ones that are not currently in line with what’s thought to be true. And yes, I’m aware of Roger Penrose’s theories about non-computability and microtubules and whatnot. I attended a lecture he gave on the subject once. I get the vibe of Nobel disease from his work in that field, frankly.
If it really turns out to be the case, though, microtubules can be laid out on a chip.
I could see us gluing third world fetuses to chips and saying not to question it before reproducing it.
Imagine that we just end up creating humans the hard, and less fun, way.
Penrose has always had a fertile imagination, and not all his hypotheses have panned out. But he does have the gift that, even when wrong, he’s generally interestingly wrong.
The only reason we wouldn’t get to AGI is point number two.
Point number one doesn’t make much sense, given that all we are is bags of small, complex molecular machines operating synergistically with each other under an extremely delicate balance. If humanity does not kill itself first, we will eventually be able to create small molecular machines that work together synergistically, which is really all that life is. Except it’s quite likely that this could be made simpler, without all of the complexity much of biology requires to survive harsh conditions and decades of abuse.
It seems quite likely that we will be able to synthesize AGI far before we will be able to synthesize life, as the conditions for intelligence by all accounts seem to be simpler than the conditions for the living creature that maintains the delicate ecosystem of molecular machines necessary for that intelligence to exist.
“eventually” won’t cut it for the investors though.
We’re probably going to find out sooner rather than later.
This is a funny graph. What’s the Y-axis? Why the hell are DVDs a bigger innovation than the Steam Engine or the Light Bulb? They get a way bigger increase on the Y-axis.
In fact, the top 3 innovations since 1400 according to the chart are
Microprocessors
Man on Moon
DVDs
And I find it funny that in the year 2025 there are no people on the Moon and most people do not use DVDs anymore.
And speaking of Microprocessors, why the hell are Transistors not on the chart? Or even Computers in general? Where did humanity place their Microprocessors before the Apple Macintosh was designed (and that counts as an innovation? The IBM PC was way more impactful…)
Such a funny chart you shared. Great joke!
Also “3D Movies” is a whole joke on its own.
The chart is just for illustration purposes, to make a point. I don’t see why you need to be such a dick about it. Feel free to reference any other chart you like better that displays the progress of technological advancement throughout human history - they all look the same; for most of history nothing happened, and then everything happened. If you don’t think that this progress has been increasing at explosive speed over the past few hundred years then I don’t know what to tell you. People 10k years ago had basically the same technology as people 30k years ago. Now compare that with what has happened even just during your lifetime.
If we make this graph in 100 years, almost nothing modern like hybrid cars, DVDs, etc. will be in it.
Just like this graph excludes a ton of improvements in metallurgy that enabled the steam engine.
There’s also no reason for it to be a smooth curve; in my head it looks more like a series of steps with varying flat spots between them.
And we are terrible at predicting how long a flat spot will be between improvements.
I think you might be mixing up AGI and consciousness?
I think first we have to figure out if there is even a difference.
Well of course there is? I mean that’s like not even up for debate?
Consciousness is that we “experience” the things that happen around us; AGI is a higher intelligence. If AGI “needs” consciousness then we can just simulate it (so no real consciousness).
Of course that’s up for debate; we’re not even sure what consciousness really is. That is a whole philosophical debate on its own.
Well, that was what I meant: there is absolutely no indication that there would be a need for consciousness to create general intelligence. We don’t need to figure out what consciousness is if we already know what general intelligence is and how it works, and we seem to know that fairly well IMO.
The same argument applies to consciousness as well, but I’m talking about general intelligence now.
Well, I’m curious then, because I have never seen or heard or read anywhere that general intelligence would need some kind of wetware. Why would it? It’s just computation.
I have heard and read about consciousness potentially having that barrier though, but only as a potential problem, and only if you want conscious robots ofc.
I don’t think it does, but it seems conceivable that it potentially could. Maybe there’s more to intelligence than just information processing - or maybe it’s tied to consciousness itself. I can’t imagine the added ability to have subjective experiences would hurt anyone’s intelligence, at least.
I don’t think so. Consciousness has very little influence on the mind; we’re mostly just along for the ride. And general intelligence isn’t that complicated to understand, so why would it be dependent on some substrate? I think the burden of proof lies on you here.
Very interesting topic though, I hope I’m not sounding condescending here.
Well, first of all, like I already said, I don’t think there’s substrate dependence on either general intelligence or consciousness, so I’m not going to try to prove there is - it’s not a belief I hold. I’m simply acknowledging the possibility that there might be something more mysterious about the workings of the human mind that we don’t yet understand, so I’m not going to rule it out when I have no way of disproving it.
Secondly, both claims - that consciousness has very little influence on the mind, and that general intelligence isn’t complicated to understand - are incredibly bold statements I strongly disagree with. Especially with consciousness, though in my experience there’s a good chance we’re using that term to mean different things.
To me, consciousness is the fact of subjective experience - that it feels like something to be. That there’s qualia to experience.
I don’t know what’s left of the human mind once you strip away the ability to experience, but I’d argue we’d be unrecognizable without it. It’s what makes us human. It’s where our motivation for everything comes from - the need for social relationships, the need to eat, stay warm, stay healthy, the need to innovate. At its core, it all stems from the desire to feel - or not feel - something.
I’m on board 100% with your definitions. But I think you make a little mistake here: general intelligence is about problem solving, reasoning, the ability to make a mental construct out of data, remember things…
It doesn’t, however, imply that it has to be a human doing it (even if the “level” is usually pegged at human level) or that a human experiences it.
Maybe I’m nitpicking, but I feel this is often overlooked, and lots of people conflate AGI with a need for consciousness, for example.
Then again, maybe computers cannot be as intelligent as us 😞 but I sincerely doubt it.
So IMO, the human mind probably needs its consciousness to have general intelligence (as you said, it probably won’t function at all without it, or will function very differently), but I argue that’s just because we are humans with wetware and all of that junk, and it doesn’t at all mean consciousness is an inherent part of intelligence in itself. And I see absolutely no reason why it must be.
Complicated topic for sure!
We’re already growing meat in labs. I honestly don’t think lab-grown brains are as far off as people are expecting.
It’s so hard to keep up these days.
BBC: Lab-grown brain cells play video game Pong
Full paper (2022): In vitro neurons learn and exhibit sentience when embodied in a simulated game-world
Well, think about it this way…
You could hit AGI by fastidiously simulating the biological wetware.
Except that each atom in the wetware is going to require n atoms’ worth of silicon to simulate. Simulating 10^26 atoms or so seems like a very, very large computer, maybe planet-sized? It’s beyond the amount of memory you can address with 64-bit pointers.
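A rough back-of-the-envelope sketch of that address-space point (the 10^26 atom count and the one-address-per-atom assumption are illustrative only, not figures from any reference):

```python
# Back-of-the-envelope: how far 64-bit addressing falls short of an atom-level simulation.
# All figures below are rough assumptions for illustration.
atoms_to_simulate = 1e26      # order-of-magnitude atom count of the wetware being simulated
address_space_64bit = 2**64   # ~1.8e19 distinct addresses reachable with 64-bit pointers

print(f"64-bit address space: {address_space_64bit:.2e}")
print(f"Atoms to simulate:    {atoms_to_simulate:.2e}")
print(f"Shortfall factor:     {atoms_to_simulate / address_space_64bit:.1e}")
# Even at one address per atom you come up roughly five million times short,
# before storing any actual per-atom state.
```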
General computer research (e.g. smaller feature size) reduces n, but eventually we reach the physical limits of computing. We might be getting uncomfortably close right now, barring fundamental developments in physics or electronics.
The goal of AGI research is to give you a better improvement in n than mere hardware improvements. My personal concern is that LLMs aren’t actually getting us much of an improvement in the AGI value of n. Likewise, LLMs still have many orders of magnitude fewer parameters than a human brain simulation, so many of the advantages that let us train a singular LLM model might not hold for an AGI model.
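To put rough numbers on that gap, here is a tiny sketch using commonly cited ballpark figures (the LLM parameter count and synapse count are assumptions for illustration, not measurements from this thread or any specific paper):

```python
import math

# Illustrative orders of magnitude only; every figure below is an assumed ballpark.
llm_parameters = 1e12    # ~trillion-parameter large language model (assumed)
brain_synapses = 1e14    # ~100 trillion synapses, a common rough estimate
atom_level_sim = 1e26    # atom-by-atom simulation, per the estimate above

print(f"LLM -> synapse-level brain model: ~10^{math.log10(brain_synapses / llm_parameters):.0f} gap")
print(f"LLM -> atom-level simulation:     ~10^{math.log10(atom_level_sim / llm_parameters):.0f} gap")
```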
Coming up with an AGI system that uses most of the energy and data center space of a continent, yet only manages to be about as smart as a very dumb human or maybe even just a smart monkey, is an achievement in AGI research. But it doesn’t really get you anywhere compared to the competition, which is accidentally making another human amidst a drunken one-night stand and feeding them an infinitesimal fraction of that continent’s worth of energy and data center space.
For point 1, we can grow neurons and use them for computation, so it’s not actually an issue even if it were true (which it almost certainly isn’t, because it isn’t magic).
https://youtu.be/bEXefdbQDjw
Yeah, it most definitely is not magic given our growing knowledge of the molecular machines that make life possible.
The mysticism of how life works has long been dispelled. Now it’s just a matter of understanding the insane complexity of it.
Sure, we can grow neurons, but ultimately neurons are just molecular machines with a bunch of complications surrounding them.
It stands to reason that we can develop and grow molecular machines that achieve the same outcomes with fewer complexities.
I don’t think our current LLM approach is it, but I don’t think intelligence is unique to humans at all.