The AI we know is missing the I. It does not understand anything. All it does is find patterns in 1’s and 0’s. It has no concept of anything beyond the 1’s and 0’s in its input data. It has no concept of correlation vs. causation, which is why it hallucinates (confidently presents patterns that don’t actually hold up) so often.
Turns out finding patterns in 1’s and 0’s can do some really cool shit, but it’s not intelligence.
This is why I hate calling it AI.
You can call it an LLM.
It’s artificial in the sense that it’s not real. It’s “not real” intelligence masquerading as “real” intelligence.
This is not necessarily true. While it’s using pattern recognition on a surface level, we’re not entirely sure how AI comes up with its output.
But beyond that, a lot of talk has centered on a threshold where AI begins training other AI & can improve through iteration. Once that happens, people believe AI will not only improve extremely rapidly, but we will understand even less of what is happening when AI black boxes train other AI black boxes.
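To make the “AI training AI” idea concrete, here is a toy Python sketch (my own illustration, not how any real lab does it): a small “student” model is fit to pseudo-labels produced by a fixed “teacher” model instead of to human-labeled data. Every name and number here is made up for the example.

import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a fixed, already-trained model that we treat as a black box.
def teacher(x):
    return np.tanh(x @ np.array([[1.5], [-2.0]]))

# Generate inputs and let the teacher label them; no human labels involved.
X = rng.normal(size=(1000, 2))
y_teacher = teacher(X)

# "Student": a linear model trained only on the teacher's outputs.
w = np.zeros((2, 1))
lr = 0.1
for step in range(500):
    pred = X @ w
    grad = X.T @ (pred - y_teacher) / len(X)  # gradient of (half) the mean squared error
    w -= lr * grad

print("student weights:", w.ravel())
print("student MSE vs teacher:", float(np.mean((X @ w - y_teacher) ** 2)))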
I can’t quite wrap my head around this. These systems were coded, written by humans to call functions, assign weights, and parse data. How do we not know what they’re doing?
Same way anesthesiology works. We know how to sedate people, but we don’t fully understand why it works. AI is much the same. That doesn’t mean it’s sentient yet, but to call it merely a text predictor is also selling it short. It’s a black box under the hood.
Writing code to process data is absolutely not the same way anesthesiology works 😂 Comparing state-specific, logic-bound systems to the messy biological processes of a nervous system is what got us this misattribution of ‘AI’ in the first place. Currently it is just glorified auto-correct working off statistical data about human language. I’m still not sure how a written program can have a voodoo spooky black box that does things we don’t understand as a core part of it.
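For what it’s worth, the “statistical autocomplete” part is easy to demonstrate with a toy bigram model (vastly simpler than a real LLM, but the same basic idea of predicting the next token from statistics over text; the corpus below is invented for the example):

from collections import Counter, defaultdict

corpus = "the closest planet to the sun is mercury . the sun is a star .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word seen in the corpus.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "sun" (seen after "the" twice vs "closest" once)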
The uncertainty comes from reverse-engineering how a specific output relates to the prompt input. The model pushes the prompt through billions of learned weights to compute the answer to “What is the closest planet to the Sun?” We can record which parts of the network activate, but we can’t interpret what most of those activations mean, so we can’t precisely say how the answer was computed.
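A tiny made-up forward pass illustrates the point: every intermediate number is fully visible, but nothing labels what any of them mean. The two-layer network and its random weights below are purely hypothetical stand-ins for billions of learned parameters.

import numpy as np

rng = np.random.default_rng(1)

# A made-up 2-layer network; the random weights stand in for learned parameters.
W1 = rng.normal(size=(4, 8))    # input layer -> hidden layer
W2 = rng.normal(size=(8, 3))    # hidden layer -> output layer

x = rng.normal(size=(1, 4))     # stand-in for an embedded prompt

hidden = np.maximum(0, x @ W1)  # ReLU activations: fully inspectable, but unlabeled
output = hidden @ W2

print("hidden activations:", hidden.round(2))
print("output scores:", output.round(2))
# Every number here can be printed; what's missing is any mapping from
# individual activations to human concepts like "planet" or "Sun".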