The public fundamentally misunderstands this tech because salesmen lied to them. An LLM is not AI. It just says the most likely thing based on what is most common in its training data for that scenario. It can’t do math or solve problems; it can only tell you what the most likely answer would be. It can’t actually execute functions. It’s like Family Feud, where it says what the most people surveyed said.
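To make the “survey says” point concrete: a real LLM predicts tokens with a neural network, not raw counts, but a toy bigram model shows the idea (the mini corpus and names here are made up for illustration):

```python
from collections import Counter

# Hypothetical mini "training corpus" standing in for the model's training data.
corpus = "the cat sat on the mat and the cat ate the fish".split()

def next_word_counts(context_word):
    """Count every word that followed context_word in the corpus."""
    followers = Counter()
    for prev, nxt in zip(corpus, corpus[1:]):
        if prev == context_word:
            followers[nxt] += 1
    return followers

# "Survey says": the single most common continuation wins.
counts = next_word_counts("the")
print(counts.most_common())      # [('cat', 2), ('mat', 1), ('fish', 1)]
print(counts.most_common(1)[0])  # ('cat', 2)
```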
Some of them will “do math,” but not with the LLM predictor: they have a separate math engine, and the predictor decides when to use it. What’s great is that when it outputs results, it’s not clear whether it engaged the math engine or just guessed.
That depends on the harness, though. In the raw model output it’s clear when a tool call happened; it’s up to the application UI around it whether that’s shown directly to the user, or whether you only see the LLM’s final response based on it.
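Roughly, the loop looks like this; the message shapes and function names below are invented for illustration, not any real vendor’s API. In the raw transcript the tool call is explicit; whether the user ever sees it is a UI choice:

```python
# Hedged sketch of a generic tool-calling harness (all structures made up).

def call_model(messages):
    """Stand-in for the LLM call. A real model decides whether to answer
    directly or request a tool; this stub always requests the calculator."""
    return {"role": "assistant", "content": None,
            "tool_call": {"name": "calculator", "arguments": "37 * 43"}}

def calculator(expr):
    """Toy math engine: handles only 'a * b' for the demo."""
    a, b = (int(part) for part in expr.split("*"))
    return a * b

messages = [{"role": "user", "content": "What is 37 * 43?"}]
reply = call_model(messages)

# In the raw transcript, the tool call is explicit and unambiguous...
if reply.get("tool_call"):
    result = calculator(reply["tool_call"]["arguments"])
    final_text = f"37 * 43 = {result}"  # the model's wrap-up of the tool result
else:
    final_text = reply["content"]       # the model guessed on its own

# ...but a chat UI may surface only this line, hiding whether the engine ran:
print(final_text)  # 37 * 43 = 1591
```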
I explain it as asking 100 people to Google something and taking the most common answer.
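Something like this, with made-up poll numbers; note the most common answer isn’t necessarily the right one:

```python
from collections import Counter

# Hypothetical poll: 100 people google "capital of Australia" and reply.
answers = ["Sydney"] * 52 + ["Canberra"] * 44 + ["Melbourne"] * 4

consensus, votes = Counter(answers).most_common(1)[0]
print(consensus, votes)  # Sydney 52 -- the popular answer, not the correct one
```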
Yeah, that’s exactly what Family Feud does.
Yep, but instead of “name something a woman keeps in her purse” it’s “write my legal document” or “is it ok to lick a lamp socket”
Great question! The answer to all three of your queries is “yes.” Would you like me to search for the nearest lamp socket?
Hey felbane llm, should I get a face tattoo?
Is a human much different? We too require tons of training and we too are prone to stupid mistakes.
Fundamentally, yes and no. The original commenter could’ve saved his breath; if people wanted to be educated on AI, they have plenty of resources to do so, but instead they choose to remain ill informed. The difference is that humans are capable of critical thinking and conceptual connection. We are just as prone to mistakes as AI; we just have a much higher aptitude for mistakes lol. Hence the goal isn’t to make a perfect AI, it’s the much more achievable goal of making AIs that beat us in specific fields, then to beat us in all fields.
It’s missing features, obviously (think neuroplasticity), but is that how AI differs from human intelligence, or simply a lack in the current generation?
It seems to be a flaw on both the hardware and software sides of things. Hardware-wise, we have yet to make chips that achieve the processing density of human brain matter, and heat generation becomes an issue as you try to scale smaller systems up. Software-wise, we know our current neural networks don’t scale up well, so we seem to be waiting on more foundational research into more efficient algorithms. My suspicion is that we’re not really going to get true general superintelligence until we start manufacturing chips that incorporate living neurons; it just seems cheaper to use an already existing computing system (the neuron) than to design your own architecture from scratch.
Yes.