Retool, a development platform for business software, recently published the results of its State of AI survey. Over 1,500 people took part, all from the tech industry:
Over half of all tech industry workers view AI as overrated.
Largely because we understand that what they’re calling “AI” isn’t AI.
This is a growing pet peeve of mine. If and when actual AI becomes a thing, it’ll be a major turning point for humanity comparable to things like harnessing fire or electricity.
…and most people will be confused as fuck. “We’ve had this for years, what’s the big deal?” -_-
As in AGI?
No, INT.
What is that?
I’ve seen it referred to as AGI, but I think that’s wrong. ChatGPT isn’t intelligent in the slightest; it only guesses which word is statistically most likely to come next. There is no thinking or problem solving involved.
A while ago I saw an article with a title along the lines of “spark of AGI in ChatGPT 4”, because it chose to use a calculator tool when facing a problem that required one. That would be AI (and not AGI): it has a problem, and it learns and uses available tools to solve it.
AGI would be on a whole other level.
Edit: Grammar
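The tool-use pattern described above (a model recognizing that a problem needs a calculator and dispatching to one) can be sketched roughly as below. The "model" here is a toy stand-in, not a real LLM, and the `TOOL:` protocol is an invented placeholder:

```python
import re

def toy_model(prompt: str) -> str:
    """Stand-in for an LLM: if the prompt contains a multiplication,
    it 'decides' to request the calculator tool instead of guessing."""
    match = re.search(r"(\d+)\s*\*\s*(\d+)", prompt)
    if match:
        return f"TOOL:calculator:{match.group(0)}"
    return "I can answer that directly."

def calculator(expression: str) -> str:
    # re.split keeps the captured operator, giving [left, '*', right].
    a, _op, b = re.split(r"\s*(\*)\s*", expression)
    return str(int(a) * int(b))

def run(prompt: str) -> str:
    # One round of the loop: model output is checked for a tool request,
    # and the tool's result is folded into the final answer.
    reply = toy_model(prompt)
    if reply.startswith("TOOL:calculator:"):
        result = calculator(reply.split(":", 2)[2])
        return f"The answer is {result}."
    return reply

print(run("What is 123 * 456?"))  # The answer is 56088.
```

A real system does the same thing with the model choosing the tool, which is why it looks like problem solving from the outside.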
The argument “it just predicts the most likely next word”, while true, massively undervalues what it even means to predict the next word or token. Largely these predictions are based on sentences and ideas the model has trained on from its data sets. It’s pretty intelligent if you think about it. You read a textbook, and then when you apply the knowledge or take a test, you use what you read to form a new sentence in relation to the context of the question or problem. For the model’s “text prediction” to be correct, it has to understand certain relationships between complex ideas and objects to some capacity. Yes, it absolutely is not as good as human intelligence, but what it’s doing is much more advanced than the text prediction on your phone keyboard. It’s a step in the right direction: overhyped right now, but the hype is funneling cash into research, and the models are already getting more advanced. Right now half of what it says is hot garbage, but it can be pretty accurate.
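The baseline the comment is comparing against, phone-keyboard-style prediction, can be shown with a toy bigram model: count which word follows which in a corpus and always emit the most frequent follower. The corpus here is invented for illustration; an LLM differs by conditioning on far more context than one previous word:

```python
from collections import Counter, defaultdict

# Tiny corpus; real models train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
follows: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    # "Statistically most likely next word": the most frequent follower.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # cat  ("cat" follows "the" twice; "mat"/"fish" once)
```

The gap between this and an LLM is exactly the point being argued: getting long-range next-token prediction right forces the model to capture relationships that simple counts cannot.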
Right? Like, I, too, predict the next word in my sentence to properly respond to inputs with desired output. Sure I have personality (usually) and interests, but that’s an emergent behavior of my intelligence, not a prerequisite.
It might not formulate thoughts the way we do, but it absolutely emulates some level of intelligence, artificially.
I think so many people overrate human intelligence, thus causing them to underrate AI. Don’t get me wrong, our brains are amazing, but they’re also so amazing that they can make crazy cool AI that is also really amazing.
People just hate the idea of being meat robots, I don’t blame em.
Given that AI isn’t purported to be AGI, how do you define “AI” such that multimodal transformers don’t qualify? They develop abstract world models as linear representations, are trained on unthinkable amounts of human content mirroring a wide array of capabilities, and can do things thought to be impossible as recently as three years ago, such as explaining jokes or solving riddles that aren’t in the training set. How is that not “artificial intelligence”?
Yup. LLM RAG is just search 2.0 with a GPU.
For certain use cases it’s incredible, but those use cases shouldn’t be your first idea for a pipeline.
THANK YOU! I’ve been saying this a long time, but have just kind of accepted that the definition of AI is no longer what it was.
It absolutely is AI. A lot of stuff is AI.
It’s just not that useful.
The decision tree my company uses to deny customer claims is not AI despite the business constantly referring to it as such.
There’s definitely a ton of “AI” that is nothing more than an If/Else statement.
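The kind of if/else “AI” described above looks something like this. The field names and thresholds are invented for illustration, not taken from any real claims system:

```python
def review_claim(amount: float, has_receipt: bool, prior_claims: int) -> str:
    """A 'decision tree' that is nothing but nested if/else rules."""
    if amount > 10_000:
        return "deny"  # high-value claims rejected outright
    if not has_receipt:
        # Repeat filers without receipts are denied; others get a human.
        return "deny" if prior_claims > 2 else "manual review"
    return "approve"

print(review_claim(500.0, True, 0))  # approve
```

Hand-written rules like these are deterministic and fully explainable, which is precisely why many practitioners resist calling them AI.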
For many years AI referred to that type of technology. It is not in fact AGI, but “AI” historically, in the technical field, refers more to decision trees and classification / linear regression models.
That’s basically what video game AI is, and we’re happy enough to call it that.
Well… it’s a video game. We also call them “CPU” which is also entirely inaccurate.
It’s useful at sucking down all the compute we complained crypto used
The main difference is that crypto was/is burning huge amounts of energy to run a distributed ponzi scheme. LLMs are at least using energy to create a useful tool (even if there is discussion over how useful they are).
I argue AI is much easier to pull a profit from than a currency exchange also 🙂
Yeah, it’s funny how that little tidbit just went quietly into the bin, not to be talked about again.
There are significant differences between statistical models and AI.
I work for an analytics department at a Fortune 100 company. We have a very clear delineation between what constitutes a model and what constitutes an AI.
That’s true. Statistical models are very carefully engineered and tested, while current machine learning models are created by throwing a lot of training data at the software and hoping for the best, i.e. that the things the model learns are not complete bullshit.
Yeah, an AI is a model you can’t explain.