Except that “hallucinate” is a terrible term. A hallucination is when you perceive something that doesn’t exist. What AI is doing is making things up; i.e. lying.
Yes.
Who are you trying to convince?
This language also implicitly credits LLMs with an ability to think, which they don’t have.
My point is we literally can’t describe their behaviour without using language that makes it seem like they do more than they do.
So we’re just going to have to accept that discussing it comes with a bunch of asterisks that a lot of people will ignore, and which many will actively try to hide in an effort to hype up the possibility that this tech is a stepping stone to AGI.
The interface makes it appear that the AI is sapient. You talk to it like a human being, and it responds like a human being. Like you said, it might be impossible to avoid ascribing things like intentionality to it, since it’s so good at imitating people.
It may very well be a stepping stone to AGI. It may not. Nobody knows. So, of course, we shouldn’t assume that it is.
I don’t think that “hallucinate” is a good term regardless. Not because it makes AI appear sapient, but because it’s inaccurate whether the AI is sapient or not.
https://www.dictionary.com/browse/hallucinate