I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or to wrap your head around, precisely because the sentences are so convincing.
Any good examples of how to explain this in simple terms?
Edit: some good answers already! I find that especially the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
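One toy demonstration that can help: a word-level Markov chain strings words together purely by “which word tended to follow this one in the text it saw”, with no model of meaning at all. This is only a minimal sketch (the corpus, function names, and output here are just illustrative), and it is not how an LLM works internally, but the core move of predicting the next word from statistics is the same idea, vastly scaled down:

```python
# Toy word-level Markov chain: generates text with zero understanding.
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Map each word to the list of words that followed it."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def babble(model: dict, start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:  # dead end: no known continuation
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

corpus = (
    "the model predicts the next word the model does not know "
    "what the words mean the sentences sound fine anyway"
)
model = train(corpus)
print(babble(model, "the"))
# e.g. "the model predicts the words mean the next word the sentences sound"
```

Run it a few times: the output is grammatical-ish and confident-sounding, but there is clearly nobody home. An LLM does the same kind of next-word prediction with billions of learned parameters instead of a lookup table, which is exactly why its sentences are so much more convincing.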
deleted by creator
I would argue that it is quite obviously correct, but that the interesting question is whether humans are in the same category (I would argue yes).
deleted by creator
You sound like a chatbot who’s offended by its intelligence being insulted.
Bro is lost in the sauce
deleted by creator