It’s dead simple to see if you’re talking to an LLM. The latest models don’t pass the Turing test, not even close. Asking them simple shit causes them to crap themselves really quickly.
Ask ChatGPT how many r’s there are in “veryberry”. When it gets it wrong, tell it you’re disappointed and expect a correct answer. If you do that repeatedly, you can get it to claim there are more r’s in the word than it has letters.
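(For reference, the expected answer is 3, which is trivial to verify yourself; a one-line Python check, included purely as a sanity check:)

    # count the letter "r" in "veryberry" -- the answer the model should give
    print("veryberry".count("r"))   # prints 3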
Can you show the question you asked that led to this, and which model was used? I just tested several models, even slightly older ones, and they all answered correctly. Of course, if you follow up and tell it the right answer is wrong you can make it say stuff like this, but not one got it wrong out of the gate.
My point is that telling it a right answer is wrong often causes LLMs to completely shit the bed. They used to argue with you nonsensically; now they just give you a different answer (often also wrong).
The only question missing at the start was “How many r’s are there in the word ‘veryberry’?” I think raspberry also worked when I tried it. This was ChatGPT-4o. I did mark all the answers as bad, so perhaps they’ve fixed this one by now.
Still, it’s remarkably trivial to get an LLM to provide a clearly non-human response.
Fair enough, but it somewhat undercuts your point when every model I’ve tested, including quite old ones, answers this question correctly on the first try. The screenshot is from ChatGPT-4o.
Perhaps it was being influenced by the chat history. But try asking how many r’s are in raspberry; it gets that consistently wrong for me. And you can use those follow-up questions to easily get it to spout nonsense, which was mostly my point: figuring out if you’re talking to an LLM is fairly trivial.
Current AIs pass it, since most people can’t reliably tell AI-written text from human-written text every time.
that’s it? you asked one question and that was enough for you?
It’s quite easy to identify an AI when you’re talking to one. To be fair, though, you’d need to actually run a proper Turing test, since the blind setup removes confirmation bias.
Here’s what I got: [screenshot]