I can literally never reproduce these. I’ve tried several times now.
Try “why I eat Vim”?
I’ve been able to reproduce some, like the “how to carry <insert anything here> across a river” one where it always turns it into the fox, goose and grain puzzle.
But generally on anything that’s gone viral, by the time you try to reproduce it someone has already gone in and hard-coded a fix to prevent it from giving the same stupid answer going forward.
Because they’re fake.
I agree. People used to get so mad at me for suggesting that, for some reason.
Yeah, I never get these strange AI results.
Except, the other day I wanted to convert some units and the AI result was having a fucking stroke for some reason. The numbers did not make sense at all. I’d never seen it do that before, but alas, I did not take a screenshot.
Usually I’ll see something mild or something niche get wildly messed up.
A few times I think I managed to get in a query from a post, but I suspect they are monitoring for viral bad queries and very quickly massage them one way or another so they stop producing the ridiculous answer. For example, a fair number of times the AI overview just seemed to be disabled for queries I found in these sorts of posts.
You also have to contend with the reality that people can trivially fake these, and if the AI isn’t weird enough, they will inject some weirdness to make their content more interesting.
Those LLMs can’t handle numbers; they have zero concept of what a number is. They can pull some definitions, and they can sort of get very basic arithmetic to work in a limited domain based on syntax rules, but they will mess up most calculations. ChatGPT tries to work around this by recognizing that a prompt is math-related, passing it to a more conventional Wolfram-Alpha-style solver, and then using the language model to format the reply into something more appealing. But even this approach often fails, because if the model gets confused for any reason it will feed moronic data to the maths algorithm.
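If it helps, here’s roughly the routing pattern I mean, as a toy Python sketch. Every piece of it (the classifier, the expression extraction, the eval-based “solver”) is a made-up stand-in for illustration, not anything ChatGPT actually runs:

    import re

    def looks_like_math(prompt: str) -> bool:
        # Stand-in for the "is this a math question?" classifier.
        return bool(re.search(r"\d", prompt))

    def extract_expression(prompt: str) -> str:
        # The weak link: if this step misreads the prompt, the exact
        # solver gets garbage in and returns confident garbage out.
        match = re.search(r"[\d.+\-*/() ]*\d[\d.+\-*/() ]*", prompt)
        return match.group().strip() if match else ""

    def solve_exact(expr: str) -> float:
        # Stand-in for the Wolfram-Alpha-style backend; a real one
        # parses symbolically, eval() is only for this sketch.
        return eval(expr, {"__builtins__": {}})

    def answer(prompt: str) -> str:
        if looks_like_math(prompt):
            value = solve_exact(extract_expression(prompt))
            return f"The answer is {value}."  # LLM would phrase this nicely
        return "(fall back to plain language-model generation)"

    print(answer("what is 12.5 * 3 + 4?"))  # -> The answer is 41.5.

The failure mode lives in extract_expression: the solver itself is exact, but it can only be as right as whatever the language side hands it.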
At least one dumb one was reproducible. I’d look for it, but it was probably a few hundred comments ago.
The “sauce vs dressing” one worked for me when I first heard about it, but in the following days it refused to give an AI answer, and now it has a “reasonable” AI answer.
The original, if you haven’t seen it:
Love it :D