Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology's ability…
It’s a machine-learning chatbot, not a calculator, and especially not “AI.”
Its primary focus is producing text that looks like something a human might say. It isn’t trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.
It doesn’t need to understand the question or give an accurate answer; it just needs to produce a sentence that sounds like something a human might say.
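To put that in code terms: the model just picks whatever continuation looks most plausible, with no notion of whether the arithmetic checks out. A toy sketch (the probabilities are made up for illustration, not pulled from any real model):

```python
import random

# Toy next-token distribution a model might assign after the prompt "2 + 2 =".
# These probabilities are invented for illustration; a real model's would come
# from a softmax over its entire vocabulary.
next_token_probs = {"4": 0.62, "5": 0.21, "22": 0.12, "four": 0.05}

def sample_continuation(probs):
    # Pick a continuation by plausibility. "4" is simply the likeliest-sounding
    # token here, not the result of any calculation.
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("2 + 2 =", sample_continuation(next_token_probs))
```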
So it confidently spews a bunch of incorrect shit, acts humble and apologetic while correcting none of its behavior, and constantly offers unsolicited advice.
I think it trained on Reddit data
We must be using different Reddits, my friend
This. It is able to tap into plugins and call functions, though, which is what it really should be doing. For math, the Wolfram Alpha plugin will always be more capable than ChatGPT alone, so we should be benchmarking how often it can correctly reformat your query, call Wolfram Alpha, and correctly format the result, not whether the statistical model behind ChatGPT happens to predict the right token.
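If you actually wanted to benchmark that, it might look something like the sketch below. `query_model`, the tool/query field names, and the test cases are all placeholders for whatever model and plugin stack you're poking at, not a real API:

```python
# Sketch of a tool-routing benchmark: we don't grade the model's own arithmetic,
# only whether it turned the question into a sensible Wolfram Alpha call.

def query_model(question):
    """Placeholder: call the chat model with the Wolfram Alpha plugin enabled
    and return the tool call it produced, e.g.
    {"tool": "wolfram_alpha", "query": "is 17077 prime"}."""
    raise NotImplementedError("wire this up to your model/plugin of choice")

# Hypothetical test cases; expected_query is what a correct reformatting looks like.
CASES = [
    {"question": "Is 17077 a prime number?", "expected_query": "is 17077 prime"},
    {"question": "What is 12! divided by 6!?", "expected_query": "12!/6!"},
]

def run_benchmark():
    correct = 0
    for case in CASES:
        try:
            call = query_model(case["question"])
        except NotImplementedError:
            print("no model wired up; nothing to score")
            return 0.0
        # Score only the routing/reformatting step, not the final answer.
        if call.get("tool") == "wolfram_alpha" and call.get("query") == case["expected_query"]:
            correct += 1
    return correct / len(CASES)

if __name__ == "__main__":
    print(f"tool-call accuracy: {run_benchmark():.0%}")
```

(Exact string match on the query is obviously naive; a real harness would check Wolfram Alpha's parsed interpretation instead, but the shape of the thing is the same.)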
It sounds like it’s time to merge Wolfram Alpha’s and ChatGPT’s capabilities to create the ultimate calculator.
If it’s trying to emulate a human then it’s spot on. I suck at maths.