That would be right if they understood what they were talking about. It’s more akin to really advanced autocorrect that sounds like something the AI was trained on. So it sounds correct but really has no basis in truth beyond “the model predicts a human would say X next”. Truth is rarely the goal of any of these machine-learning language models, afaik.
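The “advanced autocorrect” idea can be sketched as a toy bigram model: count which word follows which in some training text, then always emit the most frequent follower. This is a made-up illustration (the corpus and function names are invented, and real LLMs are vastly more sophisticated), but it shows how a prediction can track training frequency rather than truth:

```python
from collections import Counter, defaultdict

# Toy training text — the "model" only ever learns from this.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each other word came right after it.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    counts = follow.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — the most common follower, true or not
```

Nothing here checks whether the output is correct; the model just echoes the statistics of whatever text it was fed.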
Why are we relying on language models to answer questions? These things don’t really “know” anything, right?
No one knows anything, get over yourself, buddy - it gave a correct answer way more politely than I ever could, so who’s gonna complain?
Don’t they pull from online sources? So it’s basically googling with extra steps and an unpredictable middleman.