It’s the same as “Google said this”.
Before AI, Google couldn't say anything on its own; it's a search engine. Same with gpt: it's a tool to access information from different sources.
Just having information out on the Internet, in a search index, or surfaced by an LLM doesn't make it relevant or credible…
And what baffles me: it's pretty easy to set up gpt to cite sources and provide links, filtering through sources the user trusts. Why do none of my friends do it? Why is "gpt said" even an argument in a discussion?..
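For what it's worth, "set up gpt to cite sources" can be as little as a system prompt. A minimal sketch, assuming the OpenAI Python SDK; the domain allowlist and prompt wording are made up for illustration, not a built-in feature:

```python
# Sketch: nudge an assistant to cite links from a user-trusted allowlist.
# TRUSTED_DOMAINS and the prompt wording are illustrative assumptions.

TRUSTED_DOMAINS = ["en.wikipedia.org", "nature.com", "arxiv.org"]

def build_system_prompt(trusted_domains):
    """Build a system prompt asking the model to cite links from trusted domains."""
    domains = ", ".join(trusted_domains)
    return (
        "For every factual claim, cite the source with a direct link. "
        f"Prefer sources from these domains: {domains}. "
        "If no trusted source supports a claim, say so explicitly."
    )

# Using it with the OpenAI SDK (needs an API key, so sketched here, not run):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[
#         {"role": "system", "content": build_system_prompt(TRUSTED_DOMAINS)},
#         {"role": "user", "content": "When was the first exoplanet discovered?"},
#     ],
# )
```

The point is that the friction is one prompt, not a product change, which makes the "gpt said" habit even harder to excuse.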
Except people just straight-up copy-paste gpt output. At the very least, people used to say "I googled and got this result and that result." We've taken what was minimal work and made it minimaler.
Except information from Google was human-made, at least to some degree. With LLMs there is no such guarantee.
… Buffalo buffalo baffles