That seems rather risky, considering that LLMs don’t actually verify that the information they output is accurate, and OpenAI specifically recommends against using its GPT models for that purpose, since they can present falsehoods as fact.
As opposed to Google searching manually, which always has accurate outputs and never outputs falsehoods as fact. 🙂
As long as you double-check the source of an answer, I don’t see an issue.
If you’re double-checking the sources, both to make sure they exist and that they’re accurate, you may as well do the research without using an LLM in the first place.
You’re just adding to your workload unnecessarily in that case.
I’m thinking that double-checking is slightly faster than doing the research yourself.