orclev@lemmy.ml to Technology@beehaw.org • Death By API: Reddit Joins Twitter In Pricing Out Apps
1 year ago

Honestly I would have joined Beehaw except I disagree with the no-downvote policy. It forces you into a terrible tradeoff: either you upvote literally everything that isn't bad, but then have no way of actually indicating truly good content, or you only upvote the truly good content, but then have no way of indicating bad content. You could always block users who post bad content, but that's a super heavy-handed approach with no real nuance, and it doesn't help improve the community.
That's the crux of the problem: an LLM has no understanding of what it's saying, and it doesn't know how to use references. All it knows is that, in similar contexts, this set of words tended to follow that other set of words. It doesn't actually understand anything. It's capable of producing output that looks correct at a casual glance but is often wildly wrong.
Just look at that legal filing that idiot lawyer used ChatGPT to generate. It produced fake references that were trivial for a real lawyer to spot, because they used the wrong citation format for the district they were supposedly from. They looked like real citations because they were modeled on how real citations look, but the model didn't understand that citation styles differ by court district and that the claimed district and the citation style have to match.
LLMs are very good at producing convincing-sounding bullshit, particularly to the uninformed.
I saw a post here the other day where someone was saying they thought LLMs were great for learning, because beginners often don't know where to start. There might be some merit to that if it's used carefully, but by the same token it's incredibly dangerous, because it often takes very deep knowledge to see the various ways the LLM's output is wrong.