Summary: MDN's new "ai explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. This is a strange decision for a technical ...
This feature is in beta. That issue title is sort of exaggerated, tbh. Test it if you want, but take everything their beta LLM spits out with a grain of salt.
The “ai explain” button doesn’t mention that it’s in beta even in the expanded detail text. But more importantly, even once out of beta, LLMs will never be trustworthy references without humans vetting them. This isn’t a “beta” problem; it’s a “completely misunderstood the problem and solution” problem.
I suppose they could add a source URL for the information, so you can verify correctness.
But then I don’t get why we need a lying AI if we can get the URL in the first place. At that point, it would work just like any other good search engine.
Sorry if I sound salty, but I still don’t get why companies put fake AI engines everywhere.
It may do more harm than good: it spits out plausible answers that are either completely or subtly wrong (the latter is obviously worse), and it’s not easy to discern how good an answer actually is.