LLMs are useful for providing generic examples of how a function works. This is something that would previously take an hour of searching the docs and online forums, but the LLM can do it very quickly, which I appreciate. But I have a library I want to use that was just updated with entirely new syntax, and the LLMs are pretty much useless for it. Back to the docs I go! Maybe my terrible code will help to train the model. And in my field (marine biogeochemistry), the LLM generally cannot understand the nuances of what I’m trying to do. Vibe coding is impossible, and I doubt the training set will ever be large or relevant enough for it to become feasible.
That’s simply not true. LLMs with RAG can easily catch up with new library changes.
Subjectively speaking, I don’t see it doing that good a job of staying current, or of prioritizing current material over older material.
While RAG is the way to give an LLM a shot at staying current, I just haven’t seen it do that good a job with library documentation. Maybe it can handle tweaks like additional properties or arguments, but I just don’t see more structural changes to libraries being handled well.
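For what it’s worth, here’s a minimal sketch of what doc-RAG amounts to, using naive keyword-overlap retrieval instead of embeddings so it runs standalone. The doc chunks and query are made-up placeholders, not real tmap documentation:

```python
# Sketch of retrieval-augmented prompting over library docs.
# Retrieval here is naive keyword overlap, not embeddings; the
# "documentation" chunks below are invented placeholders.

def score(query: str, chunk: str) -> int:
    # Count how many query words appear in the chunk.
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Return the k chunks with the highest keyword overlap.
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # Stuff the retrieved chunks into the prompt as the only allowed context.
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using ONLY this documentation:\n{context}\n\nQuestion: {query}"

docs = [
    "placeholder chunk: mapping layers are now configured with a fill argument",
    "placeholder chunk: legends are customized through a separate options object",
]
print(build_prompt("how do I set the fill?", docs))
```

The weak point the thread is pointing at: if old and new docs both sit in the index, retrieval happily returns a mix of both, and nothing in this pipeline tells the model which version wins.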
Exactly. It’s a very niche library (tmap for R), and it was just completely overhauled. Gemini, ChatGPT, and Copilot all seem pretty confused and mix up the old and new syntax.
That’s largely down to the implementation of the LLM engine. For Python or JS, you can feed it the API schema of the entire virtual environment.
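A rough sketch of what “feeding the API schema” could mean in Python: introspect an installed module’s public callables and dump their current signatures as context. This only demonstrates the idea on a stdlib module; walking a whole virtualenv would mean iterating over every installed package the same way:

```python
# Sketch: extract current function/class signatures from an installed
# module so they can be pasted into an LLM prompt as up-to-date context.
import importlib
import inspect

def api_schema(module_name: str) -> dict[str, str]:
    mod = importlib.import_module(module_name)
    schema = {}
    for name, obj in inspect.getmembers(mod, callable):
        if name.startswith("_"):
            continue  # skip private names
        try:
            schema[name] = str(inspect.signature(obj))
        except (ValueError, TypeError):
            schema[name] = "(signature unavailable)"
    return schema

# Demo on the stdlib json module; in practice you'd run this over the
# libraries actually imported by your project.
for name, sig in sorted(api_schema("json").items())[:4]:
    print(f"json.{name}{sig}")
```

Because the signatures come from whatever version is actually installed, a syntax overhaul like tmap’s would show up automatically; whether the model then respects that context over its stale training data is the open question in this thread.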
You can’t know without checking, though; it may be wrong.
The term for that is actually ‘slopping’. Kthx ;-)