I just read how someone on RetroArch is trying to improve documentation using Copilot, but not in the way we might think. His approach is to have Copilot read the documentation and give him follow-up questions a hypothetical developer might have. This could also be extended to regular code, I guess: have it pretend to be a student, maybe, and ask questions instead of generating code or making changes. I really like this approach.
For context, I myself don’t use online AI tools, only “weak” offline AI run on my own hardware. And I mostly don’t use it to generate code; it’s more like asking questions in the chat box, or revising parts of code and then analyzing and testing the “improved” version. Otherwise I don’t use it much in any other form. It’s mainly for experimenting.
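To make the idea concrete, here is a minimal sketch of what such a prompt could look like. The function name and the prompt wording are my own invention, not taken from the RetroArch workflow; the resulting string could be fed to any chat model, local or otherwise.

```python
# Hypothetical sketch of the "ask questions instead of answering" idea.
# Nothing here is from the actual RetroArch/Copilot setup; it just shows
# how you might frame the role-reversal in a single prompt.

def build_reviewer_prompt(doc_text: str) -> str:
    """Wrap a documentation excerpt in instructions that make the model
    play a developer asking follow-up questions, rather than rewriting
    the text or generating code."""
    return (
        "You are a developer reading this project's documentation for the "
        "first time. Do NOT rewrite it and do NOT generate code. Instead, "
        "list the follow-up questions you would ask the maintainers about "
        "anything unclear, missing, or ambiguous.\n\n"
        "--- DOCUMENTATION ---\n"
        f"{doc_text}\n"
        "--- END ---\n"
        "Questions:"
    )

# Example: wrap a (made-up) snippet of build documentation.
prompt = build_reviewer_prompt("To build the core, run make. Flags are optional.")
print(prompt)
```

The same wrapper would work for the "student reading code" variant by swapping the documentation excerpt for a source file and adjusting the role description.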


Rubber ducky can be anything you want it to be and has solved more bugs to date than all the LLMs combined. https://en.wikipedia.org/wiki/Rubber_duck_debugging
I don’t think the case I talked about in my post is comparable to rubber duck debugging.
Yes, it is. Maybe you need to experience it firsthand.
I read about what rubber duck debugging is in the linked article. It’s a totally different thing than what I’m talking about.