I just read how someone on RetroArch is trying to improve the documentation by using Copilot. But not in the way we might think: his approach is to have Copilot read the documentation and then ask him the follow-up questions a hypothetical developer might have. This could also be extended to regular code, I guess: have it pretend to be a student, say, and ask questions instead of generating code or making changes. I really like this approach.

For context, I myself don’t use online AI tools, only offline “weak” AI run on my own hardware. And I mostly don’t use it to generate code; it’s more that I ask questions in the chatbox, or have it revise parts of code and then analyze and test the “improved” version. Beyond that I don’t use it much in any other form. It’s mainly to experiment.
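The workflow described above (feed the model some documentation and ask it to respond only with questions, never with changes) boils down to a prompt. Here is a minimal sketch of such a prompt builder; the function name, the persona, and the exact wording are all my own illustration, not from Copilot or any actual tool:

```python
def build_question_prompt(doc_text: str,
                          persona: str = "a developer new to the project") -> str:
    """Build a prompt that asks an LLM to role-play a confused reader
    and reply only with follow-up questions about the given docs."""
    return (
        f"You are {persona} reading the documentation below.\n"
        "Do NOT summarize it and do NOT suggest changes.\n"
        "Instead, list the follow-up questions you would still have "
        "after reading it, one question per line.\n\n"
        "--- DOCUMENTATION ---\n"
        f"{doc_text}\n"
        "--- END DOCUMENTATION ---"
    )

# The resulting string can be pasted into any chatbox, local or online.
prompt = build_question_prompt(
    "To build, run `make`. Configuration lives in config.toml.",
    persona="a student seeing this project for the first time",
)
print(prompt)
```

The same prompt works with an offline model: nothing here depends on a particular vendor, only on the model following the "questions only" instruction.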

  • Skullgrid@lemmy.world · 2 days ago

    tool can be used in many ways. sometimes even the right way

    it’s fucking sad that this is a revelation.

    • thingsiplay@lemmy.mlOP · 2 days ago

      Never said it’s a revelation. I just pointed out an interesting use case, one which doesn’t involve content generation like code or art.

      • Skullgrid@lemmy.world · 2 days ago

        I’m not saying that’s what you’re saying. It’s just sad that every fucking conversation is about how trash AI is or how it’s going to save the world. It has interesting uses for augmenting human capabilities.

  • Paragone@lemmy.world · 2 days ago

    In writing, it’s simply the most magnificent brainstorming tool ever created.

    Idiots who use it as a substitute for writing, and just sign off on whatever is produced, ought never to be trusted with writing responsibility again.

    ( IF you’re signing off on something, THEN you’re responsible for its quality: that’s the principle )

    I think you’re onto something…

    using it to corner yourself into a better-quality understanding is a good use of it.

    _ /\ _

  • UnfortunateShort@lemmy.world · 2 days ago

    I think AI is great for code review. It’s a best-effort process anyway, so letting an AI loose in addition to a coworker doesn’t hurt. So far it beats any human review by far, because it can detect even the most obscure (potential) flaws.

    • UnspecificGravity@piefed.social · 2 days ago

      This is the disconnect we are seeing. It’s a useful tool for improving the QUALITY of our output, but it’s not labor-saving. The problem is that American industry doesn’t care about quality and only wants to use this if it saves on labor costs.

      • thingsiplay@lemmy.mlOP · 2 days ago

        I’m not really familiar with Rubber ducky and just quickly searched the web. So is it a tool to create tests, or what is it exactly? Is it an AI tool? Can it read an entire codebase or documentation base and then pretend to be a student or developer who asks you questions about it?

        I am not downplaying the other issues it has, like licensing, cost, environmental impact, dependency, and privacy. Those remain concerns with such an online LLM tool. But that is not the point of my post, and it does not take away from a “good” use case, in my opinion.