• FarceOfWill@infosec.pub
    3 months ago

    Until someone uses it for a little more than boilerplate, and the reviewer nods that bit through because it’s hard to review and isn’t the kind of mistake the person who “wrote” it would normally make.

    Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.

    • HauntedCupcake@lemmy.world
      3 months ago

      Agreed: using LLMs for code requires you to be an experienced dev who can understand what they puke out. For those very specific and disciplined people, it’s a net positive.

      Generally, though, I agree it’s more risk than it’s worth.