• tabular@lemmy.world · 24 days ago

    Before hitting submit I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

    Do they think the AI-written code Just Works™? Are they so detached from that code that they feel no embarrassment when it’s shit? It’s like calling yourself a fiction writer and putting “written by (your name)” on the cover of a book you didn’t write, one that’s nonsense anyway.

    • Feyd@programming.dev · 24 days ago

      LLM code generation is the ultimate Dunning-Kruger enhancer. They think they’re 10x ninja wizards because they can generate unmaintainable demos.

        • NotMyOldRedditName@lemmy.world · 24 days ago

          Sigh. Now when they enhance a grainy image on CSI, the AI will either invent a fake face and send them searching for someone who doesn’t exist, or it’ll use the face of someone in the training set and they’ll go after the wrong person.

          Either way, I have a feeling there’ll be some ENHANCE-failure episode due to AI.

    • atomicbocks@sh.itjust.works · 24 days ago

      From what I have seen, Anthropic, OpenAI, etc. seem to be running bots that go around submitting updates to open source repos with little to no human input.

      • Notso@feddit.org · 24 days ago

        You guys, it’s almost as if AI companies are trying to kill FOSS projects intentionally by burying them in garbage code. Sounds like they took a page from Steve Bannon’s playbook: flood the zone with slop.

    • JustEnoughDucks@feddit.nl · 23 days ago

      I would think they’ll have to combat AI code with an AI-code recognizer tool that auto-flags a PR or issue as AI-generated, so maintainers can simply run through and close them. If the contributor doesn’t come back to explain the code and post test results showing it works, the PR is auto-closed after a week or so.
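
      The flag-then-auto-close workflow above could be sketched like this. Everything here is hypothetical: the `AI_TELLS` phrase list stands in for a real classifier, and `PullRequest`, `flag_if_ai`, and `sweep` are made-up names, not any forge's actual API.

      ```python
      from dataclasses import dataclass
      from datetime import datetime, timedelta
      from typing import List, Optional

      # Crude stand-in for a real AI-text classifier (hypothetical phrase list).
      AI_TELLS = ("as an ai language model", "certainly! here is", "i hope this helps")


      @dataclass
      class PullRequest:
          title: str
          body: str
          flagged_at: Optional[datetime] = None
          author_responded: bool = False
          state: str = "open"


      def flag_if_ai(pr: PullRequest, now: datetime) -> bool:
          """Flag a PR whose text matches one of the crude AI tells."""
          text = f"{pr.title} {pr.body}".lower()
          if any(tell in text for tell in AI_TELLS):
              pr.flagged_at = now
              return True
          return False


      def sweep(prs: List[PullRequest], now: datetime,
                grace: timedelta = timedelta(days=7)) -> None:
          """Auto-close flagged PRs whose author never responded within the grace period."""
          for pr in prs:
              if (pr.state == "open" and pr.flagged_at is not None
                      and not pr.author_responded and now - pr.flagged_at > grace):
                  pr.state = "closed"
      ```

      The grace period does the real work here: a flagged PR survives only if a human comes back and responds, which is exactly the filter the comment proposes.
      
      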