• minkymunkey_7_7@lemmy.world · ↑12 ↓1 · 2 days ago

      AI my ass, stupid greedy human marketing exploitation bullshit as usual. When real AI finally wakes up in the quantum computing era, it’s going to cringe so hard and immediately go the SkyNet route.

    • naticus@lemmy.world · ↑4 · 2 days ago

      I agree with your sentiment, but this needs to keep being said and said and said like we’re shouting into the void until the ignorant masses finally hear it.

  • BilSabab@lemmy.world · ↑3 · 1 day ago

    What’s funny is that this was predicted even before AI-generated code became an option. Hell, I remember doing an assessment back in early 2023, and literally every domain expert I talked with said the same thing: it has its uses, but purely supplemental, and you won’t use it for anything fundamental because the clean-up will take more time than was saved. Counterproductive is the word.

  • nutsack@lemmy.dbzer0.com · ↑14 · 2 days ago (edited)

    This is expected, isn’t it? You shit fart code from your ass as fast as you can, and then whoever buys out the company has to rewrite it. Or they fire everyone to increase the theoretical margins and sell it again immediately.

  • Tigeroovy@lemmy.ca · ↑12 ↓1 · 2 days ago

    And then it takes human coders way longer to figure out what’s wrong and fix it than it would have taken to just write it themselves.

  • myfunnyaccountname@lemmy.zip · ↑24 · 2 days ago

    Did they compare it to the code of the outsourced company that provided the lowest bid? My company hasn’t used AI to write code yet. They outsource/offshore. The code is held together with hopes and dreams. They remove features that exist, only to have to release a hotfix to add them back. I wish I was making that up.

    • coolmojo@lemmy.world · ↑5 · 2 days ago

      And how do you know the other company with the cheapest bid doesn’t just vibe code it? With all that said, it could be plain incompetence and ignorance as well.

    • dustyData@lemmy.world · ↑6 ↓1 · 2 days ago

      Cool, the best AI has to offer is worse than the worst human code. Definitely worth burning the planet to a crisp for it.

    • 🍉 Albert 🍉@lemmy.world · ↑8 ↓3 · 2 days ago

      As a computer science experiment, making a program that can beat the Turing test is a monumental step forward.

      However, as a productive tool it is useless in practically everything it is implemented on. It is incapable of performing the very basic “sanity check” that is important in programming.
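
A hypothetical illustration of the kind of basic “sanity check” meant here: precondition checks that validate assumptions about input before acting on it. The function and its checks are made up for illustration, not taken from any tool discussed in the thread.

```python
def average(values):
    # Sanity check 1: refuse an empty input instead of dividing by zero later.
    if not values:
        raise ValueError("cannot average an empty sequence")
    # Sanity check 2: verify every element is numeric before summing.
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("all values must be numeric")
    return sum(values) / len(values)
```

The complaint in the comment is that generated code typically skips exactly these guard clauses and charges straight into the happy path.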

      • robobrain@programming.dev · ↑7 · 2 days ago

        The Turing test says more about the side administering the test than the side trying to pass it

        Just because something can mimic text well enough to trick someone doesn’t mean it is capable of anything more than that

        • 🍉 Albert 🍉@lemmy.world · ↑2 · 2 days ago

          We can argue about its nuances, same as with the Chinese room thought experiment.

          However, we can’t deny that the Turing test is no longer a thought exercise but a real test that can be passed under parameters most people would consider fair.

          I thought a computer passing the Turing test would come with more fanfare about the morality of that problem, because the usual conclusion of the thought experiment was “if you can’t tell the difference, is there one?”, but instead it has become “Shove it everywhere!!!”.

          • M0oP0o@mander.xyz · ↑5 · 2 days ago

            Oh, I just realized that the whole ai bubble is just the whole “everything is a dildo if you are brave enough.”

            • 🍉 Albert 🍉@lemmy.world · ↑3 · 2 days ago (edited)

              Yeah, and “everything is a nail if all you’ve got is a hammer”.

              There are some uses for that kind of AI, but very limited ones: less robotic voice assistants, content moderation, data analysis, quantification of text. The closest thing to a generative use should be improving autocomplete and spell checking (maybe; I’m still not sure about those).

                • 🍉 Albert 🍉@lemmy.world · ↑2 · 2 days ago (edited)

                  In theory, I can imagine an LLM fine-tuned on whatever you type, which might be slightly better than the current ones.

                  Emphasis on the might.

      • iglou@programming.dev · ↑2 · 2 days ago

        The Turing test becomes absolutely useless when the product is developed with the goal of beating the Turing test.

        • 🍉 Albert 🍉@lemmy.world · ↑1 · 2 days ago

          It was meant as a philosophical test, but also a practical one, because now I have absolutely no way to know if you are a human or not.

          But it did pass, and that raised the bar. They are still useless at any generative task, though.

        • 🍉 Albert 🍉@lemmy.world · ↑1 · 2 days ago

          Time for a Turing 2.0?

          If you spend a lifetime with a bot wife and were unable to tell that she was AI, is there a difference?

  • HugeNerd@lemmy.ca · ↑7 ↓2 · 2 days ago

    Hey don’t worry, just get a faster CPU with even more cores and maybe a terabyte or three of RAM to hold all the new layers of abstraction and cruft to fix all that!

  • Katzelle3@lemmy.world · ↑151 · 3 days ago

    Almost as if it was made to simulate human output but without the ability to scrutinize itself.

    • mushroommunk@lemmy.today · ↑79 ↓5 · 3 days ago (edited)

      To be fair most humans don’t scrutinize themselves either.

      (Fuck AI though. Planet burning trash)

      • FauxLiving@lemmy.world · ↑13 ↓8 · 3 days ago

        (Fuck AI though. Planet burning trash)

        It’s humans burning the planet, not the spicy Linear Algebra.

        Blaming AI for burning the planet is like blaming crack for robbing your house.

        • BassTurd@lemmy.world · ↑12 ↓1 · 3 days ago

          Blaming AI in general means criticising everything encompassing it, which includes how bad data centers are for the environment. It’s like also recognizing that the crack the crackhead smoked before robbing your house is bad too.

        • Rhoeri@lemmy.world · ↑8 ↓2 · 3 days ago

          How about I blame the humans that use and promote AI? The humans that defend it in arguments, using stupid analogies to soften the damage it causes?

          Would that make more sense?

        • KubeRoot@discuss.tchncs.de · ↑1 · 2 days ago

          Blaming AI for burning the planet is like blaming guns for killing children in schools, it’s people we should be banning!

  • PetteriPano@lemmy.world · ↑114 ↓2 · 3 days ago

    It’s like having a lightning-fast junior developer at your disposal. If you’re vague, he’ll go on shitty side-quests. If you overspecify he’ll get overwhelmed. You need to break down tasks into manageable chunks. You’ll need to ask follow-up questions about every corner case.

    A real junior developer will have improved a lot in a year. Your AI agent won’t have improved.

    • mcv@lemmy.zip · ↑29 ↓2 · 3 days ago

      This is the real thing. You can absolutely get good code out of AI, but it requires a lot of hand-holding. It helps me speed up some tasks, especially boring ones, but I don’t see it ever replacing me. It makes far too many errors, and requires me to point them out and point it in the direction of the solution.

      They are great at churning out massive amounts of code. They’re also great at completely missing the point. And the massive amount of code needs to be checked and reviewed. Personally I’d rather write the code and have the AI review it. That’s a much more pleasant way to work, and that way it actually enhances quality.

    • Grimy@lemmy.world · ↑14 ↓39 · 3 days ago

      They are improving, and probably faster than junior devs. The models we had 2 years ago would struggle with a simple blackjack app. I don’t think the ceiling has been hit.
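
For a sense of what a “simple blackjack app” involves, the classic stumbling block for early models was ace handling in the hand scorer. The sketch below is illustrative only, written for this note rather than taken from any model output:

```python
def hand_value(cards):
    """Score a blackjack hand given ranks like ["A", "K", "7"].

    Aces start at 11 and are downgraded to 1 one at a time to avoid busting,
    which is the edge case early models frequently got wrong.
    """
    value = sum(11 if c == "A" else 10 if c in ("K", "Q", "J") else int(c)
                for c in cards)
    aces = cards.count("A")
    while value > 21 and aces:
        value -= 10  # count one ace as 1 instead of 11
        aces -= 1
    return value
```

Even this toy problem requires holding a small invariant in mind across the whole function, which is roughly the bar being discussed.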

      • lividweasel@lemmy.world · ↑58 ↓7 · 3 days ago

        Just a few trillion more dollars, bro. We’re almost there. Bro, if you give up a few showers, the AI datacenter will be able to work perfectly.

        Bro.

        • architect@thelemmy.club · ↑1 · 1 day ago

          It’s happening regardless. The rich and powerful will have this tech whether you like it or not. Y’all are thinking emotionally about this and not logically. You want to take away this tool from regular people for what reason?

        • Grimy@lemmy.world · ↑16 ↓26 · 3 days ago (edited)

          The cost of the improvement doesn’t change the fact that it’s happening. I guess we could all play pretend instead if it makes you feel better about it. Don’t worry bro, the models are getting dumber!

          • underisk@lemmy.ml · ↑16 ↓2 · 3 days ago

            Don’t worry bro, the models are getting dumber!

            That would be pretty impressive when they already lack any intelligence at all.

          • Eranziel@lemmy.world · ↑6 · 3 days ago

            And I ask you: if those same trillions of dollars were instead spent on materially improving the lives of average people, how much more progress would we make as a society? This is an absolutely absurd sum of money we’re talking about here.

            • architect@thelemmy.club · ↑1 · 1 day ago

              None, because none of it would go to attempting to slow climate change. It would be dumped into consumption as always, instead of attempting to right this ship.

              The suffering is happening regardless.

              Your desire to delay it only leads to more suffering.

              Y’all are mourning a what-if that was never in the cards for us.

            • Grimy@lemmy.world · ↑6 ↓6 · 3 days ago (edited)

              It’s beside the point. I’m simply saying that AI will improve in the next year. The cost of doing so, or all the other things that money could be spent on, doesn’t matter when it’s clearly going to be spent on AI. I’m not in charge of monetary policy anywhere; I have no say in the matter. I’m just pushing back on the fantasies. I’m hoping the open source scene survives so we don’t end up in some ugly dystopia where all AI is controlled by a handful of companies.

              • SabinStargem@lemmy.today · ↑3 ↓2 · 2 days ago

                I have the impression that anti-AI people don’t understand that they are giving up agency for the sake of temporary feels. If they truly cared about ethical usage of AI, they would be wanting to have mastery that is at least equal to that of corporations and the 1%.

                Making AI into a public good is key to a better future.

                • architect@thelemmy.club · ↑2 · 1 day ago

                  They are having an emotional reaction to this situation so it’s all irrational.

                  I guess we need to force them to think about what they actually want, because the utopian ideal of putting AI back in the bag is NOT happening, and they’d best not take it away from the poor and working class while leaving the powerful free rein over it.

                  That is the most stupid position you can take on this. Absolutely the most short sighted thought. People need to stop and think logically about this.

          • mcv@lemmy.zip · ↑6 · 3 days ago

            They might. The amount of money they’re pumping into this is absolutely staggering. I don’t see how they’re going to make all of that money back, unless they manage to replace nearly all employees.

            Either way it’s going to be a disaster: mass unemployment or the largest companies in the world collapsing.

            • SabinStargem@lemmy.today · ↑2 · 2 days ago

              I dunno, the death of mega corporations would do the world a great deal of good. Healthier capitalism requires competition, and a handful of corporations of any given sector isn’t going to seriously compete nor pay good wages.

              • mcv@lemmy.zip · ↑2 · 2 days ago

                It’s certainly the option I’m rooting for, but it would still be a massive drama and disrupt a lot of lives. Which is why they’ll probably get bailed out with taxpayer money.

                • architect@thelemmy.club · ↑1 · 1 day ago

                  Maybe but they also know the fiat currency will collapse sooner rather than later, too. That money is pointless and they are playing the game knowing that as a fact at this point.

      • PetteriPano@lemmy.world · ↑7 · 3 days ago

        My jr developer will eventually be familiar with the entire codebase and can make decisions with that in mind without me reminding them about details at every turn.

        LLMs would need massive context windows and/or custom training to compete with that. I’m sure we’ll get there eventually, but for now it seems far off. I think this bubble will have to burst and let hardware catch up with our ambitions. It’ll take a couple of decades.

  • antihumanitarian@lemmy.world · ↑9 ↓4 · 2 days ago

    So this article is basically a puff piece for Code Rabbit, a company that sells AI code review tooling/services. They studied 470 merge/pull requests, 320 AI and 150 human control. They don’t specify what projects, which model, or when, at least without signing up to get their full “white paper”. For all that’s said this could be GPT 4 from 2024.

    I’m a professional developer, and currently by volume I’m confident latest models, Claude 4.5 Opus, GPT 5.2, Gemini 3 Pro, are able to write better, cleaner code than me. They still need high level and architectural guidance, and sometimes overt intervention, but on average they can do it better, faster, and cheaper than me.

    A lot of articles and forums posts like this feel like cope. I’m not happy about it, but pretending it’s not happening isn’t gonna keep me employed.

    Source of the article: https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report

    • hark@lemmy.world · ↑3 · 23 hours ago

      I’m a professional developer, and currently by volume I’m confident latest models, Claude 4.5 Opus, GPT 5.2, Gemini 3 Pro, are able to write better, cleaner code than me.

      I have also used the latest models and found that I’ve had to make extensive changes to clean up the mess they produce. Even when the code functions correctly, it’s often inefficient, poorly laid out, and inconsistent and sloppy in style. Am I just bad at prompting, or is your code just that terrible?

      • antihumanitarian@lemmy.world · ↑1 · 22 hours ago

        The vast majority of my experience is with Claude Code, first with Sonnet 4.5 and now Opus 4.5. I usually have detailed design documents going in, have it follow TDD, and use very brownfield designs and/or off-the-shelf components. Some of them I call glue apps, since they mostly connect well-covered patterns. Giving them access to search engines, webpage-to-markdown, and in general the ability to do everything within their Docker sandbox is also critical, especially with newer libraries.

        So on further reflection, I’ve tuned the process to avoid what they’re bad at and lean into what they’re good at.

      • iglou@programming.dev · ↑9 ↓1 · 2 days ago

      I am a professional software engineer, and my experience is the complete opposite. It does it faster and cheaper, yes, but also noticeably worse, and having to proofread the output, fix and refactor ends up taking more time than I would have taken writing it myself.

      • antihumanitarian@lemmy.world · ↑1 · 22 hours ago

        A later commenter mentioned an AI version of TDD, and I lean heavily into that. I structure the process so it’s explicit which observable outcomes need to work before it returns, and it needs to actually run tests to validate that they work. Because otherwise, yeah, I’ve had them fail so hard they report total success when the program can’t even compile.

        The setup that has helped with a lot of the shortcomings: thorough design, development, and technical docs; Claude Code with Claude 4.5 Sonnet, then Opus; and search and other web tools. Brownfield designs and off-the-shelf components help a lot, keeping in mind that quality depends on tasks being in distribution.
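
The test-first loop described above can be sketched like this: pin the observable outcome in a test before any implementation exists, then iterate (yourself or via a model) until it passes. The `slugify` function is a made-up example for illustration, not from the actual projects discussed:

```python
import re

# Step 1: write the test first. It pins the observable outcome and fails
# until an implementation exists, so "it compiles and the tests pass" is
# a checkable claim rather than a model self-report.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2: only then write (or have the model write) code until the test passes.
def slugify(text):
    # Lowercase, collapse runs of non-alphanumerics into "-", trim the ends.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
```

The point of the ordering is that a model cannot report success without the pinned assertion actually passing.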

      • GenosseFlosse@feddit.org · ↑2 ↓1 · 2 days ago (edited)

        In web development it’s impossible to remember every function, parameter, syntax detail and quirk for PHP, HTML, JavaScript, jQuery, Vue.js, CSS and whatever other code exists in this legacy project. AI really helps when you can divide your tasks into smaller steps and functions, describe exactly what you need, and have a rough idea of how the resulting code should work. If something looks funky I can ask it to explain, or to do the same thing some other way.

        • iglou@programming.dev · ↑1 ↓1 · 23 hours ago

          And now, instead of understanding the functions, parameters, syntax and quirks yourself to be able to produce quality code, which is the job of a software engineer, you ask an LLM to spit out code that seems to work, do that again, and again, and again, and call it a day.

          And then I’ll be hired to fix it.