• atzanteol@sh.itjust.works · 10 days ago

    The claims that AI will surpass humans in programming are pretty ridiculous. But let’s be honest - most programming is rather mundane.

    • ulterno@programming.dev · 10 days ago

      Never have I had to implement some ridiculous algorithm to pass tests on huge amounts of data in the least possible memory, the way competitive-programming websites suggest.

      It has been mostly about:

      • Finding the correct library for a job and understanding it well, to prevent footguns and blocking future features
      • Design patterns for better build times
      • Making sane UI options and deciding resource alloc/dealloc points that would match user interaction expectations
      • cmake

      But then again, I haven’t worked at FinTech or Big Data companies, nor have I built an SQL server.

      • magikmw@lemm.ee · 10 days ago

        Because actually writing code is the least important part of programming.

        • ulterno@programming.dev · 10 days ago

          There are times when I wish I were better at regexes and scripting.
          Times when I am writing the same kind of thing again and again, each instance just different enough (and the repetitions few enough) that writing a script doesn’t seem worthwhile.

          At those times, I tend to think - maybe Cursor would have done this part well - but have no real idea since I have never used it.

          On the other hand, if I had a scripting endpoint from clang[1], I would have used it to build a batch processor for even as few as 10 repetitions, and wouldn’t have given AI a second thought.


          1. One that would have tagged parts of the code (in the same sense as “parts of speech”): function declaration, return type, function name, type qualifier, etc. ↩︎
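
          As a stand-in for that missing endpoint, here is a deliberately naive Python sketch (regexes can’t truly parse C; a real version would use libclang, and the pattern and names here are purely illustrative) that tags function declarations the way footnote 1 describes:

```python
import re

# Naive pattern for a C-style function declaration: optional storage
# qualifier, return type, function name, parameter list. Illustrative
# only - a real tool would use a proper parser such as libclang.
FUNC_DECL = re.compile(
    r'^\s*(?P<qualifiers>(?:static|inline|extern)\s+)?'
    r'(?P<return_type>[A-Za-z_][A-Za-z0-9_*\s]*?)\s+'
    r'(?P<name>[A-Za-z_][A-Za-z0-9_]*)\s*\((?P<params>[^)]*)\)\s*;',
    re.MULTILINE,
)

def tag_declarations(source: str):
    """Return (name, return type, qualifiers) for each declaration found."""
    return [
        (m.group('name'),
         m.group('return_type').strip(),
         (m.group('qualifiers') or '').strip())
        for m in FUNC_DECL.finditer(source)
    ]

header = """
static int add(int a, int b);
void log_message(const char *msg);
"""
print(tag_declarations(header))
# → [('add', 'int', 'static'), ('log_message', 'void', '')]
```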

    • wetbeardhairs@lemmy.dbzer0.com · 10 days ago

      Well, this kind of AI won’t ever be useful as a programmer. It doesn’t think. It doesn’t reason. It can’t make decisions; it just uses a ton of computational power and enormous deep neural networks to shit out a series of words that seem like they should follow your prompt. An LLM is just a really, really good next-word guesser.
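
      That “next-word guesser” idea can be caricatured in a few lines of Python - a toy bigram model, nothing like a real transformer, purely to illustrate the mechanism:

```python
from collections import Counter, defaultdict

# Toy caricature of next-word guessing: count which word follows which
# in a tiny corpus, then always emit the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(guess_next("the"))  # → cat  ("cat" follows "the" twice, others once)
```

      A real LLM replaces these word counts with a neural network over long contexts, but the output is still the most plausible continuation, not a reasoned answer.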

      So when you ask it to solve the Tower of Hanoi problem, great, it can do that, because it has seen someone else’s answer. But if you ask it to solve the puzzle for a tower that is 20 disks high, it will fail, because no one ever talks about going that far, and it flounders. It’s not actually reasoning its way to a solution - it’s regurgitating answers it has ingested from stolen internet conversations. It’s not even attempting to solve the general case, because it isn’t trying to solve the problem; it’s responding to your prompt.
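
      For scale, the classical recursion for the puzzle is tiny and handles 20 disks instantly - a minimal Python sketch:

```python
def hanoi(n, src, dst, aux, moves):
    """Move n disks from src to dst via aux, recording each move."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)  # park the top n-1 disks on aux
    moves.append((src, dst))            # move the largest remaining disk
    hanoi(n - 1, aux, dst, src, moves)  # stack the n-1 disks back on top

moves = []
hanoi(20, 'A', 'C', 'B', moves)
print(len(moves))  # → 1048575 moves, i.e. 2**20 - 1
```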

      That said - an LLM is also great as an interface to allow natural language and code as prompts for other tools. This is where the actually productive advancements will be made. Those tools are garbage today but they’ll certainly improve.

    • Ledivin@lemmy.world · 10 days ago

      My productivity has at least tripled since I started using Cursor. People are actually underestimating the effects that AI will have on the industry.

      • PushButton@lemmy.world · 10 days ago

        It means the AI is very helpful to you. It also means you are only as good as 1/3 of an AI at coding…

        Which is not great news for you, mate.

        • atzanteol@sh.itjust.works · 9 days ago

          Ah, knock it off. Jesus, you sound like people in the ’90s mocking “IntelliSense” in the IDE as somehow making programmers “less real programmers”.

          It’s all needless gatekeeping and purity-test BS. Use tools that are useful. Don’t worry about whether it makes you less of a man.

          • Feyd@programming.dev · 9 days ago

            It’s not gatekeeping, it’s true. I know devs who say AI tools are useful, but all the ones who say it makes them multiple times more productive are actually doing negative work, because I have to deal with their terrible code that they don’t even understand.

            • atzanteol@sh.itjust.works · 9 days ago

              The devs I know use it as a tool and check their work and fully understand the code they’ve produced.

              So your experience vs. mine. I suspect you just work with shitty developers who would be producing shitty work whether they were using AI or not.

            • Ledivin@lemmy.world · 8 days ago (edited)

              I literally don’t write code anymore. I write detailed specs, invest a lot of time into my guardrails and integrations, and review changes from my agents. My code quality has not fallen; in fact, we’ve been able to be much more strict about our style guidelines.

              My job has changed completely, but the results are the same - simply much, much faster. And to be clear, this is in code bases that are hundreds of thousands of lines deep, across multiple massive monorepos, and using context from several different documentation sites - both internal and external.

              If anything, people are understating the effects this will have over the next year, let alone further. The entry-level IC dev is dead. If you aren’t producing at least twice as fast as you used to, you’re going to be left behind. I cannot possibly suggest strongly enough that you start learning how to use it.

        • Rikudou_Sage@lemmings.world · 9 days ago

          True. I use a local model from JetBrains that only completes a single line, and that’s my sweet spot: it usually guesses the line well and saves me some time without forcing me to read multiple lines of code I didn’t write.

  • daniskarma@lemmy.dbzer0.com · 10 days ago

    They have their uses. For instance, the other day I needed to read some assembly and decompiled C - you know how fun that can be. The LLM proved quite good at translating it to English, and it really sped up the process.

    Writing it back wasn’t as good, though - just good enough to point me in a direction; I still ended up writing the patcher mostly by myself.

    • Lemminary@lemmy.world · 10 days ago

      the other day I needed to read some assembly and decompiled C

      As one casually does, lol. Jokes aside, that’s pretty cool. I wish I had the technical know-how and, most importantly, the patience for it.

      • mormund@feddit.org · 10 days ago

        If you’re interested in getting into it, download Ghidra and open an older program/game in it that you like. The decompiler is pretty amazing imo, so you rarely have to look at the assembly. But it also cross-references them so you can look at the decompiled C Code and the associated assembly. It’s pretty fun 😊

      • FizzyOrange@programming.dev · 10 days ago (edited)

        Assembly is very simple (at least RISC-V assembly, which is what I mostly work with) but also very tedious to read. It doesn’t help that the people who choose the instruction mnemonics have extremely poor taste - e.g. lb, lh, lw, ld instead of load8, load16, load32, load64. Or j instead of jump. Who needs to save characters that much?

        The over-abbreviation is some kind of weird flaw that hardware guys all have. I wondered if it comes from labelling pins on PCB silkscreens (MISO, CLK etc)… Or maybe they just have bad taste.

        I once worked on a chip that had nested acronyms.

        • Lemminary@lemmy.world · 10 days ago

          The over-abbreviation is some kind of weird flaw that hardware guys all have

          My bet is on the teaching methods in uni. From what I’ve seen, older teaching materials use variable names that would be terrible in a production environment. I think it unfortunately sticks because students get used to it and find it easier and faster than typing things out.

        • amorpheus@lemmy.world · 10 days ago

          Who needs to save characters that much?

          Do you realize how old assembly language is?

          It predates hard disks by ten years and coincided with the invention of the transistor.

          • FizzyOrange@programming.dev · 10 days ago

            Do you realize how old assembly language is?

            Do you? These instructions were created in 2011.

            It predates hard disks by ten years and coincided with the invention of the transistor.

            I’m not sure what the very first assembly language has to do with RISC-V assembly?

  • Modern_medicine_isnt@lemmy.world · 10 days ago

    Fortunately, 90% of coding is not hard problems. We write the same crap over and over. How many different create-an-account and sign-in flows do we really need? Yet there seems to be an infinite number of them, each with its own bugs.

  • Stubb@lemmy.sdf.org · 10 days ago

    I’ve found that AI is only good at solving programming problems that are relatively “small picture”, or that concern the basics of a language. Anything else it provides a solution for, you will have to rewrite completely once you consult the language’s standards and best practices.

    • Rikudou_Sage@lemmings.world · 9 days ago

      Well, I recently ran kind of an experiment: writing a kids’ game in Kotlin without ever having used the language. It was surprisingly easy. I guess it helps that I’m fluent in ~5 other programming languages, because I could tell what looked obviously wrong.

      My conclusion is basically that it’s a really great help if you already know programming in general.

    • funkless_eck@sh.itjust.works · 9 days ago

      There aren’t that many, if you’re talking specifically about LLMs - but ML+AI is more than LLMs.

      Not a defence or an indictment of either side; people just tend to confuse the terms “LLM” and “AI”.

      I think there could be worth in AI for identification tasks (what insect is this, find the photo I took of the receipt for my train ticket last month, order these chemicals from lowest to highest pH…) - but LLMs are only part of that stack (the input and output), which isn’t going to produce many massive breakthroughs week to week.

      • Glitchvid@lemmy.world · 9 days ago

        The recent boom in neural net research will have real applicable results that are genuine progress: signal processing (e.g. noise removal), optical character recognition, transcription, and more.

        However, the biggest hype area, with what I see as the smallest real return, is the huge-model LLM space, which basically tries to portray AGI as just around the corner. LLMs will have real applications in summarization, but otherwise they largely just generate asymptotically plausible babble: very good for filling the Internet with slop, not actually useful for replacing all the positions OAI et al. need it to replace (for their funding to be justified).

    • finitebanjo@lemmy.world · 9 days ago (edited)

      Because Lemmy is more representative of scientists and the underprivileged, while other media are more representative of celebrities and of the people who can afford them, like hedge funds or tech monopolies.