• wewbull@feddit.uk · 19 points · 12 hours ago

    This is what you get when AI fanaticism combines with Rust fanaticism.

    1 million lines a month is 2-ish lines per second. That “engineer” is just someone to blame when things don’t work; they aren’t going to be contributing anything.
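    As a rough sanity check on that figure (a back-of-envelope sketch, assuming a 40-hour work week, i.e. about 160 working hours in a month):

```rust
// Back-of-envelope arithmetic behind the "2-ish lines per second" figure.
// Assumption: ~160 working hours per month (40-hour weeks).
fn main() {
    let lines_per_month = 1_000_000.0_f64;
    let working_seconds = 160.0 * 3600.0; // 576,000 s
    let rate = lines_per_month / working_seconds;
    println!("{:.2} lines per second", rate); // prints "1.74 lines per second"
}
```

    Counting every second of a calendar month instead would bring it below half a line per second, but either way it is a sustained rate no human reviews, let alone writes.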

    • tyrant@lemmy.world · 5 points · edited · 9 hours ago

      I was about to say that surely it’s not just 1 person they are talking about. Then I read, “Our North Star is ‘1 engineer, 1 month, 1 million lines of code.’”

      WTF

    • ranzispa@mander.xyz · +1/−6 · 11 hours ago

      I mean, if this is true and it works, it’s not too far-fetched. You’d mostly be checking that the tests still make sense and that they pass.

      Microsoft scientists have worked on a tool that automatically converts some C code to Rust.
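      For a sense of what such a translator is up against, here is a hypothetical illustration (the function is made up, not taken from Microsoft’s tool): a C signature passing a pointer and a length carries no ownership or aliasing information, so the translator has to choose a safe Rust equivalent on its own.

```rust
// Hypothetical example of a C-to-Rust translation decision.
// A C routine like:
//
//   void scale(int *buf, size_t len, int k) {
//       for (size_t i = 0; i < len; i++) buf[i] *= k;
//   }
//
// says nothing about who owns `buf`. A translator must decide whether
// it becomes a borrowed slice, an owned Vec, or a raw pointer; the
// safe, idiomatic choice here is a mutable slice:
fn scale(buf: &mut [i32], k: i32) {
    for x in buf.iter_mut() {
        *x *= k;
    }
}

fn main() {
    let mut v = [1, 2, 3];
    scale(&mut v, 10);
    assert_eq!(v, [10, 20, 30]);
}
```

      Making that kind of choice correctly across a whole codebase is exactly where the disagreement in this thread lies.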

        • ranzispa@mander.xyz · 1 point · 2 hours ago

          No, you go to your manager and be like: your machine for turning C code into Rust code does not work. If you want to keep the pace of 1M LOC per month and keep your boss happy, I need double pay and 10 people working on it at all times.

          • cheesybuddha@lemmy.world · 1 point · 2 hours ago

            But when your boss tells you that you have to keep doing it this way, you don’t have much choice in the matter. You either keep asking the AI for new code and hope it gets it right, or you have to actually delve into the code and spend your time correcting it.

            The 1 million lines of code is just untenable, assuming they want code that actually works.

            • ranzispa@mander.xyz · 1 point · 2 hours ago

              Well, if that’s the case, you do the job the way you yourself judge best. Maybe that tool is good at some tasks, and you apply it to those. Bill Gates will be sad for a couple of months and then likely forget about the expectations that were set, and you’ve got yourself a stable job with a safe position for years to come.

      • Passerby6497@lemmy.world · 3 points · 6 hours ago

        “You’d mostly be checking that tests still make sense and that they pass.”

        Nah, my experience is that most of your time is spent finding out which parameter or function call they made up, because it’s mathematically a good answer.

      • Deestan@lemmy.world · 7 points · 9 hours ago

        The expensive autocomplete can’t do this.

        AI marketing wants us all to believe that spoon technology is this close to space flight. We just need to engrave the spoons better. And gold-plate them thicker.

        The dude who wrote that doesn’t understand how LLMs work, how Rust works, or how C works, and clearly knows jack shit about programming in general.

        Rewriting from one paradigm to another isn’t something you can delegate to a million monkeys shitting into typewriters. The core, time-consuming part of the work requires skilled architectural coding.

        • cheesybuddha@lemmy.world · 2 points · 2 hours ago

          LLMs are, by the nature of how they work, only able to achieve 90–95% accuracy. That’s the theoretical best they can do, according to the people behind OpenAI. Worse, the output is presented as 100% accurate, even going so far as to make up sources out of whole cloth.

          That’s an insane and completely unacceptable error rate for any system even pretending to be mission-critical.

          Can you imagine sending people to space with a system that has a 1-in-20 chance of being completely unfit for service?
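          And the error rate compounds. A hypothetical back-of-envelope, taking the 95% figure at face value and assuming each generated unit (say, a function) fails independently:

```rust
// If each generated unit is correct with probability p, and errors are
// independent, the chance that all n units are correct is p^n.
// Assumption for illustration: p = 0.95, as claimed above.
fn all_correct(p: f64, n: u32) -> f64 {
    p.powi(n as i32)
}

fn main() {
    println!("{:.4}", all_correct(0.95, 20));  // prints "0.3585"
    println!("{:.6}", all_correct(0.95, 100)); // prints "0.005921"
}
```

          At 100 units you are already below a 1% chance of a fully correct result, never mind a million lines.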

        • ranzispa@mander.xyz · 1 point · 9 hours ago

          Well, in that case they’re overstating their capabilities. Which is not too surprising.