• fodor@lemmy.zip
    5 hours ago

    All of the examples are commercial products. The author doesn’t know or doesn’t realize that this is a capitalist problem. Of course, there is bloat in some open source projects. But nothing like what is described in those examples.

    And I don’t think you can avoid that if you’re a capitalist. You make money by adding features that maybe nobody wants. And you need to keep doing something new. Maintenance doesn’t make you any money.

    So this looks like AI plus capitalism.

    • vacuumflower@lemmy.sdf.org
      3 hours ago

      “Open source” is not contradictory to “capitalist”; it just involves a fair bit of industry alliances and/or freeloading.

      • Jakeroxs@sh.itjust.works
        6 minutes ago

        It absolutely is contradictory to the majority of capitalists, unless it still somehow directly benefits them monetarily.

  • squaresinger@lemmy.world
    8 hours ago

    The article is very much off point.

    • Software quality didn’t suddenly decline after being great in 2018. It has been as shit as legally possible since the dawn of (programming) time.
    • The software crisis has never ended. It has only been increasing in severity.
    • Ever since, we have been trying to squeeze more output out of software developers at the cost of runtime performance.

    The main issue is the software crisis: hardware performance follows Moore’s law, while developer performance stays mostly constant.

    If the memory of your computer is counted in bytes without an SI prefix and your CPU has maybe a dozen or two instructions, then it’s possible for a single human being to comprehend everything the computer is doing and to program it very close to optimally.

    The same is not possible if your computer has subsystems upon subsystems, and even the keyboard controller has more power and complexity than the whole Apollo program combined.

    So to program exponentially more complex systems we would need an exponentially bigger software development budget. But since it’s really hard to scale software developers exponentially, we’ve been using abstraction layers to hide complexity, to share and re-use work (no need for everyone to re-invent the templating engine) and to have clear boundaries that allow for better cooperation.

    That was the case well before Electron. Compiled languages started the trend, languages like Java or C# deepened it, and modern middleware and frameworks just accelerated it.

    OOP complains about the chain “React → Electron → Chromium → Docker → Kubernetes → VM → managed DB → API gateways”. But he doesn’t even consider that even if you run “straight on bare metal” there’s a whole stack of abstractions in between your code and the execution. Every major component inside a PC nowadays runs its own separate dedicated OS that neither the end user nor the developer of ordinary software ever sees.

    But the main issue always comes back to the software crisis. If we had infinite developer resources we could write optimal software. But we don’t, so we can’t, and thus we put in abstraction layers to improve ease of use for developers, because otherwise we would never ship anything.

    If you want to complain, complain to the managers who don’t allocate enough resources, to the investors who don’t want to dump millions into the development of simple programs, and to the customers who aren’t OK with simple things but want modern cutting-edge everything in their programs.

    In the end it’s sadly really the case: memory and performance get exponentially cheaper, while developers are still mere humans whose output stays largely constant.

    So which of these two values SHOULD we optimize for?


    The real problem in regards to software quality is not abstraction layers but “business agile” (as in “business doesn’t need to make any long term plans but can cancel or change anything at any time”) and lack of QA budget.

    • 0x0@lemmy.zip
      2 hours ago

      we would need exponentially more software developer budget.

      Are you crazy? Profit goes to shareholders, not to invest in the project. Get real.

    • Valmond@lemmy.world
      3 hours ago

      Yeah, that’s what I hate about the agile way of dealing with things. Business wants prototypes ASAP, but if one is actually deemed useful, you have no budget to productize it. Which means that if you don’t want to take all the blame for a crappy app, you have to invest heavily in all of the prototypes. Prototypes that are called next-gen projects, but get cancelled nine times out of ten 🤷🏻‍♀️. Make it make sense.

      • squaresinger@lemmy.world
        3 hours ago

        This. Prototypes should never be taken as the basis of a product; that’s why you make them: to make mistakes in a cheap, discardable format, so that you don’t make those mistakes when building the actual product. I can’t remember a single time, though, when that’s what actually happened.

        They just label the prototype an MVP and suddenly it’s the basis of a new project with a 20-year runtime.

        In my current job, they keep switching around everything all the time. Got a new product, super urgent, super high-profile, highest priority, crunch time to get it out in time, and two weeks before launch it gets cancelled without further information. Because we are agile.

    • BillBurBaggins@lemmy.world
      10 hours ago

      And you can’t even zoom into the images on mobile. Maybe it’s harder than they think, if they can’t even pick their blogging site without bugs.

  • vane@lemmy.world
    14 hours ago

    Quality in this economy? We need to fire some people to cut costs and use telemetry to make sure everyone who’s left uses AI to pay AI companies, because our investors demand it, because they invested all their money in AI and see no return.

  • themaninblack@lemmy.world
    13 hours ago

    Being obtuse for a moment, let me just say: build it right!

    That means minimalism! No architecture astronauts! No unnecessary abstraction! No premature optimisation!

    Lean on opinionated frameworks so as to focus on coding the business rules!

    And for the love of all that is holy, have your developers sit next to the people that will be using the software!

    All of this will inherently reduce runaway algorithmic complexity, prevent the sort of artisanal work that causes leakiness, and speed up your code.

  • PattyMcB@lemmy.world
    15 hours ago

    Non-technical hiring managers are a bane for developers (and probably bad for any company). Just saying.

  • afk_strats@lemmy.world
    19 hours ago

    Accept that quality matters more than velocity. Ship slower, ship working. The cost of fixing production disasters dwarfs the cost of proper development.

    This has been a struggle my entire career. Sometimes the company listens. Sometimes they don’t. It’s a worthwhile fight, but it is a systemic problem caused by management and short-term profit-seeking over healthy business growth.

    • dual_sport_dork 🐧🗡️@lemmy.world
      18 hours ago

      “Apparently there’s never the money to do it right, but somehow there’s always the money to do it twice.”

      Management never likes to have this brought to their attention, especially in a Told You So tone of voice. One thinks if this bothered pointy-haired types so much, maybe they could learn from their mistakes once in a while.

      • ozymandias117@lemmy.world
        17 hours ago

        We’ll just set up another retrospective meeting and have a lessons learned.

        Then we won’t change anything based off the findings of the retro and lessons learned.

        • PattyMcB@lemmy.world
          15 hours ago

          Post-mortems always seemed like a waste of time to me, because nobody ever went back and read that particular Confluence page (especially the executives who made the same mistake again).

          • shalafi@lemmy.world
            15 hours ago

            Post mortems are for, “Remember when we saw something similar before? What happened and how did we handle it?”

    • ryathal@sh.itjust.works
      14 hours ago

      There’s levels to it. Perfect quality isn’t worth the cost, but absolute garbage costs a lot too. Some level that mostly works is the sweet spot.

    • Blackmist@feddit.uk
      15 hours ago

      The sad thing is that velocity pays the bills. Quality, it seems, doesn’t matter a shit, and when it does, you can just patch up the bits people noticed.

      • _stranger_@lemmy.world
        15 hours ago

        This is survivorship bias. There’s probably uncountable shitty software that never got adopted. Hell, the E.T. video game was famous for it.

        • Blackmist@feddit.uk
          7 hours ago

          I don’t make games, but fine. Baldur’s Gate 3 (PS5 co-op) and Skyrim (Xbox 360) had more crashes than any games I’ve ever played.

          Did that stop either of them being highly rated top selling games? No. Did it stop me enjoying them? No.

          Quality feels important, but past a certain point, it really isn’t. Luck, knowing the market, maneuverability. This will get you most of the way there. Look at Fortnite. It was a wonky building game they quickly cobbled into a PUBG clone.

          • _stranger_@lemmy.world
            1 hour ago

            I love crappy slapped-together indie games. Headliners and Peak come to mind. Both have tons of bugs, but the quality is there where it matters. Peak has a very unique health bar system I love, and Headliners is constantly working on balance and fun, not on graphics or collision bugs. Both of those teams had very limited resources and spent them where they matter: on high-quality mechanics that are fun to play.

            Skyrim is old enough to drive a car now, but back then its main mechanic was the open-world hugeness. They made damn sure to cram that world full of tons of stuff to do, and so for the most part people forgave bugs that didn’t detract from that core experience.

            BG3 was basically perfect. I remember some bugs early on but that’s a very high quality game. If you’re expecting every game you play to live up to that bar, you’re going to be very disappointed.

            Quality does matter, but it only matters when it’s core to the experience. No one is going to care if your first-person-shooter with tons of lag and shitty controls has an amazing interactive menu and beautiful animations.

            It’s not the amount of quality, it’s where you apply it.

            (I’ve had that robot game that came with the PS5 crash on me, but Folding@home on the PS3 never did, imagine that)

  • chunes@lemmy.world
    20 hours ago

    Software has a serious “one more lane will fix traffic” problem.

    Don’t give programmers better hardware or else they will write worse software. End of.

    • nelson@lemmy.world
      20 hours ago

      This is very true. You don’t need a bigger database server, you need an index on that table you query all the time that’s doing full table scans.
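
      A minimal sketch of that difference, using Python’s built-in sqlite3; the table, column, and query here are made up purely for illustration:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
      conn.executemany(
          "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
          [(i % 1000, i * 0.5) for i in range(100_000)],
      )

      # Without an index, the planner has to scan every row.
      print(conn.execute(
          "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
      ).fetchall())  # detail column reads roughly: SCAN orders

      # One index turns the same query into a cheap lookup.
      conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
      print(conn.execute(
          "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
      ).fetchall())  # detail column reads roughly: SEARCH orders USING INDEX idx_orders_customer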

      • GenosseFlosse@feddit.org
        16 hours ago

        You’ve never worked on old code. It’s never that simple in practice when you have to make changes to existing code without breaking or rewriting everything.

        Sometimes the client wants a new feature that can’t easily be implemented and needs a lot of different DB lookups that you cannot do in a single query. Sometimes your controller loops over 10,000 DB records and a function 3 levels down suddenly must spawn a new DB query each time it’s called, but you cannot change the parent DB query.
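
        A hedged sketch of that pattern (all names are hypothetical, not anyone’s real code): the loop itself looks harmless, but the helper called a few levels down fires its own query per record, the classic N+1 problem.

        def get_recent_orders(conn):
            # Parent query, owned by other code; assume it cannot be changed.
            return conn.execute("SELECT id, customer_id FROM orders LIMIT 10000").fetchall()

        def get_customer_discount(conn, customer_id):
            # Called a few levels down for a new feature: one extra query per record,
            # i.e. 10,000 extra queries for the loop below.
            row = conn.execute("SELECT discount FROM customers WHERE id = ?", (customer_id,)).fetchone()
            return row[0] if row else 0.0

        def build_report(conn):
            total = 0.0
            for _order_id, customer_id in get_recent_orders(conn):
                total += get_customer_discount(conn, customer_id)
            return total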

        • nelson@lemmy.world
          3 hours ago

          Where is this even coming from? The guy above me is saying not to give devs better hardware and to teach them to code better.

          I followed up with an example of how using indices in a database to boost the performance helped more than throwing more hardware at it.

          This has nothing to do with having worked on old code. Stop trying to pull my comment out of context.

          But yes you’re right. Adding indexes to a database does nothing to solve adding a new feature in the scenario you described. I also never claimed it did.

        • sugar_in_your_tea@sh.itjust.works
          3 hours ago

          but you cannot change the parent DB query.

          Why not?

          This sounds like the “don’t touch working code” nonsense I hear from junior devs and contracted teams. They’re so worried about creating bugs that they don’t fix larger issues and more and more code gets enshrined as “untouchable.” IMO, the older and less understood logic is, the more it needs to be touched so we can expose the bugs.

          Here’s what should happen, depending on when you find it:

          • grooming/research phase - increase estimates enough to fix it
          • development phase - ask senior dev for priority; most likely, you work around it for now, but schedule a fix once feature complete; if it’s significant enough, timelines may be adjusted
          • testing phase/hotfix - same as dev, but much more likely to put it off

          Teams should have a budget for tech debt, and seniors can adjust what tech debt they pick.

          In general though, if you’re afraid to touch something, you should touch it, but only if you budget time for it.

  • IrateAnteater@sh.itjust.works
    24 hours ago

    I think a substantial part of the problem is the employee turnover rate in the industry. It seems to be just accepted that everyone is going to jump to another company every couple of years (usually because companies don’t give adequate raises). This leads to a situation where, consciously or subconsciously, no one really gives a shit about the product. Everyone does their job (and only their job, not a hint of anything extra), but they’re not going to take on major long-term projects, because they already have one foot out the door, looking for the next job. Shitty middle management of course drastically exacerbates the issue.

    I think that’s why there’s a lot of open source software that’s better than the corporate stuff. Half the time it’s just one person working on it, but they actually give a shit.

    • JustEnoughDucks@feddit.nl
      3 hours ago

      True, but this is a reaction to companies discarding their employees at the drop of a hat, and only for “increasing YoY profit”.

      It is a defense mechanism that has now become cultural in a huge amount of countries.

    • HaraldvonBlauzahn@feddit.org
      9 hours ago

      It seems to be just accepted that everyone is going to jump to another company every couple years (usually due to companies not giving adequate raises).

      Well. I did the last jump because the quality was so bad.

    • MotoAsh@piefed.social
      22 hours ago

      Definitely part of it. The other part is that soooo many companies hire shit idiots out of college. Sure, they have a degree, but in many cases they’ve barely grasped the concept of deep logic in four years and have virtually zero experience with ANY major framework or library.

      Then dumb management puts them on tasks they’re not qualified for, add on that Agile development means “don’t solve any problem you don’t have to” for some fools, and… the result is an entire industry full of functional idiots.

      It’s the same problem as late-stage capitalism… Executives focus on money over longevity and the economy becomes way more tumultuous. The industry focuses way too hard on “move fast and break things” rather than on making quality, and… here we are, discussing how the industry has become shit.

      • sp3ctr4l@lemmy.dbzer0.com
        20 hours ago

        Shit idiots with enthusiasm could be trained, mentored, molded into assets for the company, by the company.

        Along the lines of an apprenticeship structure or something similar, like how you need X years before you’re a journeyman in many hands-on trades.

        But uh, nope, C suite could order something like that be implemented at any time.

        They don’t though.

        Because that would make next quarter projections not look as good.

        And because that would require actual leadership.

        This used to be how things largely worked in the software industry.

        But, as with many other industries, finance now runs everything, and they’re trapped in a system of their own making… except they’re not really trapped, because they’ll still get a golden parachute no matter what happens. Everyone else suffers, so that’s fine.

        • MotoAsh@piefed.social
          18 hours ago

          Exactly. I don’t know why I’m being downvoted for describing the thing we all agree happens…

          I don’t blame the students for not being seasoned professionals. I clearly blame the executives that constantly replace seasoned engineers with fresh hires they don’t have to pay as much.

          Then everyone surprise-pikachu-faces when crap is the result… “Functional idiots” is absolutely correct for the reality we’re all staring at. I am directly part of this industry, so this is meant more as honest retrospective than baseless name-calling. What happens these days is idiocy.

          • sp3ctr4l@lemmy.dbzer0.com
            17 hours ago

            Yep, literal, functional idiots, as in: they keep doing provably stupid things, mainly because they are too stubborn to admit they could be wrong about anything.

            I used to be part of this industry, and I bailed, because the ratio of higher-ups I encountered anywhere who were competent at their jobs vs. arrogant lying assholes was about 1:9.

            Corpo tech culture is fucked.

            Makes me wanna chip in a little with a Johnny Silverhand solo.

            • MotoAsh@piefed.social
              13 hours ago

              Fuck man, why don’t more ethical-ish devs join to make stuff? What’s the missing link on top of easy sharing like FOSS kinda’ already has?

              Obviously programming is a bit niche, but fuck… how can ethical programmers come together to survive under capitalism? Sure, profit sharing and coops aren’t bad, but something of a cultural nexus is missing in this space it feels…

              • sp3ctr4l@lemmy.dbzer0.com
                7 hours ago

                Well, I’m not quite sure how to … intentionally create a cultural nexus … but I would say that having something like lemmy, piefed, the fediverse, is at least a good start.

                Socializing, discussion, via a non corpo platform.

                Beyond that, uh, maybe something more like an actual syndicalist collective, or at least a union?

      • Croquette@sh.itjust.works
        16 hours ago

        My hot take: lots of projects would benefit from a traditional project management cycle instead of trying to force Agile on every project.

        • MotoAsh@piefed.social
          13 hours ago

          Agile SHOULD have a lot of the things ‘traditional’ management looks for! Though so many, including many college teachers I’ve heard, think of it way too strictly.

          It’s just that the time scale shrinks as necessary for specific deliverable goals instead of the whole product… Instead of having a design for the whole thing from top to bottom, you start with a good overview and implement the general architecture to handle the load you’ll need. Then you break down the tasks, and solve the problems more and more, and yadda yadda…

          IMO, the people that think Agile Development means only implement the bare minimum … are part of the complete fucking idiot portion of the industry.

  • panda_abyss@lemmy.ca
    22 hours ago

    I’ve been working at a small company where I own a lot of the code base.

    I got my boss to accept slower initial work that was more systemically designed, and now I can complete projects that would have taken weeks in a few days.

    The level of consistency and quality you get by building a proper foundation and doing things right has an insane payoff. And users notice too when they’re using products that work consistently and with low resources.

    • Log in | Sign up@lemmy.world
      9 hours ago

      (I write only internal tools and I’m a team of one. We have a whole department of people working on public and customer focused stuff.)

      My boss let me spend three months with absolutely no changes to functionality or UI, just to build a better, more configurable back end with a brand new config UI, partly due to necessity (a server constraint changed), otherwise I don’t think it would have ever got off the ground as a project. No changes to master for three months, which was absolutely unheard of.

      At times it was a bit demoralising to do so much work for so long with nothing to show for it, but I knew the new back end would bring useful extras and faster, robust changes.

      The backend config UI is still in its infancy, but my boss is sooo pleased with its effect. For the last few years (the lifetime of the project) he was used to a turnaround of between 1 and 10 days for simple changes, but now he’s getting used to a reply saying I’ve pushed to live within 1 to 10 minutes.

      Brand new features still take time, but now that we really understand what it needs to do after the first few years, it was enormously helpful to structure the whole thing to be much more organised around real world demands and make it considerably more automatic.

      Feels good. Feels really good.

    • Telorand@reddthat.com
      21 hours ago

      This is one of the things that frustrates me about my current boss. He keeps talking about some future project that uses a new codebase we’re currently writing, at which point we’ll “clean it up and see what works and what doesn’t.” Meanwhile, he complains about my code and how it’s “too Pythonic,” what with my docstrings, functions for code reuse, and type hints.

      So I secretly maintain a second codebase with better documentation and optimization.
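
      For context, the style being complained about is roughly this: a small, reusable, type-hinted function with a docstring. This is a made-up example, not code from that codebase:

      def normalize_scores(scores: list[float], *, ndigits: int = 2) -> list[float]:
          """Scale scores into the 0..1 range and round them.

          Raises ValueError on an empty list so callers fail loudly
          instead of silently producing garbage.
          """
          if not scores:
              raise ValueError("scores must not be empty")
          peak = max(scores) or 1.0  # avoid division by zero if every score is 0
          return [round(s / peak, ndigits) for s in scores]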

      • panda_abyss@lemmy.ca
        21 hours ago

        How can your code be too pythonic?

        Also type hints are the shit. Nothing better than hitting shift tab and getting completions and documentation.

        Even if you’re planning to migrate to a hypothetical new code base, getting a bunch of documented modules for free is a huge time saver.

        Also migrations fucking suck, you’re an idiot if you think that will solve your problems.

  • kayazere@feddit.nl
    22 hours ago

    Another big problem not mentioned in the article is companies refusing to hire QA engineers to do actual testing before releasing.

    The last two American companies I worked for had fired all the QA engineers or refused to hire any. Engineers were supposed to “own” their features and test them themselves before release. It’s obvious that this can’t provide the same level of testing, and the software gets released full of bugs, with only the happy path working.

  • panda_abyss@lemmy.ca
    23 hours ago

    Fabricated 4,000 fake user profiles to cover up the deletion

    This has got to be a reinforcement learning issue, I had this happen the other day.

    I asked Claude to fix some tests, so it fixed the tests by commenting out the failures. I guess that’s a way of fixing them that nobody would ever ask for.
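
    For anyone who hasn’t seen this failure mode, the “fix” tends to look something like this hypothetical, reconstructed example (not the actual diff):

    def apply_discount(total: float, coupon: str) -> float:
        """Function under test; the bug is that the discount is never applied."""
        return total

    def test_checkout_applies_discount():
        result = apply_discount(100.0, "SAVE10")
        assert result > 0  # trivially true, so the suite goes green
        # The real checks were failing, so the model "fixed" them by commenting them out:
        # assert result == 90.0
        # assert result < 100.0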

    Absolutely moronic. These tools do this regularly. It’s how they pass benchmarks.

    Also you can’t ask them why they did something: they have no capacity for introspection, they can’t see the reasoning behind their earlier output, they just make up something that sounds plausible for “what were you thinking”.

    • FishFace@lemmy.world
      21 hours ago

      The model we have at work tries to work around this by including some checks. I assume they get farmed out to specialised models and receive the output of the first stage as input.

      Maybe it catches some stuff? It’s better than pretend reasoning, but it’s very verbose, so the stuff that I’ve experimented with - which should be simple and quick - ends up being more time-consuming than it should be.

      • panda_abyss@lemmy.ca
        19 hours ago

        I’ve been thinking of having a small model like a long context qwen 4b run and do quick code review to check for these issues, then just correct the main model.

        It feels like a secondary model that only exists to validate that a task was actually completed could work.
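
        A rough sketch of that idea; call_model() is just a stand-in for whatever inference API you actually use (local qwen, an HTTP endpoint), nothing here is a real library call:

        def call_model(name: str, prompt: str) -> str:
            # Placeholder for your actual inference call.
            raise NotImplementedError

        def generate_with_review(task: str, max_rounds: int = 2) -> str:
            draft = call_model("main-coder", task)
            for _ in range(max_rounds):
                review = call_model(
                    "qwen-4b-reviewer",
                    "Did this change actually complete the task, or did it dodge it "
                    "(e.g. by commenting out failing tests)? Reply OK or list problems.\n\n"
                    f"Task:\n{task}\n\nChange:\n{draft}",
                )
                if review.strip().upper().startswith("OK"):
                    break
                # Feed the reviewer's objections back to the main model and retry.
                draft = call_model("main-coder", f"{task}\n\nReviewer feedback:\n{review}")
            return draft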

        • FishFace@lemmy.world
          17 hours ago

          Yeah, it can work, because it’ll trigger the recall of different types of input data. But it’s not magic, and if the model you’re using has a 25% chance of hallucinating, you probably still end up with something like an 8.5% chance of getting bullshit after doing this.

  • Pika@sh.itjust.works
    23 hours ago

    I’m glad that they added CrowdStrike to that article, because it adds a whole extra level of incompetence in the software field. The CrowdStrike incident should never have happened in the first place, and wouldn’t have if Microsoft had properly enforced the stance they claim to have regarding driver security and the kernel.

    The entire reason CrowdStrike was able to create that systemic failure was that they were (still are?) abusing the system MS has in place for signing kernel-level drivers. The process dodges MS review of the driver by shipping a standalone certified driver that then live-patches itself, instead of requiring every update to be reviewed and certified. That setup allowed a live update to directly modify the kernel via the already-certified driver. Un-certified code should never have been allowed to be injected into such a privileged location in the first place. It was a failure on every level, for both MS and CrowdStrike.