Yeah, gonna be interesting. Software companies working on consumer software often don’t need to care, because:
They don’t need to buy the RAM that they’re filling up.
They’re not the only culprit on your PC.
Consumers don’t understand how RAM works nearly as well as they understand fuel.
And even when consumers understand that an application is using too much, they may not be able to switch to an alternative anyway; see, for example, the many chat applications written in Electron, none of which are interoperable.
I can see somewhat of a shift happening for software that companies develop for themselves, though. At $DAYJOB, we have an application written in Rust and you can practically see the dollar signs lighting up in the eyes of management when you tell them “just get the cheapest device to run it on” and “it’s hardly going to incur cloud hosting costs”.
Obviously this alone rarely leads to management deciding to rewrite an application/service in a more efficient language, but it certainly makes them more open to devs wanting to use these languages. Well, and who knows what happens if the prices for Raspberry Pis, cloud hosting and such end up skyrocketing similarly.
As a programmer myself I don’t care about RAM usage, just startup time. If it takes 10s to load 150MB into memory it’s a good case for putting in the work to reduce the RAM bloat.
I mean, don’t get me wrong, I also find startup time important, particularly with CLIs. But high memory usage slows down your application in other ways, too (not just other applications on the system). You will have more L1, L2 etc. cache misses. And the OS is more likely to page/swap out more of your memory onto the hard drive.
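To make the cache point a bit more concrete, here’s a contrived Rust sketch (the element count and the index scramble are arbitrary, and the actual timings obviously depend on your machine):

```rust
use std::time::Instant;

fn main() {
    // ~16M u64s, about 128 MB, far bigger than any CPU cache.
    const N: usize = 1 << 24;
    let data: Vec<u64> = (0..N as u64).collect();

    // Sequential access: cache lines and the prefetcher do most of the work.
    let t = Instant::now();
    let sequential: u64 = data.iter().sum();
    println!("sequential: {:?} (sum {})", t.elapsed(), sequential);

    // Scattered access: same additions, but almost every read misses the caches.
    // Cheap deterministic index scramble so the sketch has no dependencies.
    let mut order: Vec<usize> = (0..N).collect();
    for i in 0..N {
        order.swap(i, i.wrapping_mul(2654435761) % N);
    }
    let t = Instant::now();
    let scattered: u64 = order.iter().map(|&i| data[i]).sum();
    println!("scattered:  {:?} (sum {})", t.elapsed(), scattered);
}
```

Same additions both times, but once the working set blows past the caches, the scattered pass typically ends up several times slower, and that’s before swapping even enters the picture.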
Of course, I can’t sit in front of an application and tell that a particular slowness was caused by a non-local NUMA memory access either, so I can understand not really caring about incremental improvements. But yeah, that is also why I quite like using an efficient stack outright. It just makes computers feel as fast as they should be, without me having to worry about it.
Side-note
I heavily considered ending this comment with this dumbass meme:
Then I realized, I’m responding to someone called “Caveman”. Might’ve been subconscious influence there. 😅
But you repeat yourself.
Add to the list: doing native development most often means doing it twice. Native apps are better in pretty much every metric, but rarely are they so much better that management decides it’s worth doing the same work multiple times.
If you do native, you usually need a web version, Android, iOS, and if you are lucky you can develop Windows/Linux/Mac only once and only have to take the variation between them into account.
Do the same in Electron and a single reactive web version works for everything. It’s hard to justify multiple app development teams if a single one suffices.
At my last job we had a stretch where we were maintaining four different iOS versions of our software: different versions for iPhones and iPads, and for each of those one version in Objective-C and one in Swift. If anyone thinks “wow, that was totally unnecessary”, that should have been the name of my company.
“works” is a strong word
Works good enough for all that a manager cares about.
At this rate I suspect the best solution is to cram everything but the UI into a cross-platform library (written in, say, Rust), keep the UI code platform-specific, and call into the cross-platform library via FFI. If you’re big enough to do that, at least.
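A rough sketch of the shape of that, with made-up names (so not anyone’s actual API): the business logic is ordinary Rust compiled as a cdylib/staticlib per platform, and only a thin C-ABI layer is exposed for the Swift/Kotlin/C# UI to call. Tools like cbindgen or UniFFI can generate the platform-side bindings so you don’t hand-write them.

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

/// The actual business logic: ordinary, unit-testable Rust, no UI involved.
fn format_greeting(name: &str) -> String {
    format!("Hello, {name}!")
}

/// C-ABI wrapper for the platform UIs. The caller owns the returned string
/// and must hand it back via `core_string_free`.
#[no_mangle]
pub extern "C" fn core_format_greeting(name: *const c_char) -> *mut c_char {
    if name.is_null() {
        return std::ptr::null_mut();
    }
    let name = unsafe { CStr::from_ptr(name) }.to_string_lossy();
    CString::new(format_greeting(&name))
        .map(CString::into_raw)
        .unwrap_or(std::ptr::null_mut())
}

/// Hand ownership of a string produced above back to Rust so it can be freed.
#[no_mangle]
pub extern "C" fn core_string_free(s: *mut c_char) {
    if !s.is_null() {
        unsafe { drop(CString::from_raw(s)) };
    }
}
```

That way only the UI layer gets written more than once.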
This equation might change a bit as more software users learn how bloated apps affect their hardware upgrade frequency & costs over time. The RAM drought brings new incentive to teach and act on that knowledge.
Management might be a bit easier to convince when made to realize that efficiency translates to more customers, while bloat translates to fewer. In some cases, developing a native app might even mean gaining traction in a new market.
We have an enormous problem with software optimization both in cycles and memory costs. I would love for that to change but the vast majority of customers don’t care. It’s painful to think about but most don’t care as long as it works “good enough” which is a nebulous measure that management can use to lie to shareholders.
Even mentioning that we’ve wiped out roughly a decade in hardware gains with how bloated and slow our software is doesn’t move the needle. All of the younger devs in our teams truly see no issue. They consider nextjs apps to be instant. Their term, not me putting words in their mouths. VSCode is blazingly fast in their eyes.
We’ve let the problem slide so long that we have a whole generation of upcoming devs that don’t even see a problem let alone care about it. Anyone who mentors devs should really hammer this home and maybe together we can all start shifting that apathy.
I’m not too optimistic on that one. Bloated software has been an issue for the last 20 or so years at least.
At the same time, upgrade cycles have become much slower. In the 90s you’d upgrade your PC every two years and each upgrade would enable whole new use cases that just weren’t possible before. Similar story with smartphones until the mid-2010s.
Nowadays people use their PCs for upwards of 10 years and their smartphones until they drop them and crack the screen.
Devices have so much performance nowadays that you can really just run some electron apps and not worry about it. It might lag a little at times, but nobody buys a new device just because the loyalty app of your local supermarket is laggy.
I don’t like Electron either, but tbh, most apps running on Electron are so light-weight that it doesn’t matter much that they waste 10x the performance. If your device can handle a browser with 100 tabs, there’s no issue running an Electron app either.
Lastly, most Electron/Webview apps aren’t really a matter of choice. If your company uses Teams, you will use Teams, no matter how shit it runs on your device. If you need to use your public transport, you will use their app, no matter if it’s Electron or not. Same with your bank, your mobile phone carrier or any other service.
many chat applications written in Electron, none of which are interoperable.
This is one of my pet peeves, and a noteworthy example because chat applications tend to be left running all day long in order to notify of new messages, reducing a system’s available RAM at all times. Bloated ones end up pushing users into upgrading their hardware sooner than should be needed, which is expensive, wasteful, and harmful to the environment.
Open chat services that support third party clients have an advantage here, since someone can develop a lightweight one, or even a featherweight message notifier (so that no full-featured client has to run all day long).
Writing in Rust or “an efficient language” does nothing for RAM bloat. The problem is using 3rd-party libraries and frameworks. For example, a JavaScript interpreter uses around 400k. The JavaScript problem is developers importing a 1 GB library to compare a string.
You’d have the same bloat if you wrote in assembly.
Maybe you’re confusing memory (RAM) with storage? Because I converted some backend processing services from Node.js to Rust, and it’s almost laughable how little RAM the Rust counterparts used.
Just running a Node.js service took a couple of hundred MB of RAM, IIRC, while the Rust services could run on less than 10 MB.
But I’m guessing that if you went the route of compiling an actual binary from the Node.js service, you could achieve some RAM & storage savings either way. With Bun or Deno?
Because I converted some backend processing services from Node.js to Rust,
You converted only the functions you needed and only included those. You did not convert the entire Node.js codebase and then include the entire library. That’s the problem I’m describing. A few years ago I toyed with JavaScript to make an LCARS-style wall home automation panel. The overhead of what other people had published was absurd. I did what you did: I took out only the functions I needed, rewrote them, and reduced my program from gigabytes to megabytes even though it was still all JavaScript.
Yeah, you need to do tree-shaking with JavaScript to get rid of unused library code: https://developer.mozilla.org/en-US/docs/Glossary/Tree_shaking
I would expect larger corporate projects to do so. It is something that one needs to know about and configure, but if one senior webdev works on a project, they’ll set it up pretty quickly.
On the one hand, tree shaking is often not used, even in large corporate projects.
On the other hand, tree shaking is much less effective than what a good compiler does. Tree shaking only works on a per-module basis, while compilers can optimize down to the level of individual lines. A compiler drops unused functions entirely, and even variables that can be optimized away never make it into the binary.
But the biggest issue (and one that tree shaking can also not really help against) is that due to the weak standard library of JS a ton of very simple things are implemented in lots of different ways. It’s not uncommon for a decently sized project (including all of the dependencies) to contain a dozen or so implementations of a padding function or some other small helper functions.
And since all of them are used somewhere in the dependency tree, none of them can be optimized out.
That’s not really a problem of the runtime or the language itself, but since the language and its environment are quite tightly coupled, it is a big problem when developing in JS.
“Mature ecosystem” it’s called in JS land.
I wish Node.js or ECMAScript would have just done the Go thing and included a legit standard library.
Tree-shaking works on a per-function basis.
This isn’t Reddit. You don’t need to talk in absolutes.
Similar to WittyShizard, my experience is very different. Said Rust application uses 1200 dependencies and I think around 50 MB RAM. We had a Kotlin application beforehand, which used around 300 dependencies and 1 GB RAM, I believe. I would expect a JavaScript application of similar complexity to use a similar amount or more RAM.
And more efficient languages do have an effect on RAM usage, for example:
Not using garbage collection means objects generally get cleared from RAM quicker.
Iterating over substrings or list elements is likely to be implemented more efficiently, for example Rust has string slices and explicit .iter() + .collect() (a small sketch follows after this list).
People in the ecosystem will want to use the language for use-cases where efficiency is important and then help optimize libraries.
You’ve even got stupid shit: in garbage-collected languages, for example, it has traditionally been considered best practice that if you’re doing async, you use immutable data types and always create a copy of them when you want to update them. That uses a ton of RAM for stupid reasons.
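Just to illustrate the slices/iterators point with a contrived sketch (made-up input, obviously): extracting the first word of every line allocates exactly one Vec of views into the original string, nothing else.

```rust
fn first_words(input: &str) -> Vec<&str> {
    input
        .lines()                                            // lazy iterator, no allocation
        .filter_map(|line| line.split_whitespace().next())  // &str slices into `input`
        .collect()                                          // one Vec of (pointer, length) pairs
}

fn main() {
    let log = "GET /index.html 200\nPOST /api/login 401\n";
    // Every element borrows from `log`; no line or word is copied.
    assert_eq!(first_words(log), ["GET", "POST"]);
    // The typical GC-language version would allocate a fresh string per line
    // and per word, all of which the collector has to clean up later.
    println!("{:?}", first_words(log));
}
```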
This isn’t Reddit. You don’t need to talk in absolutes.
I haven’t posted anything on reddit in years. There is no need to start off a post with insults.
re: garbage collection
I wrote Java back in 1997 and the programs used a few megabytes. Garbage collection doesn’t in itself require significantly more RAM, because it only delays the freeing of RAM that would have been allocated in a non-garbage-collected language anyway. Syntactic sugar like iterators does not in general save gigabytes of RAM.
The OP isn’t talking about 500 KB apps now requiring 1 MB. The article talks about apps that used to be 85 KB now taking GBs of RAM.
I don’t know what part of that is supposed to be an insult.
And the article may have talked of such stark differences, but I didn’t. I’m just saying that the resource usage is noticeably lower.