This, non-sarcastically. The operating system is better at cleaning up memory than you are, and it’s completely pointless to free all your allocations if you’re about to exit the program. For certain workloads, not freeing anything can lead to cleaner, less buggy code.
It’s important to know the difference between a “memory leak” and unfreed memory. A leak refers to memory that can no longer be freed because you’ve lost track of its address. Leaks are only really a problem if the amount of leaked memory is unbounded or huge; every scenario is different.
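To make the distinction concrete, something like this (my own toy illustration):

```c
#include <stdlib.h>

int main(void) {
    /* A leak: the only pointer to the first block is overwritten,
       so that memory can never be freed by the program. */
    char *p = malloc(100);
    p = malloc(200);    /* the 100-byte block is now unreachable */

    /* Unfreed, but not leaked: we still hold p and simply choose not
       to call free(p); the OS reclaims it when the process exits. */
    (void)p;
    return 0;
}
```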
Of course, that’s not an excuse to be sloppy with memory management. You should only ever fail to free memory intentionally.
Absolutely. I once wrote a server for a factory machine that spawned a child process to work each job item. We intentionally did not free any memory in the child process, because it serves exactly one request and then exits anyway. It’s much more efficient to have the OS just clean up everything, and it provides a strong guarantee that nothing can be left behind accidentally, which matters on a system where uptime is money. Any code to manage that memory would have been pointless line noise and extra developer effort.
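The shape of the pattern, as a rough sketch (next_job and handle_job here are made-up stand-ins, not the actual factory code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Hypothetical stand-ins for the real job queue and job handler. */
static int next_job(void) { static int n = 0; return n < 3 ? n++ : -1; }

static void handle_job(int job) {
    char *buf = malloc(4096);                   /* allocate freely... */
    snprintf(buf, 4096, "working job %d", job);
    puts(buf);                                  /* ...and never call free() */
}

int main(void) {
    for (int job = next_job(); job >= 0; job = next_job()) {
        pid_t pid = fork();
        if (pid == 0) {
            handle_job(job);    /* child serves exactly one job */
            _exit(0);           /* OS reclaims every allocation here */
        }
        waitpid(pid, NULL, 0);  /* parent never allocates, never frees */
    }
    return 0;
}
```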
In fact I think in the linker we specifically replaced free with a function that does nothing.
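On glibc/Linux you can get that effect by just providing your own definition, which the linker resolves ahead of libc’s (a sketch of the idea, not their actual code):

```c
#include <stddef.h>

/* No-op free: linking this definition into the binary shadows libc's
   free(), so "freeing" costs nothing and the OS reclaims everything
   when the process exits. */
void free(void *ptr) {
    (void)ptr;  /* deliberately do nothing */
}
```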
Upvoted. This is something I learned rather recently: sometimes it’s more performant to leak slowly than to free properly, and then spend x amount of time restarting the process every n units of time.
A middle ground is a memory pool or an object pool where you reuse the memory rather than free it. Instead, you free it all in one operation when that phase of your application is complete. I’ve seen this done for particle systems to reduce overhead.
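A minimal bump-allocator version of that idea (illustrative sketch; alignment handling omitted for brevity):

```c
#include <stdlib.h>

/* A phase-scoped pool: each allocation is a pointer bump, and the
   whole phase is "freed" by a single reset. */
typedef struct {
    char  *base;
    size_t used;
    size_t cap;
} Arena;

int arena_init(Arena *a, size_t cap) {
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t n) {
    if (a->used + n > a->cap) return NULL;  /* pool exhausted */
    void *p = a->base + a->used;
    a->used += n;                           /* no per-object free */
    return p;
}

void arena_reset(Arena *a)   { a->used = 0; }   /* free the whole phase in O(1) */
void arena_destroy(Arena *a) { free(a->base); }
```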
Yeah, you can use the Epsilon garbage collector in Java for a no-op garbage collection strategy.
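For reference, Epsilon shipped as an experimental collector in JDK 11 (JEP 318): it allocates but never reclaims, and the JVM exits with an OutOfMemoryError when the heap fills. Enabling it looks like this (MyApp is a placeholder):

```
java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx4g MyApp
```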
For short-lived programs that do not risk hitting any memory constraints, it makes a lot of sense - zero time wasted on cleanup during execution, then you just do it all at the end when killing the program, which you can do in constant time since you don’t need to reason about what should remain or not.
Not freeing your memory at all is a memory management strategy. I think some LaTeX compilers use it as well as surprisingly many Java applications.
.net
Anything I run in C# or similar seems to reserve 512 GB of virtual address space and then populate only what it actually uses.
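That’s just reserving address space, not memory. On Linux the same trick looks roughly like this on a 64-bit system (sizes are illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* Reserve a huge range of address space without committing memory:
       PROT_NONE + MAP_NORESERVE costs essentially nothing. */
    size_t reserve = (size_t)1 << 34;   /* 16 GB, illustrative */
    char *base = mmap(NULL, reserve, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Allow access to just the pages actually used; physical memory
       is only populated when they are first touched. */
    if (mprotect(base, 4096, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect");
        return 1;
    }
    strcpy(base, "only this page is backed by real memory");
    puts(base);

    munmap(base, reserve);
    return 0;
}
```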
That’s the funny thing. I had a very basic program (so far) and did not care at all about memory management. When I did some testing, I realized that for some reason, when I printed string 1, I also got characters from string 2.
That sounds like it could be memory corruption. It shouldn’t happen, because every string should end with its own null terminator.
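For what it’s worth, the classic way to reproduce that symptom is a copy that drops the terminator. Whether the two strings end up adjacent in memory is up to the compiler, so this is an illustration, not a guarantee:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char s1[5];                 /* no room for the '\0' */
    char s2[] = "WORLD";

    memcpy(s1, "HELLO", 5);     /* copies 5 bytes, appends no terminator */

    /* Undefined behavior: printf scans for a '\0' that s1 doesn't have.
       If s2 happens to sit right after s1, this prints "HELLOWORLD". */
    printf("%s\n", s1);
    printf("%s\n", s2);
    return 0;
}
```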