Highly doubt it’s worth it in the long run due to electricity costs alone
Depends.
Toss the GPU/wifi, disable audio, throttle the processor a ton, and set the OS to power saving, and old PCs can be shockingly efficient.
You can slow the RAM down too. You don’t need XMP enabled if you’re just using the PC as a NAS. It can be quite power hungry.
Eh, older RAM doesn’t use much. If it already runs close to stock voltage, maybe just set it to stock voltage and back the speed off a notch rather than slowing it right down; the performance you keep nets you a nice task-energy gain.
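Rough back-of-envelope on that task-energy point, with all numbers made up just to show the shape of the trade-off: a slightly hungrier but faster memory setting can finish the same work sooner and come out ahead on total energy.

```python
def task_energy(active_w, idle_w, active_s, idle_s):
    """Energy in watt-hours: the busy period plus the idle time left over."""
    return (active_w * active_s + idle_w * idle_s) / 3600

window = 600  # seconds in which the same job must finish either way

# Assumed figures: the faster RAM setting draws ~3 W more while busy,
# but the job completes sooner, leaving more of the window at idle.
slow = task_energy(active_w=35, idle_w=20, active_s=300, idle_s=window - 300)
fast = task_energy(active_w=38, idle_w=20, active_s=220, idle_s=window - 220)

print(f"slow RAM: {slow:.2f} Wh, fast RAM: {fast:.2f} Wh")
# With these made-up numbers the faster setting wins on total task energy.
```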
There was a post a while back of someone trying to eke every single watt out of their computer. Disabling XMP and running the RAM at the slowest speed possible saved like 3 watts, I think. An impressive saving, but at the cost of HORRIBLE CPU performance. But you do actually need at least a little bit of grunt for a NAS.
At work we have some of those Atom-based NASes, and the combination of a lack of CPU grunt and horrendous single-channel RAM speeds makes them absolutely crawl. One HDD on its own performs the same as this RAID 10 array.
Yeah.
In general, ‘big’ CPUs have an advantage because they can run at much, much lower clock speeds than Atoms, yet still be way faster. There are a few exceptions, like Ryzen 3000+ (excluding APUs), which idle notoriously hot thanks to the multi-die setup.
Peripherals and IO will do that. The cores pull 5-6W while the IO die pulls 6-10W.
https://www.techpowerup.com/review/amd-ryzen-7-5700x/18.html
Same with auto overclocking mobos.
My ASRock sets VSoC to a silly high voltage with EXPO. Set that back down (and fiddle with some other settings/disable the IGP if you can), and it does help a ton.
…But I think AMD’s MCM chips just do idle hotter. My older 4800HS uses dramatically less, even with the IGP on.
Stuff designed for much higher peak usage tends to have a lot more waste.
For example, a 400W power source (which is probably what’s in the original PC of your example) will waste more power than a lower-wattage one (unless it’s a very expensive one), so in that example of yours it should be replaced by something much smaller.
Even beyond that, everything in there - the motherboard, for example - will have a lot more power leakage than something designed for a low-power system (say, an ARM SBC).
Unless it’s a notebook, that old PC will always consume more power than, say, an N100 Mini-PC, much less an ARM based one.
In my experience power supplies are most efficient near 50% utilization; be quiet! PSUs have charts about it.
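A toy loss model shows why the curve tends to peak around mid-load: fixed overhead dominates at the low end and roughly load-squared conduction losses dominate at the high end. A minimal sketch with illustrative constants, not figures from any real unit:

```python
# Toy PSU model: fixed overhead plus a loss term that grows with the square
# of the load (roughly how I^2*R conduction losses behave). The constants
# are illustrative, not measured from any real supply.
FIXED_LOSS_W = 6.0      # fans, controller, standby circuitry
QUAD_LOSS_COEFF = 4e-4  # scales the load-squared loss term

def efficiency(load_w):
    loss_w = FIXED_LOSS_W + QUAD_LOSS_COEFF * load_w ** 2
    return load_w / (load_w + loss_w)

for load in (15, 50, 100, 200, 300, 400):
    print(f"{load:>3} W DC load -> {efficiency(load) * 100:.1f}% efficient")
# Efficiency climbs steeply out of the low-load region, peaks where the fixed
# and load-squared losses are equal, then slowly droops toward full load.
```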
The way one designs hardware is to optimize for the most common usage scenario, with enough capacity to account for the peak-use scenario (and with some safety margin on top).
(In the case of silent power sources, the design would also aim for lower power leakage in the common usage scenario so as to reduce the need for fans, and the physical circuit and case design would account for things like airflow and space for a large, slower fan, since those are quieter.)
However, specifically for power sources, handling more power means using, for example, larger capacitors and switching MOSFETs that can carry more current, and those have more leakage, hence more baseline losses. Mind you, with more expensive components one can get higher-power parts with less leakage, but that’s not going to happen outside specialist power supplies specifically designed for high peak use AND low baseline power consumption, and I’m not even sure there’s a genuine use case that justifies paying the extra cost for high-power, low-leakage components.
In summary, whilst theoretically one can design a high-power low-leakage power source, it’s going to cost a lot more because you need better components, and that’s not going to be a generic desktop PC power source.
That said, since silent PC power sources are designed to produce less heat, which means less leakage (power leakage is literally power turning into heat), even with the design targeted at that power source’s most common usage scenario (which is not going to be 15W), that still probably means better components and hence lower baseline leakage, so they should waste less power if that desktop is repurposed as a NAS. Still won’t beat a dedicated ARM SBC (not even close), but it might end up cheap enough to be worth it if you already have that PC with a silent power source.
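To put hypothetical numbers on that baseline-leakage argument: the same ~15W idle NAS load looks quite different at the wall behind a big standard unit versus a smaller or better-built one (both the fixed-loss and conversion-efficiency figures below are assumptions).

```python
# Hypothetical comparison: the same 15 W DC idle load behind two power
# supplies with different fixed overheads. All figures are assumptions.
IDLE_DC_W = 15.0

def wall_watts(dc_load_w, fixed_loss_w, conversion_eff):
    """Fixed overhead plus the load divided by the conversion efficiency."""
    return fixed_loss_w + dc_load_w / conversion_eff

big_atx = wall_watts(IDLE_DC_W, fixed_loss_w=8.0, conversion_eff=0.85)
small_unit = wall_watts(IDLE_DC_W, fixed_loss_w=2.0, conversion_eff=0.88)

def yearly_kwh(watts):
    return watts * 24 * 365 / 1000

print(f"big ATX unit: {big_atx:.1f} W at the wall, {yearly_kwh(big_atx):.0f} kWh/yr")
print(f"small unit:   {small_unit:.1f} W at the wall, {yearly_kwh(small_unit):.0f} kWh/yr")
# A handful of watts of extra baseline loss, running 24/7, adds up to tens of kWh a year.
```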
All true, yep.
Still, the clocking advantage is there. Stuff like the N100 also optimizes for lower costs, which means higher clocks on smaller silicon. That’s even more dramatic for repurposed laptop hardware, which is much more heavily optimized for its idle state.
And heat your room in the winter!
Add spring + autumn if you live up north.
I’m still running a 480 that doubles as a space heater (I’m not even joking; I increase the load based on ambient temps during winter)
I am assuming that’s a GTX 480 and not an RX 480; if so - kudos for not having that thing melt the solder off the heatsink by now! 😅
The GTX 480 is efficient by modern standards. If Nvidia could make a cooler that could handle 600 watts in 2010 you can bet your sweet ass that GPU would have used a lot more power.
Well that and if 1000 watt power supplies were common back then.
A 486, eh?
If they’re gonna buy a nas anyway, how many years to break even?
I have an old Intel 1440 desktop that runs 24/7 hooked up to a UPS, along with a Beelink mini PC, my router, and a PoE switch, and the UPS reports a combined 100W.
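As for the break-even question above, a quick payback sketch with every figure assumed (the old desktop’s share of that combined 100W, the NAS price, and the electricity rate), just to frame the math:

```python
# Back-of-envelope payback, every number assumed: an old desktop idling at
# ~60 W of that combined figure vs. a dedicated low-power NAS at ~15 W,
# bought for 400 (in whatever currency), at 0.30 per kWh.
OLD_PC_W, NAS_W = 60.0, 15.0
NAS_COST = 400.0
PRICE_PER_KWH = 0.30  # adjust to the local rate

hours_per_year = 24 * 365
saved_kwh_per_year = (OLD_PC_W - NAS_W) * hours_per_year / 1000
saved_per_year = saved_kwh_per_year * PRICE_PER_KWH

print(f"saves {saved_kwh_per_year:.0f} kWh/yr, i.e. {saved_per_year:.0f} per year")
print(f"break-even after {NAS_COST / saved_per_year:.1f} years")
# With these assumptions: roughly 394 kWh and ~118 per year, so about 3.4 years.
```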