I have an early-2000s PC (pre-SATA) with 512MB RAM (I’d love to tell you about the CPU, but it’s under a cooler that isn’t going anywhere) that’s been sitting in closets for about 15 years. Assuming I’m willing to commit to it, can something like that reasonably host the following simultaneously on a 40GB boot drive:
Nextcloud, Actual, Photoprism, KitchenOwl, SearXNG, Kavita, Paperless-ngx
Or should I just get new hardware? Either way, I’d like to do something with this trusty ol’ business server.
Edit: in your opinion, is Lenovo or Dell the most cost-effective, reliable self-host server?
That’s an antique. The list of stuff you want to run probably needs several gigabytes of RAM; I think Nextcloud alone needs 512MB. I’d recommend newer hardware. You can find stuff on eBay for under $100 that would be a LOT more powerful than what you have.
This is what I’d do OP.
I’m a huge fan of the Lenovo ThinkCentre M92p Tinys. Basically the same thing as the Dells for ~$150 CAD. Three of them (plus a couple of Pis) run my homelab with lots of room to grow.
deleted by creator
Yeah, that’s kinda what I was afraid of. Thanks for looking out. That link was a 404, by the way.
Oh, looks like Lemmy is breaking it for some reason. I just searched eBay for “Dell SFF”.
Holy shit, that hardware is disgustingly cheap. This is my number one option so far, thank you!
No problem. Another really good option is to get something brand new like a ZimaBoard (don’t bother with the 2GB RAM version). It uses very little power and runs perfectly with CasaOS, a Linux distro designed to make self-hosting dead simple. It will cost you more up front but will likely save you money in the long run (after a couple of years) because of the lower power draw.
I like the idea, and I appreciate pushing efficiency as far as one can, but looking at that makes me feel like I’d get frustrated just getting the OS to boot properly.
Makes sense. It’s always a good idea to start with a cheap solution just to get comfortable. Then, if you decide to push things further and upgrade, you’ll be more informed about what hardware you might need.
It won’t be able to do much, and even if you can run some things, keep in mind that the energy efficiency would be poor enough that you’d still be better off with a cheap Pi from a cost perspective.
Really good point. Would you recommend a Pi for self-hosting?
If you want something small and cheap, it might be worth getting a used thin client PC.
I got a cheap £20 Igel thin client from eBay since Raspberry Pis were still far too expensive, plus I already had a spare 4GB DDR3 SODIMM to drop into it and a 120GB WD Green SSD that I’d stripped from its case and fitted internally into the thin client.
After upgrading, it ended up with a 1.2GHz AMD GX-412 CPU, 4GB of DDR3, a 120GB SATA SSD, and an external USB 3 1TB hard drive I also had lying around.
As a component of my homelab, it’s running Debian 12 and Docker with a few containers (PiGallery 2, Libreddit, Portainer, SearXNG); it’s my backup Emby server and my main Pihole and PiVPN box.
Completely silent, sips power, and still has capacity to spare for more containers and other projects that catch my interest.
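If you want to try something similar, the lighter containers really are one-liners. Here’s a rough sketch for SearXNG with Docker (the host port and config path are just examples, adjust to taste):

```bash
# Minimal sketch: SearXNG in a container on a low-power box.
# Host port and config directory are examples, not requirements.
docker run -d \
  --name searxng \
  -p 8080:8080 \
  -v /srv/searxng:/etc/searxng \
  --restart unless-stopped \
  searxng/searxng:latest
```

The --restart flag is what lets it survive reboots unattended, which matters for a box that just sits in a cupboard.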
That’s a pretty cool solution, honestly. I’m considering all options here! I’d hate to invest and then find out there are more cost-effective options, or that I somehow limited the server’s potential.
It’s working really great for me. I originally just bought it to run Pihole on a dedicated machine, with a secondary Pihole instance on my Unraid server in case either of them went down, but leaving it sitting there with just PiVPN and Pihole duties seemed wasteful.
I’m getting even more out of it by running some of the lighter containers on it, with plenty of spare room to do more.
I’ve logged/uploaded my upgrade process here so you can get some ideas of what I did: https://imgur.com/a/ExcLdtt
It’s bulkier than a Raspberry Pi, being around the size of a router, but the low cost and being able to use hardware I had sitting doing nothing made me go this route rather than just getting a Pi.
Why do you hate yourself? You’d be better off hosting on an old cell phone.
I know, I know… but it’s good, faithful hardware and I want it to go to good use.
If it’s really early 2000s, you might want to put it on eBay. There are retro gamers out there who could use it as a good Windows 9x-era gaming PC. You could give that HW a new life in someone’s retro setup.
It’s great HW for occasional gaming, but it’s very inefficient for 24/7 operation. You want something from 2015-ish or later for a machine that’s supposed to run constantly.
Old hardware is awesome to reuse most of the time, but it’s not nearly as efficient as today’s hardware.
It’s probably best to just properly recycle the old gear and spend $200 on a mini-PC from Amazon that has three times the power, all while using less electricity.
I usually tear old equipment completely down into its raw materials, as best I can. It’s less likely to be shipped off to another country for uncontrolled destruction, and I get more money back for the materials.
Don’t, it’s horribly inefficient.
It might be nice to use for learning, but it probably won’t be able to handle that much.
If you’re not using it for anything, you could wipe it and throw a server on it? I guess it depends on what you consider fun.
It’s currently running Red Hat; I just don’t have any ideas for what to do with it. Running a server for anything more than a NAS is new to me.
I would never say no to using older hardware. Yeah, it’ll be like punishing yourself, but you learn a shit ton.
I recently started self-hosting on a PC with the same specs you’ve described. Booting was an issue, and tons of stuff always broke, but I learnt a lot. Then there came a time when I genuinely thought I could do better, and I switched to an old laptop with decent specs.
Pis are very expensive and too dang low on supply.
So always make do with what you have. If it’s your first homelab, then yeah, go ahead. In a few months, switch.
Setting aside the lack of processing power and RAM, a box that old will likely eat power. It’s just not worth it for something that old.
A used thin client along the lines of the Lenovo Think* series will be affordable and do the job.
It’s a great machine to learn on. Build yourself a web server or something like that. You don’t know what it can do until you push it, and you’re not out anything by taking it to its limits. If it has something like a Core 2 Duo you could even run KVM and launch a virtual machine to learn about that process. Old hardware is meant to be run into the ground and you’ll learn a lot in the process, including getting a feel for how much hardware you really need to perform the tasks you want. (I literally just retired a rack server this year with a Core 2 Duo and 8GB of memory, which was used to run five VM servers providing internet services.)
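If you want to try the KVM route, here’s a rough sketch of checking for hardware virtualization and spinning up a small guest with virt-install; the VM name, sizes, and Debian release are placeholders, not a recipe:

```bash
# Check the CPU first: a non-zero count means KVM-capable hardware.
egrep -c '(vmx|svm)' /proc/cpuinfo

# Rough sketch: a tiny text-mode Debian guest. Name, memory, disk
# size, and release are examples; shrink or grow them to fit your box.
sudo virt-install \
  --name testvm \
  --memory 256 \
  --vcpus 1 \
  --disk size=8 \
  --location http://deb.debian.org/debian/dists/bullseye/main/installer-amd64/ \
  --extra-args 'console=ttyS0,115200n8' \
  --graphics none \
  --os-variant debian11
```

The serial-console flags keep the whole install in the terminal, which is handy on a headless box.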
Can you give me some use case examples for VMs like that? My VM knowledge stops at emulating OSes for software compatibility and running old Windows versions for gaming.
What I’ve always done is create a VM for each service I run – one each for DNS, Apache, Postfix, Dovecot, and even one to handle SSH and FTP logins. I’ll also set up a VM when I want to test a new service, so I don’t trash a physical machine. This makes it easy to make extra copies if I want to run redundant systems, or just move them to a different physical server.
I suppose this is something like what Docker does, except these are entirely self-contained systems that don’t even need to be running the same OS, and if someone happens to hack into one system, it doesn’t give them access to all the others. I also have a physical machine set up as a firewall to direct the appropriate ports to each VM and handle load balancing, but for your experiments you could do this task on the physical desktop and point everything to the VMs running inside it.
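For the port-directing part, a rough sketch of what that looks like with iptables on the host (192.168.122.10 is just an example guest address on libvirt’s default NAT subnet):

```bash
# Sketch: forward incoming web traffic on the host to a guest VM.
# The guest address is an example from libvirt's default network.
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.122.10:80
sudo iptables -A FORWARD -p tcp -d 192.168.122.10 --dport 80 -j ACCEPT
```

A dedicated firewall box just does the same thing one hop earlier.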
One nice thing about KVM is that you can overcommit your memory. So if you only have 512MB available and you set up three VMs with 256MB each, the actual free space gets shared among them, because a system doesn’t usually use ALL of its memory (although for Linux you might need to limit how much RAM each guest tries to use for cache). In reality, what you find is that a system might run a task or get a burst of traffic that uses more memory, so it pulls free physical memory from the other VMs as needed, then gives it back when the task is done. You won’t really want to run web-facing servers in such a tight space, though, unless you’re the only person actually using them, but hopefully it gives you some ideas of how you can play around with what you have available in that machine.
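If you experiment with that, the knobs live in virsh; a quick sketch (the domain name “webvm” and the sizes are examples):

```bash
# Sketch of memory overcommit: a high ceiling, a small current
# allocation, handled by the guest's balloon driver.
virsh setmaxmem webvm 512M --config   # ceiling, applies from next boot
virsh setmem webvm 256M --current     # what the guest holds right now
virsh dommemstat webvm                # check what it actually uses
```

Watching dommemstat for a while is the easiest way to find out how tightly you can really pack them.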
Holy shit, that’s genius. I saved your comment for reference. This is probably how I’ll end up learning to make these things work