I don’t think you should be using it anymore if it’s getting hot enough to cook a pizza…
Cooked perfectly!
Avoid hardware RAID (have a look at this). Use Linux MD or BTRFS or ZFS.
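For anyone curious what the software route looks like: on a modern Linux box, a mirror with mdadm is only a couple of commands. Rough sketch, run as root; the device names are placeholders, not OP's actual disks, so check yours with `lsblk` first.

```shell
# Device names below are placeholders; check yours with: lsblk
# Create a two-disk mirror (RAID 1) with Linux MD:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Record the array so it assembles on boot (Debian/Ubuntu paths):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# The mirror then behaves like any other block device:
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
```

The nice part versus hardware RAID: the array is readable on any Linux machine you move the disks to, no proprietary controller required.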
It’s a 2004 server, you can’t do anything else but HW RAID on this. Also, it’s using Ultra SCSI (and you should not use that in 2024 either ahah)
SCSI was the crème de la crème ages ago! Is it not just a matter of going into its BIOS, configuring the hardware RAID (go for a mirror only!?), enduring the noise it probably makes, and installing? :)
Indeed! I have a lot of SCSI disks, PCI cards and a few cables too! (also, SCSI is fun to pronounce… SKEUZY) but on this server, the RAID card doesn’t have any option to create a RAID array in its BIOS. From what I can tell it needs special software, and I can’t find good tutorials or documentation out there :(
You can find the 7.12.x support CD for that controller at https://www.ibm.com/support/pages/ibm-serveraid-software-matrix. I’m pretty sure that server model did not support USB booting so you’ll need to burn that to a disc. This will be the disc to boot off of to create your array(s).
I forget if the support CD had the application you would install in Windows to manage things after installation or not, or if that’s only on the application CD. Either way you’ll find several downloads for various OS drivers and the applications from that matrix.
Thanks for the link! I’ll definitely need to try this… I have a few CDs laying around, I’ll burn one!
My SCSI controller’s RAID management has to be entered during boot. It also has an external battery that needed replacing (which cost more new than just buying a new card … with the exact same battery), so if you’re not in verbose boot mode, figure that out and see if the controller is telling you which function key it needs.
Figuring out this old stuff is most of the fun in running it; I would sell it as scrap before actually hosting anything on it.
Yeah I already have the key combo to enter the RAID card BIOS, CTRL + i
And yeah I won’t be hosting anything on it obviously, I just love old hardware and trying to push it to its limits!
Why is that? Does the motherboard effectively just not have enough inputs for all the disks, so that’s why you need dedicated hardware that handles some kind of raid configuration, and in the end the motherboard just sees it all as one drive? I never really understood what SCSI was for. How do the drives connect, SATA/PATA/something else?
SCSI is its own thing, meant to fix some issues with IDE IIRC. The drive backplane is directly attached to the motherboard, well, more specifically to the RAID card on the motherboard. The RAID card then gives the OS/motherboard access to the RAID volume you configured, but not to the disks themselves.
I did not know that
Well, you could make each disk its own RAID 0 array. There would probably be performance overhead compared to just using the hardware RAID though.
Serving pizza and files. What a time to be alive.
mv pizza.01 /srv/mouth/
In Linux everything is a file!
It really ties the room together
For a non-pizza comment: I’ve been out of the hardware game for a while, but the last time I had to set one of these up for RAID, the paper manual (which can probably be found digitally) was helpful. I also vaguely recall RAID 5 either having issues or being unavailable.
It’s slowly coming back to me… There was a floppy disk that you needed to launch the RAID config? Also, the platform ran pretty well with Debian 4.0 if you’re debating what to run on it.
It’s pretty straightforward. The RAID controller has its own BIOS. Set up what you want. Done.
You should replace that thing with something more modern. I had a 5000P chipset system someone gave me with dual quad cores and an assload of RAM.
The shitty box idled over 400W. I went as far as getting low-power RAM and the newest CPU it would support that also supported frequency and power scaling, and it still used over 400W at idle.
This while I had a Xeon E5 box only a few years younger that uses more in the neighborhood of 50W at idle and utterly decimates the 5000 series box in CPU performance.
You’re probably better off fetching some old Ryzen 1800X system off eBay for higher performance and leagues lower power consumption.
As for the RAID, don’t use it. Hardware RAID has always been shit, and in modern Linux and Windows it is as good as completely deprecated.
You’re missing the point, it’s not about using old hardware to daily drive it here, it’s for the fun and thrill of discovering ancient hardware, software and technologies! I’ll definitely need to see how much power this one is drawing though, but with only 1 out of 2 CPUs I’d say around 200W for something this old.
I have an HP ProLiant DL380 G7, basically the last server with a front side bus, and all the comments about it were about performance per watt.
and they’re not wrong.
I just don’t think this is the community for old servers like this; self-hosting is very much a practical consideration, and the money spent on electricity running anything useful on these old things is better spent on a Raspberry Pi or standalone NAS or something.
In my opinion, selfhosting is also about discovering how (and what) you could selfhost with old hardware and OSes, just for fun and to understand a bit more about the history of hardware.
But yeah, for 24/7 services I have other, way more modern servers and also an OrangePi.
Oh, I get it. But a baseline HP ProLiant from that era is just an x86 system barely different from a desktop today, just worse/slower/more power-hungry in every respect.
For history and “how things changed”, go for something like a Sun Fire system from the mid-2000s (the 280R or V240 are relatively easy and cheap to get and are actually different) or a ProLiant from the mid-to-late 90s (I have a functioning Compaq ProLiant 7000, which is HUGE and a puzzle box inside).
x86 computers haven’t changed much at all in the past 20 years, and you need to go into the rarer models (like blade systems) to see an actual deviation from the basic PC-like form factor we’ve been using for the past 20 years, with unique approaches to storage and performance.
For self hosting, just use something more recent that falls within your price class (usually 5-6 years old becomes highly affordable). Even a Pi is going to trounce a system that old and actually has a different form factor.
I would love to actually get my hands on some Sun gear, they look really cool! Or even some Itanium-powered servers! This one is an IBM server that I got for free, and exploring the software to use it is a bit of a challenge; it is pretty different from how you configure servers nowadays. (Also, a floppy drive on a server, this is what I call awesome!)
For selfhosting real stuff, I do have modern gear and an OrangePi too!
I also looked up the Compaq ProLiant 7000, and this thing is huge indeed, but it does look awesome!
They have a secondary motherboard that hosts the Slot CPUs, four single-core Pentium III Xeons. I also have the Dell equivalent model, but it has a bum mainboard.
With those 90s systems, to get Windows NT to use more than one CPU, you had to get the appropriate Windows version that actually supported them.
Now you can simply upgrade from a 1 to a 32 core CPU and Windows and Linux will pick up the difference and run with it.
In the NT 3.5 and 4 days, you actually had to either do a full reinstall or swap out several parts of the kernel to get it to work.
Downgrading took the same effort, as a multiprocessor Windows kernel ran really badly on a single-CPU system.
As for the Sun Fires, the two models I mentioned tend to be readily available on eBay in the 100-200 range and are very different inside from an x86 system. You can go for the 400 series or higher to get even more difference, but getting a complete one of those can be a challenge.
And yes, the software used on some of these older systems was a challenge in itself, but it isn’t really special; it’s pretty much like having different vendors’ RGB controller software on your system, a nuisance that you should try to get past.
For instance, the IBM 5000 series raid cards were simply LSI cards with an IBM branded firmware.
The first thing most people do is put the actual LSI firmware on them so they run decently.
I have a (crappy) PowerEdge and know for a fact that that’s the wrong end to put the pizza on any rack server.
The only heat would be from the drive backplane; all the boiling-hot CPUs, RAM, and expansion cards are further back.
Who said it was to keep it warm? Maybe it’s to cool it off before eating it :)
Also, drives can get pretty hot
wats on the pzaa
Did you use it to cook the pizza?
nice pizza
pizza for scale :)
Does it cook pizza?
I like how you have a pizza on the top. Probably not a great place for it long term.
Just keeping lunch warm.
Specs?
Intel Xeon 3.2GHz (yes that’s the whole model number), 4 gigs of DDR2 RAM and 3x 73GB Ultra SCSI disks!
I think I would get rid of that optical drive and install a converter for another drive, like a 2.5" SATA. That way you could get an SSD for the OS and leave the bays for RAID.
Other than that, what you want to put on this beast, and whether you want to utilize the hardware RAID, will determine the recommendations.
For example, if you are thinking of a file server with ZFS, you need to disable the hardware RAID completely by getting it to expose the disks directly to the operating system. Most would investigate whether the RAID controller can be flashed into IT mode for this. If not, some controllers do support a simple JBOD mode, which would still be better than running ZFS on top of the RAID. ZFS likes to manage the disks directly. You can generally tell it’s correct if you can see all your disk serial numbers during setup.
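A quick way to check that from a live Linux environment (assuming a reasonably modern distro with lsblk): real passed-through disks show their own serials, while a hardware RAID volume usually reports the controller’s virtual disk or nothing at all.

```shell
# List block devices with model and serial; passed-through disks
# show their own serial numbers, a RAID volume usually does not:
lsblk -o NAME,SIZE,MODEL,SERIAL

# The stable by-id paths (which embed the serials) are also what
# you want to build ZFS pools from:
ls -l /dev/disk/by-id/
```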
Now if you do want to utilize the RAID controller and are interested in something like Proxmox or just a simple Debian system, I have had great performance with XFS on hardware RAID. You lose out on some advanced copy-on-write features, but if disk I/O is your focus, consider it worth playing with.
My personal recommendation is to get rid of the optical drive and replace it with a 2.5" converter for more installation options. I would also recommend maxing out the RAM and possibly upgrading the network card to a 10Gb NIC if possible. It wouldn’t hurt to investigate the power supply: the original may be a bit dated, and you may find a more modern supply that is more energy efficient.
My general OS recommendation would be Proxmox installed in ZFS mode with an ashift of 12.
(It’s important to get this number right for performance because it can’t be changed after creation: 12 for spinning disks and most SSDs, 13 for more modern SSDs.)
Only do zfs if you can bypass all the raid functions.
I would install the rpool as a basic ZFS mirror on a couple of SSDs. When the system boots, I would log into the web GUI and create another ZFS pool out of the spinners, ashift 12. Now if this is mostly a pool for media storage, I would make it a RAID-Z2. If it is going to have VMs on it, I would make it a RAID 10 style pool of striped mirrors: disk I/O is significantly better for VMs that way.
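For anyone who prefers the command line over the web GUI, a striped-mirror (“RAID 10” style) pool looks roughly like this. Device names and the pool name `tank` are placeholders; in practice use the /dev/disk/by-id paths.

```shell
# Two mirrored pairs striped together ("RAID 10" style); ashift is
# fixed at creation time and cannot be changed afterwards:
zpool create -o ashift=12 tank \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde

# Confirm the ashift that actually applied:
zpool get ashift tank
```

For the media-storage alternative, the same command with `raidz2` followed by the disks instead of the `mirror` groups gives you the RAID-Z2 layout.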
From here, for a bit of easy ZFS management, I would install Cockpit on top of the hypervisor with the ZFS plugin. That should make it really easy to create, manage, and share ZFS datasets.
If you read this far and have considered a setup like this, one last warning: use the Proxmox web UI for all the tasks you can. Do not utilize the Cockpit web UI for much more than ZFS management.
Have fun creating LXCs and VMs for all the services you could want.
Hey, it’s a 2005 server, it can’t do IT mode, it only has 73GB Ultra SCSI drives, a 10Gb NIC would be useless (it’s PCI only, not PCIe), and it has DDR2 RAM and single-core processors too!
I’ll probably install Debian, and I had fun trying Windows Server 2003. It has a floppy drive too, and I’ll definitely keep the DVD and floppy drives in there! (the CD drive is IDE btw) And you can only configure the RAID array via a CD provided by IBM (no, you cannot boot this CD from a USB key, as the software on the CD is looking for the DVD drive and not a USB key).
Most of what you said would be accurate for recent servers though, but not here, not at all ahah!