• Jo Miran@lemmy.ml · 1 month ago

      I have been an IT professional since 1995. Never have I ever had a personal PC that wasn’t either a refurbished laptop or some sort of Frankenstein abomination that I put together from whatever was on sale and upcycled parts.

      • partial_accumen@lemmy.world · 1 month ago

        > I have been an IT professional since 1995. Never have I ever had a personal PC that wasn’t either a refurbished laptop or some sort of Frankenstein abomination that I put together from whatever was on sale and upcycled parts.

        I’ve been in the game for about the same amount of time. I stopped doing that about 15 years ago, when I saw that the electricity I was paying for on older gear was equaling or exceeding the cost of buying newer, faster, lower-power hardware.
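        Back-of-the-envelope, that break-even works out roughly like this (the wattages, electricity price, and hardware cost below are made-up placeholders, not measurements - plug in your own numbers):

```python
# Break-even estimate: when does replacing an old, power-hungry box with
# newer, more efficient hardware pay for itself in electricity savings?
# All figures here are illustrative assumptions.

OLD_WATTS = 150        # assumed 24/7 draw of the old server
NEW_WATTS = 40         # assumed 24/7 draw of a newer, efficient machine
PRICE_PER_KWH = 0.30   # assumed electricity price ($/kWh)
NEW_HW_COST = 600.0    # assumed price of the replacement hardware ($)

HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    """Electricity cost per year for a machine that is always on."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

savings_per_year = annual_cost(OLD_WATTS) - annual_cost(NEW_WATTS)
payback_years = NEW_HW_COST / savings_per_year

print(f"yearly savings: ${savings_per_year:.2f}")
print(f"payback time:   {payback_years:.1f} years")
```

        With those assumed numbers the new box pays for itself in about two years; a cheaper replacement or a pricier kWh shortens that further.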

        • Windex007@lemmy.world · 1 month ago

          Power cost is a poor tax, in the same way that skipping the dentist and getting a root canal later is.

          I’m also in the process of making my lab more power-efficient. It just wasn’t a feasible option before; I didn’t have the means, so I paid interest via electricity.

          • partial_accumen@lemmy.world · 1 month ago

            Do we need to update Sam Vimes’ ‘Boots’ Theory of Socio-Economic Unfairness to Sam Vimes’ **‘Compute’** Theory of Socio-Economic Unfairness?

        • Aceticon@lemmy.world · 1 month ago

          Curiously, judging by my recent upgrade parts search, the capability-to-power-used curve on PCs (at least gaming ones) seems to have peaked about a decade ago.

          Signed, a fellow Old Sea Dog Of Tech who also went through the same change over a decade ago

      • Evil_Shrubbery@lemm.ee · 1 month ago

        Isn’t that a bit like buying an old truck instead of a year-old Miata?

        Afaik those CPUs use so much juice when idling … sure, you don’t get all them lanes or ECC, but a PC at the same price with a few-year-old CPU outclasses that CPU by a lot, and at a fraction of the running cost (also quietly).

        Just something to keep in mind as an alternative, especially when you don’t intend to fill the whole PCIe bus (i.e. several users with several intensive tasks that benefit from a wider bus to RAM & PCIe even with a slow CPU).
        Ok, and you miss out on some fancy admin stuff, but … it’s just for home use …

        • lud@lemm.ee · 1 month ago

          Yeah server hardware isn’t the most efficient if you want to save power. It’s probably better to get a NUC or something.

          With that said, my old Dell PowerEdge R730 only uses around 84 watts (running around 5 VMs that are doing pretty much nothing). The server runs Proxmox and has 128 GB of RAM, two Xeon E5-2667 v4 CPUs, 4 old used 1 TB HDDs I bought for cheap, and 4 old used 128 GB SATA SSDs I also bought for cheap (all storage is 2.5″ drives).

          All I had to do was change a few BIOS settings to prioritize efficiency over performance. 84 watts is obviously still not great but it’s not that bad.
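          For scale, here is what a constant 84 W works out to over a year (the electricity price is an assumed placeholder; use your local rate):

```python
# Annual energy and cost of a machine idling at a constant 84 W.
# The electricity price is an assumed placeholder, not a quoted rate.
watts = 84
price_per_kwh = 0.30  # $/kWh, assumption

kwh_per_year = watts / 1000 * 24 * 365   # roughly 736 kWh
cost_per_year = kwh_per_year * price_per_kwh

print(f"{kwh_per_year:.0f} kWh/year ≈ ${cost_per_year:.2f}/year")
```

          So the “not that bad” 84 W is still on the order of a couple hundred dollars a year at typical rates.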

          • Evil_Shrubbery@lemm.ee · 1 month ago

            Sounds nice, but yes, uses quite a bit of power.

            I should measure mine - I have a Ryzen 5900 (24t, 64 MB … some 20k Cinebench score) as the main, and a Core 12700 (16+4t, 12 MB).
            (And Intel gen 7 and gen 2 at my parents’. All of them Proxmoxed.)

            Never really managed to bottleneck anything on them, but I got them super cheap used.

            Buying anything server/enterprise that powerful would cost me a lot of money. And it would probably have two CPUs, which doubles a lot of the power-hungry bits.

            • lud@lemm.ee · 1 month ago

              The only reason that I have measured my server is that it has that feature built into the iDRAC. I have been thinking of buying an external power meter for years but have never bothered to do that.

              Luckily I got my server for free from work. It was part of an old SAN, so it came with 4 dual-port 16 Gbit Fibre Channel cards and 2 dual-port 10 Gigabit Ethernet cards. Before I took those out of the server it consumed around 150 watts at idle, which is crazy.
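              If the iDRAC is reasonably recent, you can also script that power reading instead of clicking through the web UI, since iDRAC exposes it over the standard Redfish API. A minimal sketch of pulling the value out of the Power resource (the sample payload below is trimmed and made up; the field names follow the Redfish Power schema):

```python
# Extract the instantaneous wall draw from a Redfish "Power" resource,
# the same number the iDRAC dashboard shows. The JSON below is a trimmed,
# made-up example of such a payload, not real output.
import json

def power_consumed_watts(power_resource: dict) -> int:
    """Read PowerControl[0].PowerConsumedWatts from a Redfish Power resource."""
    return power_resource["PowerControl"][0]["PowerConsumedWatts"]

sample = json.loads("""
{
  "PowerControl": [
    {"Name": "System Power Control", "PowerConsumedWatts": 84}
  ]
}
""")

print(power_consumed_watts(sample))
```

              In practice you would fetch that JSON with an authenticated GET against the iDRAC’s Redfish chassis Power endpoint.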

        • NaibofTabr@infosec.pub · 1 month ago

          I always recommend buying enterprise grade hardware for this type of thing, for two reasons:

          1. Consumer-grade hardware is just that - it’s not built for long-term, constant workloads (that is, server workloads). It’s not built for redundancy. The Dell PowerEdge has hot-swappable drive bays, a hardware RAID controller, dual CPU sockets, 8 RAM slots, dual built-in NICs, the iDRAC interface, and redundant hot-swappable PSUs. It’s designed to be on all the time, reliably, and can be remotely managed.

          2. For a lot of people who are interested in this, a homelab is a path into a technology career. Working with enterprise hardware is better experience.

          Consumer CPUs won’t handle server tasks the way server CPUs do. If you want to run a server, you want hardware that’s built for server workloads - stability, reliability, redundancy.

          So I guess yes, it is like buying an old truck? Because you want to do work, not go fast.

          • Evil_Shrubbery@lemm.ee · 1 month ago

            Is this mythology? :P
            Server stuff is unusual and mysterious, rare, and expensive - I get the allure.

            I like your second point (though I wouldn’t say a lot of us - most of us just want services at home, and Proxmox, or even Linux in general, isn’t the most common hypervisor to learn for getting a job at, say, a mid-sized company). But for the rest - a PC can take loads just as well as enterprise/server gear. This isn’t the 90s or early 2000s, when you got shitty capacitors on even the best consumer mobos; your average second-gen Core PC could have run non-stop from its birth to today.
            The exception is hard drives, which homelabbers buy enterprise anyway.
            BTW - who has their homelab on full load all the time? (Not sarcasm, actually asking for use cases.)

            The rest is just additional equipment you might or might not need. A second CPU socket is irrelevant when buying old servers, RAM slots need to be filled to even take advantage of the extra memory channels of server CPUs (and even then older tech might still be slower than dual-channel DDR5), and drive bays are cheap to buy … but if you want nice hot-swappable PSUs, then you do need a server/workstation case.

            Server and consumer CPUs mostly differ in how well they parallelize tasks, mainly by having more cores and more lanes. But if a modern CPU core outclasses older server CPU cores by something like 10:1, that logic just doesn’t add up anymore. Both do the same work.
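            The core-count-vs-core-speed trade is easy to sanity-check with toy numbers (the per-core scores below are invented round figures, not benchmarks):

```python
# Toy throughput comparison behind the "one modern core outclasses many
# old server cores" argument. Per-core scores are invented round numbers.
old_server = {"sockets": 2, "cores_per_socket": 12, "score_per_core": 1.0}
modern_pc  = {"sockets": 1, "cores_per_socket": 16, "score_per_core": 6.0}

def total_throughput(cpu: dict) -> float:
    """Naive aggregate throughput: sockets x cores x per-core score."""
    return cpu["sockets"] * cpu["cores_per_socket"] * cpu["score_per_core"]

print(total_throughput(old_server))  # 24.0
print(total_throughput(modern_pc))   # 96.0
```

            Even granting the old box twice the sockets, the per-core gap dominates - which is the 10:1 argument in miniature.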

            Imho old servers aren’t super cheap but are priced accordingly.

            I think this whole consumer-vs-enterprise hardware debate (except hard drives, ofc) can be summed up in a proxy question: do homelabbers need registered ECC RAM?

      • Midnight Wolf@lemmy.world · 1 month ago

        I have a ThinkServer with a similar Xeon, running Proxmox -> Debian, so I was like “huh, interesting” until I saw the internals.

        Fuuuuuuuuuuuuuuuuuck all that. Damn it Dell, quit your weird bullshit. It’s just a motherboard, cpu, cooler, and ram. Slap in intake and exhaust fans. Figure it the fuck out.

        E: and it better have a goddamn standard psu, too. Fuck yourself, Dell. I’ve seen your shit.

        • Benjaben@lemmy.world · 1 month ago

          The one saving grace is that their one-off custom damn shit always feels well designed, and they move a lotta units (which helps with repairs when everything is GD custom). Dunno if that’s changed in recent years.

          With that said, I usually avoid them for personal use for the same reason - why have a desktop if you don’t get the benefit of parts compatibility?!

        • NaibofTabr@infosec.pub · 1 month ago

          Hmm, I don’t have direct experience with ThinkServers, but what I see on eBay looks like standard ATX hardware… which is not really what you want in a server.

          The Dell motherboard has dual CPU sockets and 8 RAM slots. The PSUs aren’t the common ATX desktop format because there are two of them and they’re hot-swappable. This is basically a rack server repackaged into a desktop tower case, not an ATX desktop with a server CPU socket.