Curious to hear about the experiences of those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers (Docker, Podman), virtual machines, etc. What keeps you on bare metal in 2025?

  • ZiemekZ@lemmy.world · 6 days ago

    I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden, etc.? Wouldn't it be simpler if I could just run sudo apt install immich vaultwarden, the same way I can run sudo apt install qbittorrent-nox today? I don't think anything prohibits them from running on the same bare metal; in fact, I think they'd both run as well as they do in Docker (if not better, given the lack of overhead)!
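
    For comparison, the Docker Compose route being complained about looks roughly like this for Vaultwarden. A minimal sketch: the image name is the project's official one, while the port and volume choices are illustrative.

    ```sh
    mkdir -p ~/vaultwarden && cd ~/vaultwarden
    cat > compose.yaml <<'EOF'
    services:
      vaultwarden:
        image: vaultwarden/server:latest
        restart: unless-stopped
        ports:
          - "127.0.0.1:8080:80"   # reachable from localhost only
        volumes:
          - ./vw-data:/data       # data persists next to the compose file
    EOF
    docker compose up -d
    ```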

    • boonhet@sopuli.xyz · 6 days ago

      Both of your examples actually include their own bloat to accomplish the same thing Docker does: they bundle the libraries they depend on as part of the build.

      • communism@lemmy.ml · 6 days ago

        Idk about Immich, but Vaultwarden is just a Cargo project, no? Cargo statically links crates by default, though I think it can be configured to do dynamic linking too. The Rust ecosystem seems to favour static linking in general, just by convention.
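
        A quick way to see the default in action, as a sketch: the binary name assumes a Vaultwarden checkout, and -C prefer-dynamic only switches the Rust standard library to dynamic linking.

        ```sh
        # Default build: crates are linked in statically; ldd usually
        # shows only libc and friends.
        cargo build --release
        ldd target/release/vaultwarden

        # Opt-in dynamic linking of the Rust stdlib (unusual in practice):
        RUSTFLAGS="-C prefer-dynamic" cargo build --release
        ldd target/release/vaultwarden   # now also expects libstd-<hash>.so
        ```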

        • boonhet@sopuli.xyz · 6 days ago

          Yes, that was my point: you (generally) link statically in Rust because that avoids dependency conflicts between the different applications you need to run. The cost is a slightly bigger, bloatier binary, but it's generally a very good tradeoff because a slightly bigger binary isn't an inconvenience these days.

          Docker achieves the same for everything: dynamically linked projects that default to shared libraries (and their dependency nightmares), other binaries that get called, and so on. It doesn't virtualize an entire OS unless you're using it on macOS or Windows, so the performance overhead is not as big as people seem to think (the disk-space overhead, though, can get somewhat bigger). It's also great for dev environments, because different devs can use whatever the fuck they prefer as their main OS and Docker will make everyone's environment the same.
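
          The dev-environment point is essentially about pinning the toolchain in one shared recipe. A minimal sketch; the image tag and project layout are illustrative:

          ```sh
          cat > Dockerfile <<'EOF'
          # Same pinned toolchain for every dev and CI runner,
          # regardless of host OS.
          FROM rust:1.80-bookworm
          WORKDIR /app
          COPY . .
          RUN cargo build --release
          EOF
          docker build -t myapp .
          ```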

          I generally wouldn't put a Rust/Cargo project in Docker by default, since it's pretty rare to run into external dependency issues with those, but I might still do it for the tooling (docker compose, mainly).

  • billwashere@lemmy.world · 5 days ago

    Ok, I'm arguing for containers/VMs, and granted, I do this for a living… I'm a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta-beefy box at home that can run lots of different things is the way to go. Plus I like to tinker with things, so when I screw something up I can get back to a known state so much easier.

    Just having all these things sandboxed makes it SO much easier.

  • erock@lemmy.ml · 5 days ago

    Here’s my homelab journey: https://bower.sh/homelab

    Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don't support being split between VMs. At the end of the day, it's a bunch of tinkering, which is valuable if that's your goal. I learned what I wanted; now I'm back to Arch, running everything with systemd and Quadlet.
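
    For anyone curious about the Quadlet setup: you drop a small .container unit where systemd can find it, and systemd then manages the container like any other service. A minimal sketch; the image, port, and paths are illustrative:

    ```sh
    mkdir -p ~/.config/containers/systemd
    cat > ~/.config/containers/systemd/jellyfin.container <<'EOF'
    [Container]
    Image=docker.io/jellyfin/jellyfin:latest
    PublishPort=8096:8096
    Volume=%h/jellyfin/config:/config

    [Install]
    WantedBy=default.target
    EOF
    # The quadlet generator turns jellyfin.container into jellyfin.service:
    systemctl --user daemon-reload
    systemctl --user start jellyfin.service
    ```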

  • nuggie_ss@lemmings.world · 6 days ago

    Warms me heart to see people in this thread thinking for themselves and not doing something just because other people are.

  • SmokeyDope@lemmy.world · 7 days ago

    I'm a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with, which is a fresh install of Linux, installing everything from the apt package manager.

    As I'm getting more serious, I'm starting to take another look at Docker. Unfortunately, my OS package manager only has old, outdated versions of Docker, so I may need to reinstall with something like an Ubuntu/Debian LTS server, something with more current software in its repos. I don't care much for building from scratch and navigating dependency roulette.
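
    The usual fix, short of reinstalling the OS, is to add Docker's own apt repository. This sketch follows the steps on docs.docker.com for Debian; on a derivative distro, point at its upstream base instead (for Mint, the Ubuntu repo and its upstream codename):

    ```sh
    sudo apt-get update
    sudo apt-get install -y ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/debian/gpg \
      -o /etc/apt/keyrings/docker.asc

    # Register the repo for your release (the codename must be one Docker ships):
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
      https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
      | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
    ```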

          • TeddE@lemmy.world · 7 days ago

            They can, but if their current setup meets their needs, why? There ain't nothing wrong with having a few simple spare laptops, each one an isolated environment for a few simple home-server tasks.

            Don't get me wrong: I too advocate for Docker, particularly on new builds, or as a relatively turnkey solution for novice friends getting started. But the best setup is the one that works, and it sounds like they've got theirs where they want it.

            • BrianTheFirst@lemmy.world · 5 days ago

              …because that isn’t what they said. They said that they are getting more serious and now looking at Docker, but the outdated version in the Mint repo is preventing them from exploring that any further. So I offered a method that I know works without any of the “dependency roulette” that they were concerned about, while also giving a disclaimer that it isn’t exactly noob-friendly. 🤷‍♂️

              • TeddE@lemmy.world · 5 days ago

                Fair point. I think my eyes glossed over the part where they said they were taking a second look at Docker (but caught the rest about rebuilding the OS in general). My sincere apologies 😓😅

  • Magiilaro@feddit.org · 6 days ago

    My servers and NAS were set up long before Docker was a thing, and since I run them on a rolling-release distribution, there has never been a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine for the next 10+ years too.

    Well, I am planning to replace my aging HPE ProLiant MicroServer Gen8, which I use as a home server/NAS, when I find the time to research a good successor. Maybe I will then set everything up clean and migrate the services to Docker/Podman/whatever is fancy then. But most likely I will just transfer all the disks and keep the old system running on newer hardware. Life is short…

  • Evotech@lemmy.world · 7 days ago

    It’s just another system to maintain, another link in the chain that can fail.

    I run all my services on my personal gaming PC.

  • yessikg@fedia.io · 7 days ago

    It's so simple that it takes so much less time. One day I may move to Podman, but I need to have the time to learn it. I host Jellyfin.

  • OnfireNFS@lemmy.world · 6 days ago

    This reminds me of a question I saw a couple of years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?

    It kinda stuck with me, and since then I've reimaged some of my bare-metal servers with exactly that. It makes backup and restore/snapshots so much easier. It's also really convenient to have a web interface for managing the machine.

    Probably doesn't work for everyone, but it works for me.
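
    The snapshot convenience also exists on the Proxmox CLI, for the curious. A sketch; the VM ID and snapshot name are illustrative:

    ```sh
    qm snapshot 100 pre-tinkering                 # snapshot before experimenting
    qm rollback 100 pre-tinkering                 # return to the known-good state
    vzdump 100 --storage local --mode snapshot    # full backup of the VM
    ```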

  • SailorFuzz@lemmy.world · 7 days ago

    Mainly that I don't understand how to use containers… or VMs, that well… I have an old MyCloud NAS and a little puck of a PC that I wanted to run simple QoL services on… Home Assistant, Jellyfin, etc.

    I got Proxmox installed on it, and I can access it… I just don't know what the fuck I'm doing… There was a website that let you run shell scripts to install a lot of things… but none of those work anymore, because it says my version of Proxmox is wrong (when it's not?)…

    And at least VMs are easy(ish) to understand. Fake computer with an OS… easy. I've built PCs before, I get it… Containers just never want to work, or I don't understand wtf to do to make them work.

    I wanted to run Zulip or Rocket.Chat for internal messaging around the house (wife and I both work from home, kid does home/virtual school)… I wanted to use a container, because a service that simple doesn't feel like it needs a whole VM… but it won't work…

    • ChapulinColorado@lemmy.world · 7 days ago

      I would give Docker Compose a try instead. I found Proxmox to be too much when a simple YAML file (that can be checked into a repo) can do the job.

      Pay attention when people mention things that can be improved (secrets/passwords, rootless/Podman, backups, etc.), and come back to those later.

      Just don't expose things to the internet until you understand the risks, don't check secrets into a public git repo, and go from there. It's a lot more manageable and feels like a hobby, versus feeling like I'm still at work trying to get high availability, concurrency, and all this other stuff that doesn't matter for a home setup.
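
      One habit that covers a lot of the "don't expose it" advice: publish ports on 127.0.0.1 (or a VPN/LAN address) instead of on all interfaces. A sketch, with nginx as a stand-in service:

      ```sh
      # Reachable from the box itself (or via a reverse proxy/VPN on it),
      # but not from the wider network:
      docker run -d --name web -p 127.0.0.1:8080:80 nginx
      curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080   # 200 locally
      ```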

      • Lka1988@lemmy.dbzer0.com · 7 days ago

        I would give Docker Compose a try instead. I found Proxmox to be too much when a simple YAML file (that can be checked into a repo) can do the job.

        Proxmox and Docker serve different purposes. They aren’t mutually exclusive. I have 4 separate VMs in my Proxmox cluster dedicated specifically to Docker; all running Dockge, too, so the stacks can all be managed from one interface.
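
        For anyone who wants to try that layout: Dockge itself comes up with a tiny compose file of its own. This sketch follows the quick start in the project's README; the paths and port are its defaults:

        ```sh
        mkdir -p /opt/stacks /opt/dockge
        cd /opt/dockge
        curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml \
          --output compose.yaml
        docker compose up -d
        # The web UI then listens on port 5001 by default.
        ```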

        • ChapulinColorado@lemmy.world · 7 days ago

          I get that, but the services listed in the other comment run just fine in Docker, with less hassle, by throwing in some bind mounts.

          The four VMs with dedicated Dockge instances are exactly the kind of thing I had in mind for people who want to avoid something that sounds more like work than a hobby when starting out. Building the knowledge takes time, and each product introduced reduces the likelihood of it being completed anytime soon.

          • Lka1988@lemmy.dbzer0.com · 7 days ago

            Fair point. I’m 12 years into my own self-hosting journey, I guess it’s easy to forget that haha.

            When I started dicking around with Docker, I initially used Portainer for a while, but that just had way too much going on and the licensing was confusing. Dockge is way easier to deal with, and stupid simple to set up.

  • kossa@feddit.org · 6 days ago

    Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.

    My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.

  • Billegh@lemmy.world · 7 days ago

    It depends on the service and the desired depth of the stack.

    I generally run services directly on something like a Raspberry Pi, because VMs and containers add complexity that isn't really suitable for the task.

    At work, I run services in docker in VMs because the benefits far outweigh the complexity.

  • iegod@lemmy.zip · 6 days ago

    You sure you mean bare metal here? Bare metal means no OS.

  • atzanteol@sh.itjust.works · 8 days ago

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list, FFS. They’re just running in different cgroups that limit access to resources.
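
    This is easy to verify from the host. A sketch, with nginx as a stand-in:

    ```sh
    docker run -d --name demo nginx
    pid=$(docker inspect -f '{{.State.Pid}}' demo)   # the container's init PID on the host
    ps -p "$pid" -o pid,user,comm                    # an ordinary nginx process
    cat /proc/"$pid"/cgroup                          # placed in its own cgroup
    ```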

    Yes, I’ll die on this hill.

    • sylver_dragon@lemmy.world · 8 days ago

      But, but, Docker, Kubernetes, hyperconvergence, and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administering containers is different from running everything in the same OS. That’s different in a good way, though; I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange, and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

      • AtariDump@lemmy.world · 7 days ago

        …oh shit, the RAM is on fire.

        The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.

        Burn mothercucker, burn.

        (Thanks phone for the spelling mistakes that I’m leaving).

      • sugar_in_your_tea@sh.itjust.works · 8 days ago

        kubernetes

        Kubernetes isn’t just resource isolation; it encourages splitting services across hardware in a cluster. So you’ll get more latency than with VMs, but you get to scale the hardware much more easily.

        Those terms do mean something, but they’re a lot simpler than execs claim they are.

        • mesa@piefed.social · 7 days ago

          I love using it at work. It’s a great tool for getting everything up and running, kinda like Ansible. Paired with containerization, it can make applications more “standard” and easy to spin back up.

          That being said, for a home server it feels like overkill. I don’t need my resources spread out so far, and I don’t want to keep updating my Kubernetes and container setup with each new iteration. It’s just not fun (to me).

      • atzanteol@sh.itjust.works · 8 days ago

        Oh, for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails, they provide a ton of benefit.