Currently, I run Unraid and have all of my services set up there as docker containers. While this is nice and easy to set up initially, it has some major downsides:

  • It’s fragile. Unraid is prone to docker bugs and crashes that take down my containers. It’s also not resilient, so when things break I have to log in and fiddle.
  • It’s mutable. I can’t use any infrastructure-as-code tools like Terraform, and configuration sort of just exists in the UI. I can’t really roll back or recover easily.
  • It’s single-node. Everything is tied to my one big server that runs the NAS, but I’d rather have the NAS as a separate, fairly low-power appliance and a separate machine to handle things like VMs and containers.

So I’m looking ahead and thinking about what the next iteration of my homelab will look like. While I like Unraid for the storage side, I’m a little tired of wrangling it into a container orchestrator and hypervisor, and I think this year I’ll split that job out to a dedicated machine. I’m comfortable with, and in fact prefer, IaC over fancy UIs, so I’d love to be able to use Terraform or Pulumi or something like that. I’d also prefer something multi-node, as I want to be able to tie multiple machines together. And I want something fault-tolerant, as I host services for friends and family that currently require a lot of manual intervention to fix when they go down.

So the question is: how do you all do this? Kubernetes, docker-compose, HashiCorp Nomad? Do you run k3s, Harvester, or what? I’d love to get an idea of what people are doing and why, so I can get some ideas as to what I might do.

  • monkeyman512@lemmy.world

    I would stay away from Kubernetes/k3s/k8s. Unless you want to learn it for work purposes, it’s such overkill that you can spend a month before you get things running. I know from experience. My current setup gives you options and has been reliable for me.

    NAS Box: TrueNAS Scale - You can have Unraid fill this role.

    Services Hosting: Proxmox - I can spin up any VMs I need, and there’s lots of info online for things like hardware passthrough to VMs.

    Containers: Debian VM - Debian makes a great server environment as it’s stable and well supported. I just make this VM a docker swarm host. I manage things with Portainer as a web interface.

    I keep data on the NAS and have containers access it over the network, usually an NFS share.
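
    Roughly what that looks like as an NFS-backed named volume in a compose/stack file, if it helps (the NAS address, export path, and service are just examples):

    ```yaml
    # Named volume backed by an NFS export on the NAS (address and path are illustrative)
    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=192.168.1.10,nfsvers=4,rw"
          device: ":/mnt/tank/media"

    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        volumes:
          - media:/media
    ```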

    • nopersonalspace@lemmy.worldOP

      How do you manage your services on that, with docker compose files? I’m really trying to get away from the workflow of clicking around in some UI to configure everything, only for it to glitch out and disappear, leaving me to try to remember what to click to get it back. That was my main problem with Portainer, and what caused me to move away from it (I have separate issues with docker-compose, but that’s another matter).

      • hi_its_me@lemmy.world

        I have a similar setup to the above. Personally I use Docker Compose and back up my compose files to the NAS.

      • khorak@lemmy.dbzer0.com

        I personally stepped away from compose. You mentioned that you want a more declarative setup; give Ansible a try. It is primarily for config management, but you can easily deploy containerized apps and correlate configs, hosts, etc.

        I usually write roles for some more specialized setups like my HTTP reverse proxy, the arrs, etc. Then I keep everything in my inventory and var files. I’m really happy with it, and I really can tear things down and rebuild quickly. One thing to point out is that the compose module for Ansible is basically unusable, so I use the docker container module instead. It works well so far and keeps my containers running without restarting them unnecessarily.
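
        For a rough idea, a task with the docker container module (community.docker.docker_container) looks something like this; the name, image, and ports are placeholders:

        ```yaml
        # Illustrative Ansible task: run a container directly, no compose file involved.
        - name: Run the whoami container
          community.docker.docker_container:
            name: whoami
            image: traefik/whoami:latest
            state: started
            restart_policy: unless-stopped
            published_ports:
              - "8080:80"
        ```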

  • FooBarrington@lemmy.world

    I am happy with my simple docker-compose setup - one root folder with one subfolder per project containing the compose file and any configuration mounted into the container. Traefik automatically exposes all services I want under a well-known URL using a single line in each compose file. Watchtower updates the containers.
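
    For reference, the “single line” is just a router rule label on each service. A minimal sketch, assuming Traefik’s Docker provider is enabled; the hostname and network name are placeholders:

    ```yaml
    # docker-compose.yml in one project subfolder; Traefik picks up the router rule from the label.
    services:
      whoami:
        image: traefik/whoami:latest
        labels:
          - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
        networks:
          - proxy

    networks:
      proxy:
        external: true
    ```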

    This has been running stable for over two years with probably 2-3 reboots in between. If my current NUC ever breaks I’ll set it up again using Podman instead of Docker, but aside from that I couldn’t be happier!

    • nopersonalspace@lemmy.worldOP

      This seems like a sensible choice, but it would be a bit messy for multi-node, which is the direction I’m heading in.

  • superpants@lemmy.world

    A plug from the pro-Kubernetes crowd:

    I run microk8s on a 3-node cluster, using FluxCD to deploy and manage my services. I also work with Kubernetes at work, so I’m very familiar with the concepts. But I will never use anything else.

    If you want maximum control and flexibility, learn Kubernetes. For a lot of people (myself included) it’s overkill, but IMO it’s the best.

    My main gripe with docker-compose, which is what I used to use, is that service changes require access to the machine. I have to run commands on the host to alter services. With Kubernetes, and more precisely a GitOps model, you can just make a commit to a git repo and it will roll out.
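
    For anyone curious what the FluxCD side looks like, it boils down to two small resources pointing at your repo; after that, a git push is the whole deployment workflow. A sketch, with the repo URL, branch, and path as placeholders (exact API versions depend on your Flux release):

    ```yaml
    # Flux watches the repo and applies whatever manifests live under the given path.
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: homelab
      namespace: flux-system
    spec:
      interval: 5m
      url: https://github.com/example/homelab
      ref:
        branch: main
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: services
      namespace: flux-system
    spec:
      interval: 10m
      path: ./clusters/home
      prune: true
      sourceRef:
        kind: GitRepository
        name: homelab
    ```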

    • Lem453@lemmy.ca

      For your last point, Portainer fixes that. I use Portainer to pull compose files from my Gitea instance. There is an option to auto-update on git commit, but I prefer to press the button to update.

      I write the compose files in VS Code and push them to my repo.

    • atzanteol@sh.itjust.works

      FWIW I manage docker compose files with Ansible. It lets me manage them centrally without needing to log into multiple VMs. I also create a systemd service file to start/stop the containers (also managed with Ansible).
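
      A rough sketch of that pattern, in case it helps; the paths, service name, and unit contents are just illustrative:

      ```yaml
      # Illustrative Ansible tasks: push a compose file, then wrap it in a systemd unit.
      - name: Deploy the compose file
        ansible.builtin.copy:
          src: files/myapp/docker-compose.yml
          dest: /opt/myapp/docker-compose.yml

      - name: Install a systemd unit that starts/stops the stack
        ansible.builtin.copy:
          dest: /etc/systemd/system/myapp.service
          content: |
            [Unit]
            Description=myapp compose stack
            Requires=docker.service
            After=docker.service

            [Service]
            Type=oneshot
            RemainAfterExit=true
            WorkingDirectory=/opt/myapp
            ExecStart=/usr/bin/docker compose up -d
            ExecStop=/usr/bin/docker compose down

            [Install]
            WantedBy=multi-user.target

      - name: Enable and start it
        ansible.builtin.systemd:
          name: myapp.service
          enabled: true
          state: started
          daemon_reload: true
      ```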

      That said, I’m starting to switch over to k8s as well (also with microk8s, which has been the easiest to work with). Definitely overkill, but I want to learn it.

    • nopersonalspace@lemmy.worldOP

      Yes, very true. I really would much prefer GitOps, as I feel… uneasy about how hand-wired and ephemeral my current setup is, and would love it to be more declarative and idempotent. It does seem like Kubernetes is the way to do that.

  • vegetaaaaaaa@lemmy.world

    Podman pods + systemd units to manage the pods’ lifecycle. Ansible to deploy the base OS requirements, the ancillary services (SSH, backups, monitoring…), and the pods/containers/services themselves.
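
    As a sketch of what one of those units can look like when Ansible deploys it (this assumes a Podman recent enough for Quadlet, roughly 4.4+, and shows a single container rather than a full pod; the name, image, and port are placeholders):

    ```yaml
    # Illustrative: install a Quadlet .container unit and let systemd manage its lifecycle.
    - name: Install the whoami container unit
      ansible.builtin.copy:
        dest: /etc/containers/systemd/whoami.container
        content: |
          [Unit]
          Description=whoami test container

          [Container]
          Image=docker.io/traefik/whoami:latest
          PublishPort=8080:80

          [Install]
          WantedBy=multi-user.target

    - name: Start the generated service
      ansible.builtin.systemd:
        name: whoami.service
        state: started
        daemon_reload: true
    ```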

  • CubitOom@infosec.pub

    You should try out all the options you listed and the other recommendations and find what works best for you.

    I personally use Kubernetes. It can be overwhelming, but if you’re willing to learn some new jargon, try a managed Kubernetes cluster like AKS or DigitalOcean Kubernetes. I would avoid managing a Kubernetes cluster yourself.

    Kubernetes gets a lot of flak for being overly complicated, but what that statement overlooks is all the things that Kubernetes does for you.

    If you can spin up Kubernetes with cert-manager, external-dns, and an ingress controller like Istio, then you’ve got a whole automated data center for your docker containers.
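
    To make that concrete: a single Ingress can drive all three, since external-dns publishes the DNS record for the host and cert-manager issues the certificate named in the annotation. A rough sketch with a plain Ingress (with Istio you’d use a Gateway/VirtualService instead); the hostname, class, and issuer are placeholders:

    ```yaml
    # Illustrative Ingress: external-dns creates the DNS record for the host,
    # cert-manager issues the TLS certificate referenced below.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: whoami
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - whoami.example.com
          secretName: whoami-tls
      rules:
        - host: whoami.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: whoami
                    port:
                      number: 80
    ```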

    • nopersonalspace@lemmy.worldOP

      Thanks. Yeah, I’m tempted to try Kubernetes because of what you mentioned. I really like that every part I need (ingress controller, certs, etc.) is considered part of the core service and is built in. Right now I have to run that stuff as its own service and wire everything up by hand. I don’t think I mind the extra overhead of Kubernetes either; I love to tinker with that sort of thing anyway!

      I think I will try a couple of things though. Maybe find a set of services to deploy with each and compare the experiences.

      • CubitOom@infosec.pub

        Well, the Kubernetes API has most of the necessary parts built in, although sometimes you may want to install a custom resource, which often comes with more complex service installs.

        But I think the biggest strength of Kubernetes is all the FOSS projects that are available for it, specifically external-dns, cert-manager, and Istio. These are separate projects and will have to be installed after the cluster is up.

        You can also look at the Cloud Native Computing Foundation’s list of projects. It’s a good list of things that work well.

        Caution: not all cloud providers support Istio. I know that Google’s GKE doesn’t; they make you use their own fork of it.

        I would also recommend you avoid Helm if possible, as it obfuscates what the cluster is doing and might make learning harder. Try to just stick to using kubectl.

        I have heard good things about Nomad too, but I have yet to try it.

  • grehund@lemmy.world

    Proxmox. Currently considering upgrading from a single node to a 3-node cluster for Ceph.

  • forwardvoid@feddit.nl

    Portainer + Caddy + Watchtower: this will give you the benefits of containers without the complexity of Kubernetes. As someone who works with Kubernetes professionally, I agree with what other people have said here: “only run it if you want to learn it for professional use”.

    Portainer is a friendly UI for running containers. It supports docker compose as well. It helps with observability and ops.
    Caddy is an easy proxy with automatic Let’s Encrypt support.
    Watchtower will update and restart your containers if there’s an update.
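
    A minimal sketch of that stack as a compose file, if it helps; image tags, ports, and paths are just examples:

    ```yaml
    # Illustrative stack: Portainer for the UI, Caddy for TLS/proxying, Watchtower for updates.
    services:
      portainer:
        image: portainer/portainer-ce:latest
        ports:
          - "9443:9443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - portainer_data:/data

      caddy:
        image: caddy:2
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile
          - caddy_data:/data

      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock

    volumes:
      portainer_data:
      caddy_data:
    ```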

  • RedFox@infosec.pub

    I really enjoy these types of conversations; I learn a lot.

    Since you’ve gotten lots of good advice on container managers, I’ll encourage your desire for IaC/DevOps config management (CM), etc.

    I believe all the leading CM choices support what you’re wanting to do. I can’t guide you on which one to choose, but just browse through the options or functions your favorite offers for the Kx container solution you go with.

    I use Salt because of Security Onion, an open-source IDS. I have all my *nix systems babysat by Salt, and can have a new x-arr media server, NGINX, blog, etc. running in the time it takes to deploy the template (I use vSphere) and for Salt to apply the desired state. Back up and restore a mount folder, no problem. IaC is only limited by your imagination. I also have Salt specifying all the containers I have running, defining the config files, etc. Basically a poor man’s/simpleton’s Kubernetes.
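
    To give a flavor of what those states look like (the state IDs, image, and paths here are made up):

    ```yaml
    # Illustrative Salt state: manage a config file and keep a container running from it.
    nginx_config:
      file.managed:
        - name: /srv/nginx/nginx.conf
        - source: salt://nginx/files/nginx.conf

    nginx_container:
      docker_container.running:
        - image: nginx:1.25
        - port_bindings:
            - "8080:80"
        - binds:
            - /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
        - require:
            - file: nginx_config
    ```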

    I suspect you already know this, but if there isn’t a module that directly does what you want, like running SQL-specific functions, you can just have it run CLI scripts on the host, or in the container, for you.

    I am in the process of moving my IaC code from the manager’s file system to GitLab. I imagine you’d do this from jump street. Have fun.

  • Samsy@lemmy.ml

    I used to just organize my docker-compose containers without any frontend, but then I discovered CasaOS, which makes things pretty simple. An app store and an SMB-shared file manager gave me a really good workflow. Things that aren’t in the app store can be handled outside of Casa, too.

    PS: never make the mistake of integrating the externally handled containers into Casa; that messes things up.

    • nopersonalspace@lemmy.worldOP

      Thanks, yeah I’ve heard good things about CasaOS. I think I’m trying to move in the other direction though: fewer UIs and more CLIs + configuration files.

  • corsicanguppy@lemmy.ca

    First, hire a team of energetic full-time container bros. Half of them will help architect your setup, and the other half will focus entirely on supporting the container cult.