Not containers and data, but the images themselves. The point would be reproducibility in case a remote registry no longer contains a certain image. Do you do that, and how?

  • Admiral Patrick@dubvee.org · 3 days ago

    Yep. I’ve got a bunch of apps that work offline, so I back up the currently deployed version of the image in case of hardware or other failure that requires me to re-deploy it. I also have quite a few custom-built images that take a while to build, so having a backup of the built image is convenient.

    I structure my Docker-based apps into dedicated folders with all of their config and data directories inside a main container directory so everything is kept together. I also make an images directory which holds backup dumps of the images for the stack.

    • Backup: docker save {image}:{tag} | gzip -9 > ./images/{image}-{tag}-{arch}.tar.gz
    • Restore: docker load < ./images/{image}-{tag}-{arch}.tar.gz

    It will back up/restore the image with the name and tag used during the save step. The load step accepts a gzipped tar, so you don’t even need to decompress it first. My older dumps don’t have the architecture in the filename, but I’ve started adding it now that I have a mix of amd64 and arm64.
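
    If a stack has several images, a small helper loop handles the dump step. Here’s a sketch (the IMAGES list and the ./images path are placeholders for your own stack):

        #!/bin/sh
        # Dump each deployed image as a gzipped tar named {image}-{tag}-{arch}.tar.gz.
        # IMAGES is a placeholder list; fill in what your stack actually runs.
        IMAGES="nginx:1.27 redis:7"
        ARCH=$(docker version --format '{{.Server.Arch}}')
        mkdir -p ./images
        for ref in $IMAGES; do
            image=${ref%%:*}    # name before the colon
            tag=${ref##*:}      # tag after the colon
            docker save "$ref" | gzip -9 > "./images/${image}-${tag}-${ARCH}.tar.gz"
        done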

    • 4am@lemmy.zip · 3 days ago

      Ah, cool. Seems like way less work than hosting a local registry. But would I have to docker load every single one when needed, e.g. if I had to rebuild the Docker host? Is it faster to just front-load that work and build a local registry to pull through?

      • Admiral Patrick@dubvee.org · 3 days ago (edited)

        I also run (well, ran) a local registry. It ended up being more trouble than it was worth.

        Would you have to docker load them all when rebuilding a host?

        Only if you want to ensure you bring the replacement stack back up with the exact same version of everything, or need to bring it up while you’re offline. I’m bad about using the :latest tag, so this is my way of version-controlling. I’ve had things break (cough Authelia cough) when I moved a service to another server and it pulled a newer image that had breaking config changes.

        For me, it’s about having everything I need on hand in order to quickly move a service or restore it from a backup. It also depends on what your needs are and the challenges you are trying to overcome. i.e. When I started doing this style of deployment, I had slow, unreliable, ad heavily data-capped internet. Even if my connection was up, pulling a bunch of images was time consuming and ate away at my measly satellite internet data cap. Having the ability to rebuild stuff offline was a hard requirement when I started doing things this way. That’s now no longer a limitation, but I like the way this works so have stuck with it.

        Everything a service (or stack of services) needs lives in my deploy directory, which looks like this:

        /apps/{app_name}/
            docker-compose.yml
            .env
            build/
                Dockerfile
                {build assets}
            data/
                {app_name}
                {app2_name}  # If there are multiple applications in the stack
                ...
            conf/                   # If separate from the app data
                {app_name}
                {app2_name}
                ...
            images/
                {app_name}-{tag}-{arch}.tar.gz
                {app2_name}-{tag}-{arch}.tar.gz
        

        When I run backups, I tar.gz the whole base {app_name} folder, which includes the deploy file, data, config, and dumps of its services’ images, and pipe that over SSH to my backup server (rsync also works for this). The only ones I do differently are stacks with in-stack databases that need a consistent snapshot.
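
        That backup step, as a minimal sketch (host and paths are placeholders):

            # Archive the whole app directory (compose file, data, conf, image dumps)
            # and stream it over SSH to the backup server.
            tar -C /apps -czf - {app_name} | ssh backup-server "cat > /backups/{app_name}-$(date +%F).tar.gz"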

        When I pull new images to update the stack, I move the old image dumps aside and docker save the now-current ones. The old images get deleted once the update is considered successful (usually within 3-5 days).
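
        The update cycle, roughly (names and tags are placeholders):

            # Set the old dump aside until the update has proven itself.
            mv ./images/{app_name}-{old_tag}-{arch}.tar.gz ./images/old/
            docker compose pull && docker compose up -d
            # Dump the now-current image; delete ./images/old/ after a few days.
            docker save {app_name}:{new_tag} | gzip -9 > ./images/{app_name}-{new_tag}-{arch}.tar.gz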

        A local registry would work, but you would have to re-tag all of the pre-made images under your registry’s name (e.g. docker tag library/nginx docker.example.com/nginx) in order to push them to it. That makes updates more involved and was a frequent cause of me running 2+ year old versions of some images.
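
        Per image, that chore looks like this (using the example registry name from above):

            # Pull upstream, re-tag under the private registry's name, then push.
            docker pull nginx:latest
            docker tag nginx:latest docker.example.com/nginx:latest
            docker push docker.example.com/nginx:latest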

        Plus, you’d need the registry server and any infrastructure it needs such as DNS, file server, reverse proxy, etc before you could bootstrap anything else. Or if you’re deploying your stack to a different environment outside your own, then your registry server might not be available.

        Bottom line: I’m a big fan of using Docker to make my complex stacks easy to port around, back up, and restore. There are many ways to do that, but this is what works best for me.

  • HelloRoot@lemy.lol · 3 days ago

    I self-host a pull-through Docker registry, so every image I use is cached there and can be pulled while offline.
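
    A minimal sketch of that kind of setup, using the stock registry:2 image as a pull-through cache for Docker Hub (port and paths are placeholders):

        # Run the official registry image as a pull-through cache of Docker Hub,
        # persisting cached layers to /srv/registry.
        docker run -d --name mirror --restart=always -p 5000:5000 \
          -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
          -v /srv/registry:/var/lib/registry \
          registry:2

        # Then point the Docker daemon at it in /etc/docker/daemon.json and restart:
        #   { "registry-mirrors": ["http://localhost:5000"] }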

  • wersooth@lemmy.ml · 3 days ago

    If it’s a custom image, e.g. an existing image with additional stuff I added, then I back up the Dockerfile. You can rebuild the image anytime, and it’s much smaller than the binary image.
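
    The kind of Dockerfile worth keeping, as a hypothetical example (base image and packages are made up):

        # A few hundred bytes of Dockerfile instead of a multi-hundred-MB image dump.
        FROM nginx:1.27
        RUN apt-get update && apt-get install -y --no-install-recommends curl \
            && rm -rf /var/lib/apt/lists/*
        COPY nginx.conf /etc/nginx/nginx.conf

    Rebuilding is then just a docker build -t my-nginx . away, as long as the base image and packages are still fetchable.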

    • brvslvrnst@lemmy.ml · 3 days ago

      Came here to say that: the most economical solution is to back up the Dockerfiles themselves, though with the caveat that any installs within the build steps may depend on external resources that could also be dropped.

      There are ways to add a local caching layer between your system and the external registries, which might be what OP is after, but that means investing in the additional storage needed to hold the cached images.

  • just_another_person@lemmy.world · 3 days ago

    I mean… you have the container image right there on your machine. If you’re concerned, just run your own registry and push copies there when needed. This is, of course, all unnecessary, as you only need the Dockerfile to build a clean image from scratch, and the build will obviously work if the image has already been published.

    • 4am@lemmy.zip · 3 days ago

      As long as the internet works and the image is still available.

      Which is kind of the whole point, homie.

      • just_another_person@lemmy.world · 3 days ago (edited)

        Again, it’s nearly impossible to scrub the entire internet of the code to just run a single command and build a docker image of whatever you were running yourself. If you have the image locally, you can just push it anywhere you want.

        Keep your Dockerfiles, keep checkouts of what you’re running, or push images locally. All very simple.

  • HumanPerson@sh.itjust.works · 3 days ago

    I used to, but then I switched out the server I was backing up to and have been thinking “I’ll get to it later” for many months. If anything goes wrong, I’m screwed. I’ll get to it later ¯\_(ツ)_/¯

  • panda_abyss@lemmy.ca · 3 days ago

    I do run my own Forgejo container registry, and I mirror the containers I need there.

    But I don’t back up my containers, just the data directories I mount into them.

  • r0ertel@lemmy.world · 2 days ago

    I’ve been looking to do this, but haven’t found a good, easy to use pull thru proxy for docker, ghcr.io and some other registries. Most support docker only.

    This one looks promising but overly complicated to set up.

    A few times now, I’ve gone to restart a container and the repo’s been moved, archived or paywalled. Other times, I’m running a few versions behind and the maintainer decided to not support it, but upgrading would mean a complete overhaul of my Helm values file. Ugh!

    I was considering running a Docker registry instance on a separate port for each upstream registry I’d like to proxy/cache.
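
    As a sketch, that could be one registry:2 container per upstream, each on its own port (ports and cache paths are placeholders):

        # One pull-through cache per upstream, since registry:2 proxies a single remote.
        docker run -d --name mirror-hub -p 5000:5000 \
          -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
          -v /srv/mirror/hub:/var/lib/registry registry:2

        docker run -d --name mirror-ghcr -p 5001:5000 \
          -e REGISTRY_PROXY_REMOTEURL=https://ghcr.io \
          -v /srv/mirror/ghcr:/var/lib/registry registry:2

        # Pull through a specific mirror explicitly, e.g.:
        #   docker pull localhost:5001/{owner}/{image}:{tag}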

  • GreenKnight23@lemmy.world · 3 days ago

    Yes. All of the images I use are cached and stored in my locally hosted GitLab registry.

    I think I’ve got around 120-140 images. a lot of what I have is just in case of an emergency.

    I’ve always imagined I could build and run technological infrastructure after a social collapse or something, so I have a lot of images that could be a good basis to start with. Most OS images, popular DB images, etc. it would probably never work, but I’d rather have the option than not.