I use nftables for my firewall and typically write the rules by hand. Recently I happened to dump the ruleset and, much to my surprise, my config was gone, replaced by an enormous number of extremely cryptic firewall rules. After a quick examination, I found that it was Docker that had modified them. Some brief research turned up a number of open issues, just like this one, of people complaining about this behaviour. I think it's an enormous security risk to have Docker silently do this by default.

I have heard that Podman doesn’t suffer from this issue, as it is daemonless. If that is true, I will certainly be switching from Docker to Podman.

  • Molecular0079@lemmy.world · 6 months ago

    If you use firewalld, both docker and podman apply rules in a special zone separate from your main one.
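
    For reference, you can see that with firewalld's own tooling (a sketch; the zone name docker is what recent Docker releases register, adjust if yours differs):

    ```shell
    # Show which zones are active and which interfaces they cover;
    # Docker 20.10+ puts its bridge interfaces into a "docker" zone.
    firewall-cmd --get-active-zones

    # Inspect exactly what that zone permits:
    firewall-cmd --zone=docker --list-all
    ```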

    That being said, podman is great. Podman in rootful mode, along with podman-docker and docker-compose, is basically a drop-in replacement for Docker.

    • Dandroid@sh.itjust.works · 6 months ago

      I’m a podman user, but what’s the point of using podman if you are going to use a daemon and run it as root? I like podman so I can specifically avoid those things.

      • Molecular0079@lemmy.world · 6 months ago

        I am using it as a migration tool, tbh. I am trying to get to rootless, but some of the stuff I host just doesn't work well in rootless yet, so I use rootful for those containers. Meanwhile, I use rootless for dev purposes or when testing out new services that I am unsure about.

        Podman also has good integration into Cockpit, which is nice for monitoring purposes.

  • zeluko@kbin.social · 6 months ago

    Yeah, it needs those rules for e.g. port-forwarding into the containers.
    But it doesn't really 'nuke' existing ones.

    I have simply placed my rules at a higher priority than normal. That's very simple in nftables, and it's good not to have nftables and iptables rules mixed in unexpected ways.
    You should filter as early as possible anyway, to reduce resource usage on e.g. connection tracking.
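
    If it helps anyone, the idea above might look like this in /etc/nftables.conf (a minimal sketch; the table name is invented, and the negative priority assumes Docker's iptables-nft chains sit at the default priority 0):

    ```
    table inet early_filter {
        chain input {
            # chains with a lower (more negative) priority on the same
            # hook are evaluated first, i.e. before Docker's rules
            type filter hook input priority -10; policy accept;
            ct state invalid drop
            iifname != "lo" tcp dport 5432 drop comment "keep the DB private"
        }
    }
    ```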

    • Kalcifer@sh.itjust.worksOP · 6 months ago

      But it doesn't really 'nuke' existing ones.

      How come I don’t see my previous rules when I dump the ruleset, then? I have my rules written in /etc/nftables.conf, and they were previously applied by running # nft -f /etc/nftables.conf. Now, when I dump the current ruleset with # nft list ruleset, those previous rules aren’t there — all I see are Docker’s rules.

  • Auli@lemmy.ca · 6 months ago

    It doesn't nuke your rules. It just adds to them.

    • Kalcifer@sh.itjust.worksOP · 6 months ago

      How come I don’t see my previous rules when I dump the ruleset, then? I have my rules written in /etc/nftables.conf, and they were previously applied by running # nft -f /etc/nftables.conf. Now, when I dump the current ruleset with # nft list ruleset, those previous rules aren’t there — all I see are Docker’s rules.

      • gorgori@lemmy.world · 6 months ago

        You can use a bridge network or the host network.

        On a bridge network, the container sits behind NAT, like a host on its own network, with its own firewall settings.

        In host network mode, it will just open the ports it needs directly on the host.
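
        As a sketch of the difference (the image and ports are just examples):

        ```shell
        # Bridge (default): the container is behind NAT, and only the
        # ports you publish are forwarded to the host by Docker's rules.
        docker run -d --name web-bridge -p 8080:80 nginx

        # Host: the container shares the host network stack and binds
        # host ports directly; no -p, and no NAT rules are created.
        docker run -d --name web-host --network host nginx
        ```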

        • Kalcifer@sh.itjust.worksOP · 6 months ago

          I could be misunderstanding your comment, but you don’t seem to have answered my question of why I don’t see my rules anymore.

  • JustEnoughDucks@feddit.nl · 6 months ago

    This is standard, but often unwanted, behavior of docker.

    Docker creates a bunch of chain rules but, IIRC, doesn't modify the actual incoming rules (at least it doesn't for me); it just makes a chain rule for every internal Docker network item so that all of the services can contact each other.

    Yes, it is a security risk, but if you don't have all ports forwarded, someone would still have to breach your internal network first, IIRC, so you would have many, many more problems than Docker.

    I think, from the devs' point of view (not that it is right or wrong), this is intended behavior, simply because if Docker didn't do this, they would get 1,000 issues opened per day from people saying containers don't work when they forgot to add a firewall rule for a new container.

    An option to disable this behavior would be 100x better than the current situation, but what do I know lol
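
    There is at least a documented middle ground: Docker creates a DOCKER-USER chain that it never flushes, and rules placed there are evaluated before Docker's own forwarding rules. A sketch (the interface and subnet are examples for a typical LAN):

    ```shell
    # Rules in DOCKER-USER survive daemon restarts and run before
    # Docker's own rules. -I inserts at the top of the chain, so the
    # ACCEPT added second ends up evaluated first.
    iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
    iptables -I DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    ```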

    • justJanne@startrek.website · 6 months ago

      That assumes you’re on some VPS with a hardware firewall in front.

      Often enough you’re on a dedicated server that’s directly exposed to the internet, with those iptables rules being the only thing standing between your services and the internet.

      • lemmyvore@feddit.nl · 6 months ago

        What difference does it make if you open the ports yourself for the services you expose, or Docker does it for you? That's all Docker is meant to do: act as a convenience so you don't have to add/remove rules as containers go up/down, or remember Docker interfaces.

        If by any chance you are making services listen on 0.0.0.0 and covering them up with a firewall, that's very bad practice.
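
        The same principle applies to published container ports: you can bind the publish to loopback instead of the default 0.0.0.0 (sketch):

        ```shell
        # Reachable from anywhere the host is reachable:
        docker run -d -p 8080:80 nginx

        # Reachable only from the host itself, e.g. behind a reverse proxy:
        docker run -d -p 127.0.0.1:8080:80 nginx
        ```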

          • lemmyvore@feddit.nl · 6 months ago

            I’m fairly sure you can find an alternative to whatever problem you’re having.

            • justJanne@startrek.website · 6 months ago

              You need to be able to have multiple nodes in one LAN access ports on each others’ containers without exposing those to the world and without using additional firewalls in front of the nodes.

              That's partly why Kubernetes ended up removing Docker (dockershim) support and instead recommends CRI runtimes such as containerd or CRI-O.

    • moonpiedumplings@programming.dev · 6 months ago

      Yes, it is a security risk, but if you don't have all ports forwarded, someone would still have to breach your internal network first, IIRC, so you would have many, many more problems than Docker.

      I think, from the devs' point of view (not that it is right or wrong), this is intended behavior, simply because if Docker didn't do this, they would get 1,000 issues opened per day from people saying containers don't work when they forgot to add a firewall rule for a new container.

      My problem with this is that, when running a public-facing server, it ends up with people exposing containers that really, really shouldn't be exposed.

      Excerpt from another comment of mine:

      It’s only docker where you have to deal with something like this:

      ---
      services:
        webtop:
          image: lscr.io/linuxserver/webtop:latest
          container_name: webtop
          security_opt:
            - seccomp:unconfined #optional
          environment:
            - PUID=1000
            - PGID=1000
            - TZ=Etc/UTC
            - SUBFOLDER=/ #optional
            - TITLE=Webtop #optional
          volumes:
            - /path/to/data:/config
            - /var/run/docker.sock:/var/run/docker.sock #optional
          ports:
            - 3000:3000
            - 3001:3001
          restart: unless-stopped
      

      Originally from here, edited for brevity.

      The result is exposed services. Feel free to look at Shodan or ZoomEye, search engines for internet-connected devices, for exposed instances of this service. This service is highly dangerous to expose, as it gives people an in to your system via the Docker socket.

      • wreckedcarzz@lemmy.world · 6 months ago

        So uh, I just spun up a VPS a couple days ago, few Docker containers, usual security best practices… I used ufw to block everything and open only SSH and a couple of other ports, as that's all I've been told I need to do. Should I be panicking about my containers fucking with the firewall?

        • moonpiedumplings@programming.dev · 6 months ago

          Probably not an issue, but you should check. If the opened port is listed as something like 127.0.0.1:portnumber, then it's only bound to localhost, and only that local machine can access it. If no bind address is specified, it listens on all interfaces, and anyone who can reach the server can access that service.

          An easy way to see running containers is docker ps, which also shows the forwarded ports.

          Alternatively, you can use the nmap tool to scan your own server for exposed ports. nmap -A serverip does the slowest but most in-depth scan.
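
          A quicker local check is listing the listening sockets directly on the host (ss ships with iproute2):

          ```shell
          # Local addresses like 0.0.0.0:3000 or [::]:3000 are bound on
          # all interfaces; 127.0.0.1:3000 is loopback-only.
          ss -tlnp
          ```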

          • wreckedcarzz@lemmy.world · 6 months ago

            Just waking up: I've been running Docker on my NAS for a few years now and was never made aware of this. The NAS ports appear safe, but the VPS was not, so I put 127.0.0.1 in front of the port number (so it's now 127.0.0.1:8080:80 or what have you), and that appears to resolve it. I have nginx running, so of course that's how I want to expose a couple of things, not everything via raw ports.

            My understanding was that port:port was just for redirecting from container to host, and that you'd still need to do network firewall management yourself to allow stuff through; that appears to be the case on my home network, so I never had reason to question it. Thanks, I learned something today :)

            Might do the same to my NAS containers, too, just to be safe. I'm using those containers as a testbed for the VPS containers, so I don't want to forget…

        • Droolio@feddit.uk · 6 months ago

          Actually, ufw has its own separate issue you may need to deal with. (Or bind ports to localhost/127.0.0.1 as others have stated.)

    • N0x0n@lemmy.ml · 6 months ago

      An option to disable this behavior would be 100x better than the current situation, but what do I know lol

      Prevent docker from manipulating iptables

      I don't know what it's actually doing (I'm just learning how to work with nftables), but I saved that link in case one day I want to manage the iptables rules myself :)
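
      If I'm reading that page right (untested by me), it comes down to one setting in /etc/docker/daemon.json, after which you have to manage all the container forwarding/NAT rules yourself:

      ```json
      {
        "iptables": false
      }
      ```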

      • Auli@lemmy.ca · 6 months ago

        Good luck. You're going to have to change the rules whenever the IP address of the container changes.

        • N0x0n@lemmy.ml · 6 months ago

          If you are talking about the IP address, then just assign a static address, no? I do it anyway in my docker compose:

          ...
              networks:
                traefik.net:
                  ipv4_address: 10.10.10.99
          
          networks:
              traefik.net:
                name: traefik-net
                external: true
          

          I’m not an expert so maybe I’m wrong, if so do not hesitate to correct me !

          EDIT: If the IP address doesn't change, you shouldn't need to change the routing and iptables/nftables rules, right?

    • Kalcifer@sh.itjust.worksOP · 6 months ago

      IIRC, doesn't modify the actual incoming rules (at least it doesn't for me)

      How come I don’t see my previous rules when I dump the ruleset, then? I have my rules written in /etc/nftables.conf, and they were previously applied by running # nft -f /etc/nftables.conf. Now, when I dump the current ruleset with # nft list ruleset, those previous rules aren’t there — all I see are Docker’s rules.

  • BearOfaTime@lemm.ee · 6 months ago

    Wow, thanks for the heads up.

    Looks like it affects dockerd, but not Docker Desktop.

    Any idea about the Docker implementation in Proxmox or TrueNAS? (TrueNAS does containers, if I remember right?)