I have two machines running Docker: A (powerful, at home) and B (a tiny VPS).

All my services are hosted at home on machine A, and all DNS records point to A. I want to point them to B instead and implement split-horizon DNS in my local network so I can still access A directly. Ideally, A would no longer be reachable from outside without going through B.

How can I forward requests on machine B to A over a tunnel like WireGuard without losing the source IP addresses?

I tried to get this working by creating two WireGuard containers. I think I only need iptables rules on WireGuard container A, but I am not sure. I am a bit confused about the iptables rules needed to get WireGuard to properly forward requests through the tunnel.

What are your solutions for such a setup? Is there a better way to do this? I would also be glad for some keywords/existing solutions.

Additional info:

  • Ideally I would like to stay within Docker.
  • Split-horizon DNS is no problem.
  • I have static IPv6 and IPv4 addresses on both machines.
  • I also have spare IPv6 subnets that I can use for intermediate routing.
  • I would like to avoid Cloudflare.
    • raldone01@lemmy.world (OP)

      I have heard of it; it seems like a good option. If you use it, please tell me whether it can fulfill my requirements.

      Hmm, I didn’t know Headscale existed. Tailscale being proprietary was the main thing keeping me from using it.

      • RiderExMachina@lemmy.ml

        I haven’t used Tailscale myself, but it seems like it’s basically just a WireGuard frontend.

        • Mr. Forager@lemmy.world

          Although correct, their feature set is amazing and expanding. Tailscale is my number-one tool of choice these days; it’s so simple and so handy.

          • RiderExMachina@lemmy.ml

            “Technically correct” is the best form of correct. Having tried setting up WireGuard in the past, though, I’d say a dead-simple solution like Tailscale might be worth trying out, especially with the 100-device free tier.

  • z3bra@lemmy.sdf.org

    Keeping the source IP intact means you’ll have trouble routing the traffic back through host B.

    Basically, host A won’t be able to access the internet without going through B, which might not be what you want.

    Here’s how it works:

    On host A:

    • add a /32 route to host B’s public IP through your local ISP gateway (e.g. 192.168.1.1)
    • set up a WireGuard tunnel between A and B:
      • host A: 172.17.0.1/30
      • host B: 172.17.0.2/30
    • add a default route via host B’s WireGuard IP (see the sketch below)
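
    A minimal sketch of host A’s side, assuming wg0 as the interface name and 203.0.113.10 as host B’s public IP (keys and addresses are placeholders):

    # /etc/wireguard/wg0.conf on host A
    [Interface]
    Address = 172.17.0.1/30
    PrivateKey = <host-A-private-key>
    Table = off                      # routes are managed by hand below

    [Peer]
    PublicKey = <host-B-public-key>
    Endpoint = 203.0.113.10:51820    # host B's public IP (placeholder)
    AllowedIPs = 0.0.0.0/0           # accept any address through the tunnel
    PersistentKeepalive = 25

    And the two routes from the list above:

    # pin host B's public IP to the ISP gateway so the tunnel doesn't route over itself
    ip route add 203.0.113.10/32 via 192.168.1.1
    # send everything else through the tunnel to host B
    ip route replace default via 172.17.0.2 dev wg0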

    On host B:

    • set up WireGuard (same tunnel config, mirrored)
    • add PAT rules to the firewall to DNAT incoming requests on the ports you need to 172.17.0.1
    • add an SNAT masquerade rule so all outbound requests from 172.17.0.1 are NATed behind host B’s public address (roughly as sketched below)
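
    In iptables terms, that could look roughly like this (eth0 as host B’s WAN interface and port 443 are placeholders):

    # enable forwarding on host B
    sysctl -w net.ipv4.ip_forward=1

    # DNAT incoming requests on a published port to host A through the tunnel
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.17.0.1

    # let the forwarded traffic pass in both directions
    iptables -A FORWARD -i eth0 -o wg0 -d 172.17.0.1 -j ACCEPT
    iptables -A FORWARD -i wg0 -o eth0 -s 172.17.0.1 -j ACCEPT

    # masquerade host A's own outbound traffic behind host B's public address
    iptables -t nat -A POSTROUTING -s 172.17.0.1 -o eth0 -j MASQUERADE

    Note that there is deliberately no masquerade on the wg0 side: that is what keeps the client’s source IP intact, and it is also why host A needs its default route pointed back through B.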

    This should do what you need. However, if I may comment, I’d say you should give up on carrying the source IP address down to host A. The setup I described is clunky and can fail in many ways, and I can see no benefit to it besides having “pretty logs” on host A. If you really need good logs, I’d suggest setting up a good reverse proxy on host B and forwarding its logs to a collector on host A.

  • 7Sea_Sailor@lemmy.dbzer0.com

    Allow me to cross-post my recent post about my own infrastructure, which implements pretty much exactly this: lemmy.dbzer0.com/post/13552101.

    At the homelab (A in your case), I have Tailscale running on the host and Caddy in Docker exposing port 8443 (though the exact port doesn’t matter). The external VPS (B in your case) runs docker-less Caddy and Tailscale (this probably also works with Caddy in Docker if you run it in network: host mode).

    The Caddy instance on the VPS takes in all web requests to my domain and reverse_proxies them to the Tailscale hostname of my homelab, port 8443. It does so with a wildcard entry (*.mydomain.com) and forwards everything; that way it also handles the wildcard TLS certificate for the domain. The Caddy instance on the homelab then checks for specific subdomains or paths and reverse_proxies the requests again to the targeted Docker container.
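
    As a sketch, the VPS-side Caddyfile could look like this (mydomain.com and the Tailscale hostname homelab are placeholders; the wildcard certificate additionally needs the ACME DNS challenge configured for your DNS provider, which is not shown):

    *.mydomain.com {
            # forward everything to the homelab Caddy over the tailnet
            reverse_proxy homelab:8443
    }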

    The original source IP is made available to your local Docker containers through the X-Forwarded-For header, which Caddy handles beautifully. Simply add this block at the top of your Caddyfile on server A:

    {
            servers {
                    trusted_proxies static 192.168.144.1/24 100.111.166.92
            }
    }
    

    replacing the first IP with the gateway of the Docker network, and the second IP with the “virtual” tailnet IP of the VPS (server B), since that is the machine the proxied requests arrive from. Your containers, if they’re written properly, should automatically read this value and display the real source IP in their logs.
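
    A quick way to verify (app.mydomain.com and the container name are placeholders): request a service from outside, then check which client IP its container logged.

    # from a machine outside both networks
    curl -s https://app.mydomain.com/

    # on the homelab, check the container's access log for the real client IP
    docker logs --tail 20 some-app-container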

    Let me know if you have any further questions.