In my home network, I’m currently hosting one public-facing service and a number of private services (each on its own subdomain resolved by my local DNS), all behind a reverse proxy acting as a “bouncer” that serves the public service on a subdomain via a port forward.
I am in the process of moving the network behind a hardware firewall and segmenting it, and I would like to move the reverse proxy into its own VLAN (DMZ). My initial plan was to host the reverse proxy + authentication service in a VM in the DMZ, with firewall allow rules permitting only port 80 from the DMZ to the services on my LAN and everything else blocked.
On closer look, this now seems like a single point of failure that could expose private services if something goes wrong with the reverse proxy. Alternatively, I could have a reverse proxy in the DMZ only for the public service and another reverse proxy on the LAN for internal services.
What is everyone doing in this situation? What are best practices? Thanks a bunch, as always!
If the proxy gets compromised it will have access to the services whether it’s in a DMZ or VLAN or whatever. I’m unclear on what scenario you are trying to prevent or mitigate.
If the proxy has a remote exploit and it’s publicly exposed you’re screwed anyway.
Put it behind an encrypted, authenticated tunnel if you’re worried about this, i.e. don’t expose it publicly at all. Or expose it and keep up with the security fixes.
Also not sure what you mean when you say your current proxy is a “bouncer” and that it “could” expose your services if something goes wrong. Isn’t that its job, to expose them? Is it doing any authentication right now?
Right, I agree that a proxy exploit means compromise either way. Thanks for your reply.
I am trying to prevent the case where internal services that I don’t otherwise have a need to lock down very thoroughly might get publicly exposed. I take it it’s an odd question?
Re “bouncer”: what I meant by it is exposing some services publicly and not others, distinguished by hostname via public DNS (service1.example.com) or internal DNS (service2.home.example.com). Hence my question about one proxy for internal and one for public, or one that does both.
It’s not an odd question; actually, it’s a very good question.
Many people don’t realize that “internal” services are just as exposed as “external” ones. That’s because a reverse proxy doesn’t care about domain name resolution; it receives the domain name in the Host HTTP header, and anybody can put anything in there. So as long as an attacker can guess your “private” naming scheme and put a correct domain name in their request, they can use your port forward to reach “private” services. All it takes is for that domain name to be defined in your reverse proxy.
In order to be safe you should add allow/deny rules to each proxy host so that only LAN IPs can access the private hosts (and also explicitly deny the internal IP of the router that’s doing the forward, in case it masquerades forwarded connections so they show up with its internal IP instead of the visitor’s remote IP).
Whether you run one proxy or two doesn’t help in any way; they just forward anything that’s given to them. If you want security you have to add IP allow/deny rules or some actual authentication.
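As a rough sketch of what that could look like in Nginx (assuming a 192.168.1.0/24 LAN with the router at 192.168.1.1 and a made-up backend address, adjust to your setup), a private proxy host would be something like:

```
# Hypothetical private service, reachable from LAN IPs only
server {
    listen 80;
    server_name service2.home.example.com;

    # Deny the internal IP of the router first, in case forwarded
    # traffic arrives with that source address instead of the
    # real visitor address
    deny 192.168.1.1;
    # Allow the rest of the LAN
    allow 192.168.1.0/24;
    # Deny everyone else, including anything coming in via the port forward
    deny all;

    location / {
        proxy_pass http://192.168.1.50:8080;  # hypothetical backend
    }
}
```

Without rules like these, any request arriving on the port forward with Host: service2.home.example.com gets routed to the private backend just the same.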
This is exactly the type of answer I was looking for. Thanks a bunch.
But in that case, having a proxy on the LAN that knows about internal services, and another proxy that is exposed publicly but only aware of public services, does help by reducing firewall rule complexity. Would you say that statement is correct?
Oh yes, if they’re completely separate and the internal proxy can’t be reached from the port forward, that’s fine.
I was stuck thinking about two chained proxies for some reason.
I never specified, I think, and probably wasn’t too clear on it myself. Thanks for your insights, I’ll try to apply them to my configuration now.
The comment above is accurate about how domain names that resolve to private IP addresses can be passed to Nginx. But that doesn’t mean they need to be exposed. Nginx has a `listen` directive that specifies which IPs it listens on. So if your reverse proxy has both a public IP and a private IP, then the private services can have a listen directive like this:

```
server {
    # Whatever your proxy's private IP is
    listen 10.0.0.1;
    server_name my-private-service;
}
```

No matter what hostname is passed in, Nginx will only reply to requests that can reach the Nginx host at its private IP address.
This is a good hint, I’m going to take a look at that. Thank you!
If you are running your services in a single VM in your DMZ then that is your point of failure. You could spin up 2 micro VMs and run something like Docker Swarm to make sure there are 2 Traefik proxies running.
The services run on a separate box; which VLAN I put it on is yet to be decided. I was not planning to have it in the DMZ but rather to create ingress firewall rules from the DMZ.
Double NIC on the proxy. One in each VLAN.
One proxy with two NICs downstream? Does that solve the “single point of failure” risk or am I being overly cautious?
Plus, the internal and external services are running on the same box. Is that where my real problem lies?
Are you running redundant routers, connections, ISPs, etc.? Compromise is part of the design process. If you have resiliency requirements, redundancy will help, but it ratchets up complexity and cost.
Security has the same kinds of compromises. I prefer to build security from the network up, leveraging tools like VLANs to start building the moat. Realistically, your reverse proxy is likely battle tested if it’s configured correctly and updated. It’ll probably be the most secure component in your stack. If that’s configured correctly and gets popped, half the Internet is already a wasteland.
If you’re running containers, yeah technically there are escape vectors, but again your attacker would need to pop the proxy software. It’d probably be way easier to go after the apps themselves.
Do something like this with NICs on each subnet:
DMZ VLAN <-> Proxy <-> Services VLAN
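Roughly, in Nginx terms that could look like this (all addresses hypothetical: say 192.168.20.2 is the proxy’s DMZ-facing NIC and 192.168.30.2 its services-facing NIC):

```
# Public service: bound to the DMZ-facing NIC, where the port forward lands
server {
    listen 192.168.20.2:80;
    server_name service1.example.com;
    location / {
        proxy_pass http://192.168.30.10:8080;  # hypothetical backend in the services VLAN
    }
}

# Private service: bound only to the services-facing NIC, unreachable via the port forward
server {
    listen 192.168.30.2:80;
    server_name service2.home.example.com;
    location / {
        proxy_pass http://192.168.30.11:8080;  # hypothetical backend
    }
}
```

The proxied traffic never has to cross a firewall rule between the VLANs, because the proxy sits on both.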
Right, I could have been more precise. I’m talking about security risk, not resilience or uptime.
“It’ll probably be the most secure component in your stack.” That is a fair point.
So, one port-forward to the proxy, and the proxy reaching into both VLANs as required, is what you’re saying. Thanks for the help!
It depends on the trade-offs you want to make. If you want to maintain one less Nginx install with a little more risk, that’s a way to go.
If your priority is security, use a separate proxy for your private services and don’t allow your public VLAN access into your private VLAN.
My home network only has public services on it right now, but now you are making me think I should segment it further if I want to host any truly private services there.
The answer seems to always be “not segmented enough”. ;)