Docker has a known security issue with port exposure: it punches holes through the firewall without asking your permission. See https://github.com/moby/moby/issues/4737
I usually expose ports like `127.0.0.1:1234:1234` instead of `1234:1234`. As far as I understand, it still punches holes this way, but to access the container, an attacker would need to get a packet routed to the host with a spoofed SRC IP of `127.0.0.1`. All of the better solutions seem to be much more involved.
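For anyone unfamiliar with the syntax, a minimal sketch of the difference (image name and port are placeholders):
$ docker run -d -p 1234:1234 some-image            # binds 0.0.0.0:1234; Docker's DNAT rules bypass ufw/firewalld INPUT rules
$ docker run -d -p 127.0.0.1:1234:1234 some-image  # binds only the loopback interface on the host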
Containers are widely used at our company by developers who don't understand the underlying concepts, and they often expose services on all interfaces, or to all hosts.
You can explain this to them and they don't care; you can even demonstrate how you can access their data without permission, and they still don't get it.
Their app "works" and that's the end of it.
Ironically enough, even cybersecurity doesn't catch them for it; they are too busy harassing other teams about out-of-date versions of services that are either not vulnerable or already patched, but their scanning tools don't understand that.
The team I work on is responsible for sending frivolous newsletters via email and SMS to over a million employees. We use an OTP for employees to verify they gave us the right email/phone number to send them to. Security sees "email/SMS" and "OTP" and therefore files a ticket against us at the highest "must respond in 15 minutes" priority every time an employee complains about having lost access to an email or phone number.
Doesn't matter that we're not sending anything sensitive. Doesn't matter that we're a team of 4 managing more than a million data points. Every time we push back, security either completely ignores us and escalates to higher management, or they send us a policy document about security practices for communication channels that can be used to send OTP codes.
Security wields their checklist like a cudgel.
Meanwhile, through our bug bounty program, someone found that a dev had opened a globally accessible instance of the dev employee portal with sensitive information, and reported it. Security wasn't auditing for those, since it's not on their checklist.
I feel this. Recently implemented a very trivial “otp to sign an electronic document” function in our app.
Security heard "OTP" and forced us through a 2-month security/architecture review process for this sign-off feature that we built with COTS libraries in a single sprint.
Oh I know that feeling. We got in hot water because the codes were 6 digits long and security decided we needed to make them eight digits.
We pushed back, and initially they agreed with us and gave us an exception, but about a year later some compliance audit told them it was no longer acceptable and we had to change it ASAP. About a year after that, they told us it needed to be ten alphanumeric characters, so we did a find-and-replace in the code base for "verification code" and "otp", called them verification strings, and security went away.
To be fair, I would also be alarmed, albeit not by OTP. "Sign an electronic document" and "built with COTS libraries in a single sprint" is essentially begging for a security review. Signatures and their verification are non-trivial; case in point: https://news.ycombinator.com/item?id=42590307
I have had to sit through "education" that boiled down to "don't ship your private keys in the production app." Someone needed to tick some security training checkbox, and I drew the short straw.
> This is pretty common, developers are focused on making things that work.
True, but over the last twenty years, simple mistakes by developers have caused so many giant security issues.
Part of being a developer now is knowing at least the basics of standard security practices. But you still see people ignoring things as simple as SQL injection, mainly because it's easy and they might not even have been taught otherwise. Many of these people can't even read a Python error message, so I'm not surprised.
And your cybersecurity department likely isn't auditing source code. They are just making sure your software versions are up to date.
And many of these people haven't debugged messages more complex than a Python error message. (Tastelessly jabbing at having to earn your marks by slamming into segfaults and pushing through gdb.)
I don't think you broke any (I did not downvote). But you wrote something along the lines of "Sysadmins were always the ones who focused on making things secure, and for a bunch of reasons they basically don't exist anymore. I guess this is fine." before you edited the last bit out. I think those who downvoted you think that this is plain wrong.
I guess it's fine if you get rid of sysadmins and have devs splitting their focus across dev, QA, sec, and ops. It's also fine if you have devs focus on dev, QA, and the code part of sec, and sysadmins focus on ops and the network part of sec. Bottom line is: someone needs to focus on sec :) (and on QAing and DBAing)
I suspect you'll find a lot of intersection between the move to "devops" outfits who "don't need IT anymore" and "there's a lot more security breaches now", but hey, everyone's making money so who cares?
Sometimes when you work less rigidly as a team, covering for others when it’s convenient for you, everyone gets more things done with less stress and less trouble.
> Ironically enough, even cybersecurity doesn't catch them for it; they are too busy harassing other teams about out-of-date versions of services that are either not vulnerable or already patched, but their scanning tools don't understand that.
Wow, this really hits home. I spend an inordinate amount of time dealing with false positives from cybersecurity.
I wonder how many people realize you can use the whole 127.0.0.0/8 address space, not just 127.0.0.1. I usually use a random address in that space for all of a specific project's services that need to be exposed, like 127.1.2.3:3000 for web and 127.1.2.3:5432 for postgres.
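A quick sketch (image names are placeholders):
$ docker run -d -p 127.1.2.3:3000:3000 some-web-image
$ docker run -d -p 127.1.2.3:5432:5432 postgres:16
$ curl http://127.1.2.3:3000/    # reachable locally; other loopback addresses and the LAN see nothing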
It’s well-intentioned, but I honestly believe that it would lead to a plethora of security problems. Maybe I am missing something, but it strikes me as on the level of irresponsibility of handing out guardless chainsaws to kindergartners.
That is awful and I hope it will never pass. It would be a security nightmare. If passed, it should lead to a very wide review of all software using 127/8, and that will never be comprehensive...
Also a great way around code that tries to block you from hitting resources local to the box. Lots of code out there in the world blocking the specific address "127.0.0.1" and maybe if you were lucky "localhost" but will happily connect to 127.6.243.88 since it isn't either of those things. Or the various IPv6 localhosts.
Relatedly, a lot of systems in the world either don't block local network addresses, or block an incomplete list, with 172.16.0.0/12 being particularly poorly known.
Also, many people don't remember that the zeros in between the numbers of an IP can be omitted, so pinging 127.1 works fine. This is also the reason why my home network is a 10.0.0.0/24: I don't need the bigger address space, but reaching devices at 10.1 sure is convenient!
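For example, the expansion is visible in ping's output (times will vary):
$ ping -c 1 127.1
PING 127.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms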
I had no idea about this, and I've been computing for almost 20 years now. Thanks!
Trying to get ping to ping `0.0.0.0` was interesting:
$ ping -c 1 ""
ping: : Name or service not known
$ ping -c 1 "."
ping: .: No address associated with hostname
$ ping -c 1 "0."
^C
$ ping -c 1 ".0"
ping: .0: Name or service not known
$ ping -c 1 "0"
PING 0 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
$ ping -c 1 "0.0"
PING 0.0 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
0.0.0.0 is a reserved address to mean "this device". Also, 0/8 is a reserved subnet to mean "this network" (which no-one uses any more). I wouldn't have expected ping to substitute 127.0.0.1, but it's not that weird either.
We've found this out a few times when someone inexperienced with Docker would expose a Redis port and run docker compose up on a publicly accessible machine. It would only be minutes until that Redis would be infected. Also blame Redis for having the ability to run arbitrary code without auth by default.
I want to use Podman, but I keep reading that the team considers podman-compose a crappy workaround they don't really want to keep.
This is daunting because if you take 50 random popular open-source self-hostable solutions, the instructions are invariably either a normal bare installation or docker compose.
So what's the ideal setup when using Podman? Use compose anyway and hope it won't be deprecated, or use systemd as Podman suggests as a replacement for Compose?
> So what's the ideal setup when using Podman? Use compose anyway and hope it won't be deprecated, or use systemd as Podman suggests as a replacement for Compose?
After moving from bare to compose to docker-compose to podman-compose and bunch of things in-between (homegrown Clojure config-evaluators, ansible, terraform, make/just, a bunch more), I finally settled on using Nix for managing containers.
It's basically the same as docker-compose, except you get to do it with proper code (although Nix :/ ) and, as an extra benefit, get to avoid YAML.
You can switch the backend/use multiple ones as well, and relatively easy to configure as long as you can survive learning the basics of the language: https://wiki.nixos.org/wiki/Docker
Of course, that means you need to run NixOS for that to work (which I also do everywhere) and there are networking problems with Docker/Podman in NixOS you need to address yourself. Whereas Docker "runs anywhere" these days.
Worth noting the tradeoffs, but I agree using Nix for this makes life more pleasant and easy to maintain.
> that means you need to run NixOS for that to work
Does it? I'm pretty sure you're able to run Nix (the package manager) on Arch Linux for example, I'm also pretty sure you can do that on things like macOS too but that I haven't tested myself.
Or maybe something regarding this has changed recently?
Sorry, yes, building with plain Nix is fine, but managing containers with Nix (e.g. dealing with which ports to expose, etc., like in the article) requires NixOS.
Edit: I actually never checked, but I guess nothing stops home-manager or nix-darwin from working too, though I don't think either supports running containers by default. At the end of the day, all NixOS does is make a systemd service which runs `docker run ...` for you.
I use docker compose for development because it's easy to spin up an entire project at once. Tried switching to podman compose but it didn't work out of the box and I wasn't motivated to fix it.
For "production" (my homelab server), I switched from docker compose to podman quadlets (systemd) and it was pretty straightforward. I actually like it better than compose because, for example, I can ensure a containers dependencies (e.g. database, filesystem mounts) are started first. You can kind of do that with compose but it's very limited. Also, systemd is much more configurable when it comes to dealing service failures.
Docker Compose would not prevent you from doing a "publish port to 0.0.0.0/0", it's not much more than a (very convenient) wrapper around "docker build" and "docker run".
And many, if not virtually all, example docker-compose descriptor files don't care about that. Setups that use different networks for exposed services and backend services (db, redis, ...) are the rare exception.
Are you sure about that? Because I was under the impression that these firewall rules are configured by Docker. So if you use Docker Compose with Podman emulating the Docker socket, this shouldn’t happen.
Is there a tool/tutorial that assumes that I already have a running docker compose setup instead of starting with some toy examples? Basically, I am totally excited about using systemd that I already have on my system instead of adding a new daemon/orchestrator but I feel that the gap between quadlet 101 and migrating quite a complex docker compose YAML to podman/quadlet is quite large.
There was no such tool when I learned how to do this. Quadlet is relatively new (introduced in podman 4.4), so lots of podman/systemd documentation refers to podman commands that generate systemd unit files. I agree there is a gap.
`-p 127.0.0.1:` might not offer all of the protections you would expect, and it's arguably a bug in Docker's firewall rules that they're failing to address. They choose to instead say "hey, we don't protect against L2", and have an open issue here: https://github.com/moby/moby/issues/45610
This secondary issue with Docker is a bit more subtle: they don't respect the bind address when they do forwarding into the container. The end result is that machines one hop away can forward packets into the Docker container.
For a home user, the impact could be that the ISP can reach into the container. Depending on risk appetite, this can be a concern (Salt Typhoon going after ISPs).
More commonly, it might end up exposing more isolated work-related systems to related networks one hop away.
What about cloud VMs? I would love to read more about "they don't respect the bind address when they do forwarding into the container" and "machines one hop away can forward packets into the docker container" if you could be so kind!
Update: thanks for the link, looks quite bad. I am now thinking that an adjacent VM at a provider like Hetzner or Contabo could be able to pull it off. I guess I will have to finally switch the remaining Docker installations to Podman and/or resort to https://firewalld.org/2024/11/strict-forward-ports
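If I read that firewalld announcement right, it's a one-line config change (assuming firewalld 2.3 or newer):
# In /etc/firewalld/firewalld.conf, set:
#   StrictForwardPorts=yes
# then restart; container ports published via DNAT then also require an
# explicit firewall-cmd --add-port before they're reachable externally.
$ sudo systemctl restart firewalld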
I can't speak to Hetzner or Contabo. I have tested this attack on AWS and GCP a while back, and their L2 segmentation was solid. VMs/containers should be VLANed across customers/projects on most mature providers. On some it may not be, though.
If there's defense in depth, it may be worth checking out L2 forwarding within a project for unexpected pivots an attacker could use. We've seen this come up in pentests.
Tbh I prefer not exposing any ports directly, and then throwing Tailscale on the network used by docker. This automatically protects everything behind a private network too.
The FOSS alternative is to throw up a $5 VPS on some trusted host, then use WireGuard (FOSS FTW) to do basically exactly the same, but cheaper, without giving away control and with better privacy.
Does it? I think it only happens if you specifically enumerate the ports. You do not need to enumerate the ports at all if you're using Tailscale as a container.
Would that actually save you in this case? OP had their container exposed to the internet, listening for incoming remote connections. It wouldn't matter in that case if you're running an unprivileged container, Podman, Rocky Linux, or SELinux, since everything is just wide open at that point.
I think Podman does not punch holes in the firewall, as opposed to Docker. I.e., to expose a container on port 8080 on the WAN in Podman, you need to both expose 8080:8080 and use, for example, firewalld to open port 8080. Which I consider correct behaviour.
Sure, but the issue here wasn't because the default behavior surprised OP. OP needed a service that was accessible from a remote endpoint, so they needed to have some connection open. They just (for some reason) chose to do it over public internet instead of a private network.
But regardless of software used, it would have led to the same conclusion, a vulnerable service running on the open internet.
I think exposing 8080:8080 would result in sockets bound to 0.0.0.0:8080 in either Docker or Podman. You still need 127.0.0.1:8080:8080 for the socket binding to be 127.0.0.1:8080 in Podman. The only difference is that Podman would not punch holes in the firewall after binding on 0.0.0.0:8080, thus preventing an unintended exposure given that the firewall is set up to block all incoming connections except on 443, for example.
Edit: just confirmed this to be sure.
$ podman run --rm -p 8000:80 docker.io/library/nginx:mainline
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
595f71b33900 docker.io/library/nginx:mainline nginx -g daemon o... 40 seconds ago Up 41 seconds 0.0.0.0:8000->80/tcp youthful_bouman
$ ss -tulpn | rg 8000
tcp LISTEN 0 4096 *:8000 *:* users:(("rootlessport",pid=727942,fd=10))
I am not a security person at all. Are you really saying that it could potentially cause iptables to open ports without an admin knowing? Is that shockingly, mind-bogglingly bad design on Docker's part, or is it just me?
Worse, the linked bug report is from a DECADE ago, and the comments underneath don't seem to show any sense of urgency or concern about how bad this is.
Your understanding is correct, unfortunately. Not only that, the developers are also reluctant to make 127.0.0.1:####:#### the default in their READMEs and docker-compose.yml files because UsEr cOnVeNiEnCe, e.g. https://github.com/louislam/uptime-kuma/pull/3002 closed WONTFIX
Amazing. I just don't know what to say, except that anyone who doesn't know how to open a firewall port has no business running Docker, or trying to understand containerization.
As someone says in that PR, "there are many beginners who are not aware that Docker punches the firewall for them. I know no other software you can install on Ubuntu that does this."
Anyone with a modicum of knowledge can install Docker on Ubuntu -- you don't need to know a thing about ufw or iptables, and you may not even know what they are. I wonder how many machines now have ports exposed to the Internet or some random IoT device as a result of this terrible decision?
Correct. This can be disabled [0], but then you need to work around it. Usually you can "just" use host networking and manage iptables rules manually. Not pretty, but in that case you at least know what's done to the system and iptables.
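A minimal sketch of disabling it (the "iptables" key in daemon.json is the documented switch; published ports then stop working until you add firewall rules yourself):
$ cat /etc/docker/daemon.json
{
  "iptables": false
}
$ sudo systemctl restart docker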
To run Docker, you need to be an admin or in the docker group, which warns you that it is equivalent to having sudo rights, a.k.a. being an admin.
As for it not being explicitly permitted: no ports are exposed by default. You must provide the docker run command with -p for each port you want exposed. From their perspective, they're just doing exactly what you told them to do.
Personally, I think it should default to giving you an error unless you specify which IPs to listen on, but this is far from as big an issue as people make it out to be.
The biggest issue is that it is a ginormous foot gun for people who don't know Docker.
I don't remember the particular syntax, but isn't there a difference between binding a port on the address the container runs on vs. binding a port on the host address?
Maybe it's the difference between "-P" and "-p", or specifying "8080:8080" instead of "8080", but there is a difference, especially since one wouldn't be reachable outside of your machine and the other one would be, in the worst case trying to bind 0.0.0.0.
You can specify the interface address to listen on, like "127.0.0.1:8080:8080" or "192.168.1.100:8080:8080". I have a lot of containers exposed like this but bind specifically to a vpn ip on the host so that they don't get exposed externally by default.
The trouble is that docker seems to default to using 0.0.0.0, so if you do `docker run -it -p 8080 node:latest` for example, now that container accepts incoming connections on port :32768 or whatever docker happens to assign it, which is bananas default behavior.
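You can see what you got (the assigned port and exact output will vary):
$ docker run -d -p 8080 node:latest
$ docker port $(docker ps -lq)
8080/tcp -> 0.0.0.0:32768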
-p exposes the port from the container on a specific port on the host machine. -P does the same, but for all ports listed as exposed in the container.
If you just run a container, it will expose zero ports, regardless of any config made in the Docker image or container.
The way you're supposed to use Docker is to create a Docker network, attach the various containers there, and expose only the ports on specific containers that you need external access to. All containers in any network can connect to each other, with zero exposed external ports.
The trouble is just that this is not really explained well for new users, and so ends up being that aforementioned foot gun.
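A minimal sketch of that pattern (network name and images are placeholders):
$ docker network create appnet
$ docker run -d --network appnet --name db postgres:16    # no -p: unreachable from outside
$ docker run -d --network appnet --name web -p 127.0.0.1:8080:80 some-web-image
# "web" reaches "db" at db:5432 inside appnet; only 127.0.0.1:8080 is published on the host.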
For people unfamiliar with Linux firewalls or the software they're running: maybe. First of all, Docker requires admin permissions, so whoever is running these commands already has admin privileges.
Docker manages its own iptables chain. If you rely on something like UFW that works by using default chains, or its own custom chains, you can get unexpected behaviour.
However, there's nothing secret happening here. Just listing the current firewall rules should display everything Docker permits and more.
Furthermore, the ports opened are the ones declared in the command line (-p 1234) or in something like docker-compose declarations. As explained in the documentation, not specifying an IP address will open the port on all interfaces. You can disable this behaviour if you want to manage it yourself, but then you would need some kind of scripting integration to deal with the variable behaviour Docker sometimes has.
From Docker's point of view, I sort of agree that this is expected behaviour. People finding out afterwards often misunderstand how their firewall works, and haven't read or fully understood the documentation. For beginners, who may not be familiar with networking, Docker "just works" and the firewall in their router protects them from most ills (hackers present in company infra excluded, of course).
Imagine having to adjust your documentation to go from "to try out our application, run `docker run -p 8080 -p 1234 some-app`" to "to try out our application, run `docker run -p 8080 -p 1234 some-app`, then run `nft add rule ip filter INPUT tcp dport 1234 accept;nft add rule ip filter INPUT tcp dport 8080 accept;` if you use nftables, or `iptables -A INPUT -p tcp --dport 1234 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT; iptables -A INPUT -p tcp --dport 8080 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT` if you use iptables, or `sudo firewall-cmd --add-port=1234/tcp;sudo firewall-cmd --add-port=8080/tcp; sudo firewall-cmd --runtime-to-permanent` if you use firewalld, or `sudo ufw allow 1234; sudo ufw allow 8080` if you use UFW, or if you're on Docker for Windows, follow these screenshots to add a rule to the firewall settings and then run the above command inside of the Docker VM". Also don't forget to remove these rules after you've evaluated our software, by running the following commands: [...]
Docker would just not gain any traction as a cross-platform deployment model, because managing it would be such a massive pain.
The fix is quite easy: just bind to localhost (specify -p 127.0.0.1:1234 instead of -p 1234) if you want to run stuff on your local machine, or an internal IP that's not routed to the internet if you're running this stuff over a network. Unfortunately, a lot of developers publishing their Docker containers don't tell you to do that, but in my opinion that's more of a software product problem than a Docker problem. In many cases, I do want applications to be reachable on all interfaces, and having to specify each and every one of them (especially scripting that with the occasional address changes) would be a massive pain.
For this article, I do wonder how this could've happened. For a home server to be exposed like that, the server would need to be hooked to the internet without any additional firewalls whatsoever, which I'd think isn't exactly typical.
>Imagine having to adjust your documentation to go from [...]
I sympathize with your reluctance to push a burden onto the users, but I disagree with this example. That's a false dichotomy: whatever system-specific commands Docker executes by default to allow traffic from all interfaces to the desired port could have been made contingent on a new command parameter (say, --open-firewall). Removing those rules could have also been managed by the Docker daemon on container removal.
> Is that shockingly, mind-bogglingly bad design on Docker's part, or is it just me?
This is the default for most aspects of Docker. Reading the source code & git history is a revelation of how badly things can be done, as long as you burn VC money for marketing. Do yourself a favor and avoid all things by that company / those people; they've never cared about quality.
If you have opened up a port in your network to the public, the correct assumption is to direct outside connections to your application as per your explicit request.
Interesting trick. It was definitely an "oh shit" moment when I learned the hard way that ports were being exposed. I think setting an internal Docker network is another simple fix for this, though it complicates talking to other machines behind the same firewall.
> by running docker images that map the ports to my host machine
If you start a docker container and map port 8080 of the container to port 8080 on the host machine, why would you expect port 8080 on the host machine to not be exposed?
I don't think you understand what mapping and opening a port does if you think it's a bug or security issue that, when you tell Docker to expose a port on the host machine, Docker then exposes a port on the host machine...
Docker supports many network types: VLANs, host-attached, bridged, private, etc. There are many options available to run your containers on if you don't want to expose ports on the host machine. A good place to start: if you don't want ports exposed on the host machine, then you probably should not start your Docker container with host networking and a port exposed on that network...
Regardless of that, your container host machines should be behind a load balancer w/ firewall and/or a dedicated firewall, so containers poking holes (because you told them to and then got mad at it) shouldn't be an issue
I think the unintuitive thing is that by "port mapping", Docker is doing DNAT, which doesn't trigger the input firewall rules. Unless you're relatively well versed in the behavior of iptables or nftables, you probably expect the "port mapping" to work like a regular old application proxy (which would obey a firewall rule blocking all input) and not use NAT and firewall rules (and all of the attendant complexity that brings).
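You can see the DNAT rules Docker installs for published ports (abridged; the container IP and port will vary):
$ sudo iptables -t nat -nL DOCKER
Chain DOCKER (2 references)
target  prot opt  source     destination
RETURN  all  --   0.0.0.0/0  0.0.0.0/0
DNAT    tcp  --   0.0.0.0/0  0.0.0.0/0   tcp dpt:8080 to:172.17.0.2:80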
It should have the proxy set up but leave opening the port to the user.
No other server software that I know of touches the firewall to make its own services accessible. Though I am aware that the word being used is "expose". I personally only have private IPs on my Docker hosts when I can, and access them with WireGuard.
Do I understand the bottom two sections correctly? If I am using ufw as a frontend, I need to switch to firewalld instead and modify the 'docker-forwarding' policy to only forward to the 'docker' zone from loopback interfaces? Would be good if the page described how to do it, esp. for users who are migrating from ufw.
More confusingly, firewalld has a different feature to address the core problem [1] but the page you linked does not mention 'StrictForwardPorts' and the page I linked does not mention the 'docker-forwarding' policy.
UFW and Docker don't work well together. Both of them call iptables (or nftables) in a way that assumes they're in control of most of the firewall, which means they can conflict or simply not notice each other's rules. For instance, UFW's rules to block all traffic get overridden by Docker's rules, because there is no active block rule (that's just the default, normally) and Docker just added a rule. UFW doesn't know about firewall chains it didn't create (even though it probably should start listing Docker ports at some point, Docker isn't exactly new...) so `ufw status` will show you only the manually configured UFW rules.
What happens when you deny access through UFW and permit access through Docker depends entirely on which of the two firewall services was loaded first, and software updates can cause them to reload arbitrarily so you can't exactly script that easily.
If you don't trust Docker at all, you should move away from Docker (i.e. to podman) or from UFW (i.e. to firewalld). This can be useful on hosts where multiple people spawn containers, so others won't mess up and introduce risks outside of your control as a sysadmin.
If you're in control of the containers that get run, you can prevent container from being publicly reachable by just not binding them to any public ports. For instance, in many web interfaces, I generally just bind containers to localhost (-p 127.0.0.1:8123:80 instead of -p 80) and configure a reverse proxy like Nginx to cache/do permission stuff/terminate TLS/forward requests/etc. Alternatively, binding the port to your computer's internal network address (-p 192.168.1.1:8123:80 instead of -p 80) will make it pretty hard for you to misconfigure your network in such a way that the entire internet can reach that port.
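A minimal sketch of that reverse proxy setup (hostname, port, image, and file path are placeholders; TLS left out for brevity):
$ docker run -d -p 127.0.0.1:8123:80 some-web-image
$ cat /etc/nginx/conf.d/app.conf
server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://127.0.0.1:8123;
        proxy_set_header Host $host;
    }
}
$ sudo nginx -s reload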
Another alternative is to stuff all the Docker containers into a VM without its own firewall. That way, you can use your host firewall to precisely control what ports are open where, and Docker can do its thing on the virtual machine.
You really, really should. Just because someone is inside your network is no reason to just give them the keys to the kingdom.
And I don't see any reason why having to allow a Postgres or Apache or whatever run through Docker through your firewall is any more confusing than allowing them through your firewall when installed via APT. It's more confusing that the firewall DOESN'T protect Docker services like everything else.
Ideally, yes. But in reality, this means that if you just want to have 1 little EC2 VM on AWS running Docker, you now need to create a VM, a VPC, an NLB/ALB in front of the VPC ($20/mo+, right?) and assign a public IP address to that LB instead. For a VM like t4g.nano, it could mean going from a $3/mo bill to $23/mo ($35 in case of a NAT gateway instead of an LB?) bill, not to mention the hassle of all that setup. Hetzner, on the other hand, has a free firewall included.
Your original solution of binding to 127.0.0.1 generally seems fine. Also, if you're spinning up a web app and its supporting services all in Docker, and you're really just running this on a single $3/mo instance... my unpopular opinion is that docker compose might actually be a fine choice here. Docker compose makes it easy for these services to talk to each other without exposing any of them to the outside network unless you intentionally set up a port binding for those services in the compose file.
You should try swarm. It solves a lot of challenges that you would otherwise have while running production services with compose. I built rove.dev to trivialize setup and deployments over SSH.
What does swarm actually do better for a single-node, single-instance deployment? (I have no experience with swarm, but on googling it, it looks like it is targeted at cluster deployments. Compose seems like the simpler choice here.)
Swarm works just as well in a single host environment. It is very similar to compose in semantics, but also does basic orchestration that you would have to hack into compose, like multiple instances of a service and blue/green deployments. And then if you need to grow later, it can of course run services on multiple hosts. The main footgun is that the Swarm management port does not have any security on it, so that needs to be locked down either with rove or manual ufw config.
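For reference, single-host Swarm setup is just (stack files are mostly compose-compatible):
$ docker swarm init
$ docker stack deploy -c docker-compose.yml myapp
$ docker service ls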
In AWS why would you need a NLB/ALB for this? You could expose all ports you want all day from inside the EC2 instance, but nobody is going to be able to access it unless you specifically allow those ports as inbound in the security group attached to the instance. In this case you'd only need a load balancer if you want to use it as a reverse proxy to terminate HTTPS or something.
There's no good reason a VM or container on Hetzner cannot use a firewall like iptables. If that makes the service too expensive, you increase the price or otherwise lower resources. A firewall is a very simple, essential part of network security. Every simple IoT device running Linux can run iptables, too.
I guess you did not read the link I posted initially. When you set up a firewall on a machine to block all incoming traffic on all ports except 443 and then run docker compose exposing port 8000:8000 and put a reverse proxy like caddy/nginx in front (e.g. if you want to host multiple services on one IP over HTTPS), Docker punches holes in the iptables config without your permission, making both ports 443 and 8000 open on your machine.
@globular-toast was not suggesting an iptables setup on a VM, instead they are suggesting to have a firewall on a totally different device/VM than the one running docker. Sure, you can do that with iptables and /proc/sys/net/ipv4/ip_forward (see https://serverfault.com/questions/564866/how-to-set-up-linux...) but that's a whole new level of complexity for someone who is not an experienced network admin (plus you now need to pay for 2 VMs and keep them both patched).
Either you run a VM inside the VM, or indeed two VMs. A jumphost does not require a lot of resources.
The problem here is the user does not understand that exposing 8080 on an external network means it is reachable by everyone. If you use an internal network between database and application, cache and application, and application and reverse proxy, and put proper auth on the reverse proxy, you're good to go. Guides do suggest this. They even explain Let's Encrypt for the reverse proxy.
I always have to define 'external: true' on the network, which I don't do with databases. I link those to an internal network, shared with the application. You can do the same with your web application, thereby only needing auth on the reverse proxy. Then you use whitelisting on that port, or you use a VPN. But I also always use a firewall that the OCI daemon does not have root access to.
Yeah, true, but I have set it up in such a way that that network is an exposed bridge, whereas the other networks created by docker-compose are not. It isn't even possible to reach these from outside. They're not routed, and each of these backends uses the standard Postgres port, so with 1:1 NAT it'd give errors. Even on 127.0.0.1 it does not work:
$ nc 127.0.0.1 5432 && echo success || echo no success
no success
Example snippet from docker-compose:
DB/cache (e.g. Postgres & Redis, in this example Postgres):
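Something along these lines (a sketch; service and network names are placeholders):
services:
  postgres:
    image: postgres:16
    networks:
      - backend          # internal-only network; note: no 'ports:' section
networks:
  backend:
    internal: true       # not routed, unreachable from outside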
Nobody is disputing that it is possible to set up a secure container network. But this post is about the fact that the default docker behavior is an insecure footgun for users who don’t realize what it’s doing.