
> There's your problem. Docker adds indirection on storage, networking, etc., and also makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

None of your points make any sense. Docker works beautifully as an abstraction layer. It makes it trivially simple to upgrade anything and everything running on it, to the point that you do not even consider it a concern. Your assertions are so far off that you managed to get all your points entirely backwards.
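
For instance, upgrading a whole Compose-managed stack is two commands (image names aside, this is the entire workflow):

  docker compose pull     # fetch newer images
  docker compose up -d    # recreate only containers whose image or config changed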

To top things off, you get clustering for free with Docker swarm mode.
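
A rough sketch, with the service name and image as pure placeholders:

  docker swarm init                                    # turn this host into a one-node swarm
  docker service create --name web --replicas 2 nginx  # run a replicated service on it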

> If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.

I have news for you. In fact, you may be surprised to learn that nowadays you can even get a full-blown Kubernetes distribution up and running on a Linux distribution with a quick snap package install.
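
For example, on Ubuntu (microk8s is the one I have in mind; channel and add-ons are up to you):

  sudo snap install microk8s --classic
  microk8s status --wait-ready
  microk8s kubectl get nodes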



Absolutely everything they said makes sense.

Everything you're saying is complete overkill, even in most Enterprise environments. We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.

> I have news for you.

I have news for _you_: using Docker to run anything that doesn't need it (i.e. where Docker isn't the only officially supported deployment mechanism) is like putting your groceries into the boot of your car, then driving your car onto the tray of a truck, then driving the truck home because "it abstracts the manual transmission of the car with the automatic transmission of the truck". Good job, you're really showing us who's boss there.

Operating systems are easy. You've just fallen for the Kool Aid.


> We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here

It's pretty simple these days, I think, to run k3s or something similar and then deploy stuff how you like via a yaml file. I'll agree, though, that if you need services to share a filesystem for some reason it gets more complicated with storage mounts.
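
To sketch it out (image name and port from memory, and you'd still want a Service/Ingress to actually reach it, omitted here):

  curl -sfL https://get.k3s.io | sh -          # single-node k3s install
  sudo k3s kubectl apply -f calibre-web.yaml

with calibre-web.yaml being roughly:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: calibre-web
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: calibre-web
    template:
      metadata:
        labels:
          app: calibre-web
      spec:
        containers:
        - name: calibre-web
          image: lscr.io/linuxserver/calibre-web
          ports:
          - containerPort: 8083   # calibre-web's default web UI port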


I self-host Jellyfin, qBittorrent, Authentik, Budibase, Pi-hole, Nginx Proxy Manager, Immich, Jupyter… the list goes on. These tools have tons of dependencies; installing everything directly on a single OS would break them! Containers are a godsend for self-hosting and for anyone deploying many tools at a time!
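
A compose file per app is about all it takes, along these lines (paths and ports illustrative):

  services:
    jellyfin:
      image: jellyfin/jellyfin
      ports:
        - "8096:8096"
      volumes:
        - ./config:/config
        - ./media:/media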


> Absolutely everything they said makes sense.

Not really. It defies any cursory understanding of the problem domain, and you have to go way out of your way to ignore how containerization makes everyone's job easier, often trivially so.

Some people in this discussion even go to the extreme of claiming that messing with systemd to run a service is simpler than typing "docker run".

It defies all logic.

> Everything you're saying is complete overkill, even in most Enterprise environments.

What? No. Explain in detail how being able to run services with "docker run" is "overkill". Have you ever gone through an intro-to-Docker tutorial?
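
The entire "deployment" of a typical service is one line (nginx here purely as a stand-in for whatever you're hosting):

  docker run -d --name web -p 8080:80 --restart unless-stopped nginx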

> We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.

You're just publicly stating your ignorance. Do yourself a favor and check Ubuntu's microk8s. You're mindlessly parroting cliches from a decade ago.


> you must go way out of your way to ignore how containerization makes everyone's job easier and even trivial to accomplish

You'd have to go out of your way to ignore how difficult they are to maintain and secure. Anyone with even a few hours of experience designing an upgrade path for other people's containers, security-scanning them, reviewing what's going on inside them, trying to run them with minimal privileges (internally and externally), and more, will know they're a nightmare from a security perspective. You need to do a lot of work on top of just running the containers to secure them [1][2][3][4] -- they are not fire-and-forget, as you're implying.
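
To give a flavour: just the "minimal privileges" part of a single docker run already looks something like the below, and whether a given image even works under these restrictions is image-by-image trial and error (flags and image name are illustrative):

  docker run -d --name app \
    --read-only --tmpfs /tmp \
    --cap-drop=ALL \
    --security-opt no-new-privileges \
    --user 1000:1000 \
    --pids-limit 100 \
    --memory 256m \
    some-image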

This one is my favourite: https://cheatsheetseries.owasp.org/cheatsheets/Kubernetes_Se... -- what an essay. Keep in mind someone has to do that _and_ secure the underlying hosts themselves for there is an operating system there too.

And then this bad boy: https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR... -- again, you have to do this kind of stuff _again_ for the OS underneath it all _and_ anything else you're running.

[1] https://medium.com/@ayoubseddiki132/why-running-docker-conta...

[2] https://wonderfall.dev/docker-hardening/

[3] https://www.isoah.com/5-shocking-docker-security-risks-devel...

[4] https://kubernetes.io/docs/tasks/administer-cluster/securing...

They have their place in development and automated pipelines, but when the option of running on "bare metal" is there you should take it (I actually heard someone call it that once: it's "bare metal" if it's not in a container these days...)

You should never confuse "trivial" with "good". ORMs are "trivial", but often a raw SQL statement (done correctly) is best. Docker is "good", but it's not a silver bullet that just solves everything. It comes with its own problems, as seen above, and they heavily outweigh the benefits.

> Explain in detail how being able to run services by running "docker run" is "overkill". Have you ever went through an intro to Docker tutorial?

Ah! I see now. I don't think you work in operations. I think you're a software engineer who doesn't have to do the Ops or SRE work at your company. I believe this to be true because you're hyper-focused on the running of the containers but not the management of them. The latter is way harder than managing services on "bare metal". Running services via "systemctl" commands, Ansible playbooks, Terraform provisioners, and so many other options has resulted in some of the most stable, cheap-to-run, capable, scalable infrastructure setups I've ever seen across three countries, two continents, and 20 years of experience. They're so easy to use and manage that the companies I've helped have been able to hire people straight out of university to manage them. When it comes to K8s, the opposite is true: the hires have to be highly experienced, are hard to find, and are very expensive.
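
For a sense of the scale involved, "deploy this and keep it running" on a plain host is a couple of commands, or the equivalent couple of Ansible tasks (package name is just a stand-in):

  sudo apt install nginx
  sudo systemctl enable --now nginx

or, as an Ansible fragment:

  - ansible.builtin.apt:
      name: nginx
      state: present
  - ansible.builtin.systemd:
      name: nginx
      enabled: true
      state: started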

It blows my mind how much abstraction people run just to get x86 code into RAM and onto a CPU. It blows my mind how few people see that a load balancer and two EC2 instances can absolutely support a billion-dollar app without an issue.

> You're just publicly stating your ignorance. Do yourself a favor and check Ubuntu's microk8s. You're mindlessly parroting cliches from a decade ago.

Sure, OK. I find you hostile, so I'll let you sit there boiling your own blood.


What is your opinion on podman rootless containers? In my mind, running rootless containers as different OS users for each application I'm hosting was an easy way of improving security and making sure each of those services could only mess with their own resources. Are there any known issues with that? Do you have experience with Podman? Would love to hear your thoughts.
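
For context, the setup I mean is roughly one OS account per app (exact commands from memory):

  sudo useradd --create-home svc-immich
  sudo loginctl enable-linger svc-immich   # so its user services keep running without a login session
  # then, as that user: podman run / a user-level systemd unit for the app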


That sounds like a great option to me. The more functionality you can get out of a container without giving it extra privileges, the better. Podman is just a tool like any other - I'd happily use it if it's right for the job.

All I would say is: can you run that same thing without a containerisation layer? Remember that with things like ChatGPT it's _really_ easy to get a systemd unit file going for just about any service these days. A single prompt and you have a running service that's locked down pretty heavily.
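
e.g. something in this shape (paths, user and service names entirely illustrative), dropped into /etc/systemd/system/ and enabled:

  [Unit]
  Description=Calibre-Web
  After=network-online.target
  Wants=network-online.target

  [Service]
  User=calibre
  ExecStart=/opt/calibre-web/venv/bin/cps
  Restart=on-failure
  NoNewPrivileges=true
  PrivateTmp=true
  ProtectSystem=strict
  ProtectHome=true
  ReadWritePaths=/var/lib/calibre-web

  [Install]
  WantedBy=multi-user.target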


Yeah, I could run them as regular systemd daemons, but I would lose the easy isolation between the different services and the main OS. It feels easier to limit what the services have access to on the host OS by running them in containers.

I do run the containers as systemd user services, however, so everything starts up at boot, etc.
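
Roughly this, per service (podman's older generate-systemd workflow; newer versions push Quadlet files instead):

  loginctl enable-linger                    # keep user services running without an active login
  podman generate systemd --new --name jellyfin > ~/.config/systemd/user/jellyfin.service
  systemctl --user daemon-reload
  systemctl --user enable --now jellyfin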


You can isolate and lock down services in systemd too! Not too hard at all, and again AI can help here.

https://www.redhat.com/en/blog/mastering-systemd
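
A quick way to see how far you can take it (unit name is a placeholder): run systemd's own audit, then tighten the unit with a drop-in:

  systemd-analyze security myservice   # scores the unit's current exposure
  sudo systemctl edit myservice        # add hardening directives, e.g.:

  [Service]
  NoNewPrivileges=true
  PrivateTmp=true
  PrivateDevices=true
  ProtectSystem=strict
  ProtectHome=true
  RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX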



