
> docker

There's your problem. Docker adds indirection on storage, networking, etc., and also makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.

Maybe. There are pros and cons. Docker means you can run two+ different things on the same machine and update them separately. This is sometimes important when one project releases a feature you really want, while a different one just did a major update that broke something you care about. Running on the OS often means you have to update both.

Single binary sometimes works, but means you need more memory and disk space. (granted, much less of a concern today than it was back in 1996 when I first started self-hosting, but it can still be an issue)


How can running a single binary under systemd need more memory/disk space than having that identical binary with supporting docker container layers under it on the same system, plus the overhead of all of docker?

Conflicting versions, I'll give you that, but how frequently does that happen, especially if you mostly source from upstream OS vendor repos?

The most frequent conflict is if everything wants port 80/443, and for most self-hosted services you can have them listen on internal ports and be fronted by a single instance of a webserver (take your pick of apache/nginx/caddy).
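
e.g. a rough sketch with Caddy (hostnames and ports are made up; nginx/apache equivalents are a few lines more):

    sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
    books.home.example.com {
        # for LAN-only hostnames you may want "tls internal" here
        reverse_proxy 127.0.0.1:8080
    }
    notes.home.example.com {
        reverse_proxy 127.0.0.1:8081
    }
    EOF
    sudo systemctl reload caddy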


I didn't mean the two paragraphs to imply that they are somehow opposites (though in hindsight I obviously did). There are tradeoffs. A single binary sits between docker and a binary that uses shared libraries. What is right depends on your situation. I use all three in my self-hosted environment - you probably should too.

If you are using docker, do you save anything by using shared libraries? I thought docker copies everything. So every container has its own shared libraries and the OS running all those containers has its own as well.

Not necessarily. You are still running within the same kernel.

If your images use the same base container then the libraries exist only once and you get the same benefits as a non-docker setup.

This depends on the storage driver though. It is true at least for the default and most common overlayfs driver [1]

[1] https://docs.docker.com/engine/storage/drivers/overlayfs-dri...
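
A quick way to check how much layer sharing you're actually getting:

    docker system df -v   # the SHARED SIZE column shows how much of each image is reused base layers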


The difference between a native package manager provided by the OS vendor and docker is that a native package manager allows you to upgrade parts of the system underneath the applications.

Let's say some Heartbleed (which affected OpenSSL, primarily) happens again. With native packages, you update the package, restart a few things that depend on it with shared libraries, and you're patched. OS vendors are highly motivated to do this update, and often get pre-announcement info around security issues so it tends to go quickly.
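
Concretely, the native-package flow is roughly this (a sketch for a Debian/Ubuntu-style host; exact package and service names vary by release):

    sudo apt-get update
    sudo apt-get install --only-upgrade openssl libssl3
    # list/restart services still linked against the old library
    # (needrestart is a separate package; checkrestart from debian-goodies is similar)
    sudo needrestart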

With docker, someone has to rebuild every container that contains a copy of the library. This will necessarily lag and be delivered in a piecemeal fashion - if you have 5 containers, all of them need their own updates, which, if you don't self-build and self-update, can take a while and is substantially more work than `apt-get update && apt-get upgrade && reboot`.

Incidentally, the same applies for most languages that prefer/require static linking.

As mentioned elsewhere in the thread, it's a tradeoff, and people should be aware of the tradeoffs around update and data lifecycle before making deployment decisions.


> With docker, someone has to rebuild every container that contains a copy of the library.

I think you're grossly overblowing how much work it takes to refresh your containers.

In my case, I have personal projects with nightly builds that pull the latest version of the base image, and services are just redeployed right under your nose. All it took to do this was to add a cron trigger to the same CI/CD pipeline.
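
The no-CI version of the same idea for a homelab compose stack is a single cron entry (path made up; assumes the images you consume are rebuilt upstream):

    # nightly: pull newer images and recreate any containers whose image changed
    0 4 * * * cd /opt/stack && docker compose pull && docker compose up -d && docker image prune -f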


I'd argue that the percentage of homelab folks who have a whole CI/CD pipeline to update code and rebuild every container they use is very small. Most probably YOLO `docker pull` it once and never think about it again.

TBH, a slower upgrade cycle may be tolerable inside a private network that doesn't face the public internet.


> I'd argue that the percentage of homelab folks who have a whole CI/CD pipeline to update code and rebuild every container they use is very small.

What? You think the same guys who take an almost militant approach to how they build and run their own personal projects would somehow fail to be technically inclined to automate tasks?


Yes, because there are a stunning number of people within r/homelab who simply want to run Plex and torrent clients.

> I think you're grossly overblowing how much work it takes to refresh your containers.

last thing I want is to build my own CI/CD pipeline and tend it


There are more options than docker for that. FreeBSD jails for example.

I don’t understand why you would need docker for that.

Oh my god no, docker is so damn useful I will never return to package managers/manual installation.

>>Oh my god no, docker is so damn useful I will never return to package managers/manual installation.

This. These anti-containerisation comments read like something someone oblivious to containers would say if they were desperately grabbing onto tech from 30 years ago and refused to even spend 5 minutes exploring anything else.


Or they have explored other options and find docker lacking. I've used docker and k8s plenty professionally, and they're both vastly more work to maintain and debug than nixos and systemd units (which can easily be wrapped into containers on nixos if you want, but there you're using containers for their isolation features, not for the ability to 'docker pull', and for many purposes you can probably e.g. just use file permissions and per-service users instead of bind-mounts into containers).

Containers as practiced by many are basically static linking and "declarative" configuration done poorly because people aren't familiar with dynamic linking or declarative OS config done well.


> Or they have explored other options and find docker lacking.

I don't think so. Containerization solves about 4 major problems in infrastructure deployment as part of its happy path. There is a very good reason why the whole industry pivoted towards containers.

> I've used docker and k8s plenty professionally, and they're both vastly more work to maintain and debug than nixos and systemd units (...)

This comment is void of any credibility. To start off, you suddenly dropped k8s into the conversation. Think about using systemd to set up a cluster of COTS hardware running a software-defined network, and then proclaim it's easier.

And then, focusing on Docker, think about claiming that messing with systemd units is easier than simply running "docker run".

Unbelievable.


I mentioned k8s because when people talk about the benefits of containers, they usually mean the systems for deploying and running containers. Containers per se are just various Linux namespace features, and are unrelated to e.g. distribution or immutable images. So it makes sense to mention experience with the systems that are built around containers.

The point is when you have experience with a Linux distribution that already does immutable, declarative builds and easy distribution, containers (which are also a ~2 line change to layer into a service) are a rather specific choice to use.

If you've used these things for anything nontrivial, yes systemd units are way simpler than docker run. Debugging NAT and iptables when you have multiple interfaces and your container doesn't have tcpdump is a pain, for example. Dealing with issues like your bind mount not picking up a change to a file because it got swapped out with a `mv` is a pain. Systemd units aren't complicated.


> I mentioned k8s because when people talk about the benefits of containers, they usually mean the systems for deploying and running containers.

No, it sounds like a poorly thought through strawman. Even Docker supports Docker swarm mode and many k8s distributions use containerd instead of Docker, so it's at best an ignorant stretch to jump to conclusions over k8s.

> Containers per se are just various Linux namespace features, and are unrelated to e.g. distribution or immutable images. So it makes sense to mention experience with the systems that are built around containers.

No. Containers solve many operational problems, such as ease of deployment, setting up software-defined networks, ephemeral environments, resource management, etc.

You need to be completely in the dark to frame containerization as Linux namespace features. It's at best a naive strawman, built upon ignorance.

> If you've used these things for anything nontrivial, yes systemd units are way simpler than docker run.

I'll make it very simple for you. I want to run postgres/nginx/keycloak. With Docker, I get everything up and running with a "docker run <container image>".

Now go ahead and show how your convoluted way is "way simpler".


Containers do not do deployment (or set up software defined networks). docker or kubernetes (or others) do deployment. That's my point.

nix makes it trivial to set up ephemeral environments: make a shell.nix file and run `nix-shell` (or if you just need a thing or two, do e.g. `nix-shell -p ffmpeg` and now you're in a shell with ffmpeg. When you close that shell it's gone). You might use something like `direnv` to automate that.
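
A minimal shell.nix along those lines might look like this (package choices are just examples):

    cat > shell.nix <<'EOF'
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      packages = [ pkgs.ffmpeg pkgs.imagemagick ];
    }
    EOF
    nix-shell   # drops you into a shell with those tools; it's gone when you exit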

Nixos makes it easy to define your networking setup through config.

For your last question:

    services.postgresql.enable = true;
    services.nginx.enable = true;
    services.keycloak.enable = true;
If you want, you can wrap some or all of those lines in a container, e.g.

    containers.backend = {
        config = { config, pkgs, lib, ... }: {
            services.postgresql.enable = true;
            services.keycloak.enable = true;
        };
    };
Though you'd presumably want some additional networking and bind mount config (e.g. putting it into its own network namespace with a bridge, or maybe binding domain sockets that nginx will use plus your data partitions).

Find any self-hosted software and the docker deployment is going to be the easiest to stand up, destroy, and migrate.

I would agree with that.

Docker has a lot of use cases but self hosting is not one of them.

When self-hosting you wanna think long term and accept the fact that you will lose interest in the fiddling after a while. So sticking with software packaged in a good distribution is probably the way to go. This is the forgotten added value of a Linux or BSD distribution: a coherent system with maintenance and an easy upgrade path.

The exception is things like Umbrel, which I would say use docker as their package manager and maintain everything, so it is ok.


I feel the exact opposite. Docker has made self-hosting so much easier and painless.

Backing up relevant configuration and data is a breeze with Docker. Upgrading is typically a breeze as well. No need to suffer with a 5-year-old, out-of-date version from your distro; run the version you want to and upgrade when you want to. And if shit hits the fan, it's trivial to roll back.

Sure, OS tools should be updated by the distro. But for the things you actually use the OS for, Docker all the way in my view.


> Docker has made self-hosting so much easier and painless.

Mostly agreed, I actually run most of my software on Docker nowadays, both at work and privately, in my homelab.

In my experience, the main advantages are:

  - limited impact on host systems: uninstalling things doesn't leave behind trash, limited stability risks to host OS when running containers, plus you can run a separate MariaDB/MySQL/PostgreSQL/etc. instance for each of your software package, which can be updated or changed independently when you want
  - obvious configuration around persistent storage: I can specify which folders I care about backing up and where the data that the program operates on is stored, vs all of the runtime stuff it actually needs to work (which is also separate for each instance of the program, instead of shared dependencies where some versions might break other packages)
  - internal DNS which makes networking simpler: I can refer to containers by name and route traffic to them, running my own web server in front of everything as an ingress (IMO simpler than the Kubernetes ingress)... or just expose a port directly if I want to do that instead, or maybe expose it on a particular IP address such as only 127.0.0.1, which in combination with port forwarding can be really nice to have
  - clear resource limits: I can prevent a single software package from acting up and bringing the whole server to a standstill, for example, by allowing it to only spike up to 3/4 CPU cores under load, so some heavyweight Java or Ruby software starting up doesn't mean everything else on the server freezing for the duration of that, same for RAM which JVM based software also loves to waste and where -Xmx isn't even a hard limit and lies to you somewhat
  - clear configuration (mostly): environment variables work exceedingly well, especially when everything can be contained within a YAML file, or maybe some .env files or secrets mechanism if you're feeling fancy, but it's really nice to see that 12 Factor principles are living on, instead of me always needing to mess around with separate bind mounted configuration files
There are also things like restart policies, with the likes of Docker Swarm you also get scheduling rules (and just clustering in general), there are nice UI solutions like Portainer, healthchecks, custom user/group settings, custom entrypoints, and the whole idea of a Dockerfile saying exactly how to build an app and on top of what it needs to run is wonderful.
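
To make the resource-limit and config-via-environment points above concrete, a rough compose sketch (service name, image, paths and numbers are all made up; deploy.resources needs a reasonably recent docker compose):

    cat > docker-compose.yml <<'EOF'
    services:
      heavy-java-app:
        image: example/heavy-java-app:1.2.3   # made-up image
        environment:
          - APP_DB_HOST=db                    # 12-factor style config via env vars
        volumes:
          - ./data:/app/data                  # the one folder actually worth backing up
        deploy:
          resources:
            limits:
              cpus: "3"                       # keep a start-up spike from freezing the box
              memory: 2g                      # a hard cap, unlike -Xmx
    EOF
    docker compose up -d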

At the same time, things do sometimes break in very annoying ways, mostly due to how software out there is packaged:

https://blog.kronis.dev/blog/it-works-on-my-docker

https://blog.kronis.dev/blog/gitea-isnt-immune-to-issues-eit...

https://blog.kronis.dev/blog/docker-error-messages-are-prett...

https://blog.kronis.dev/blog/debian-updates-are-broken

https://blog.kronis.dev/blog/containers-are-broken

https://blog.kronis.dev/blog/software-updates-as-clean-wipes

https://blog.kronis.dev/blog/nginx-configuration-is-broken

(in practice, the amount of posts/rants wouldn't change much if I didn't use containers, because I've had similar amounts of issues with things that run in VMs or on bare metal; I think that most software out there is tricky to get working well, not to say that it straight up sucks)


What are you talking about?

Docker is THE solution for self hosting stuff since one often has one server and runs a ton of stuff on it, with different PHP, Python versions, for example.

Docker makes it incredibly easy to run a multitude of services on one machine, however different they may be.
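
For instance (image tags and paths are just examples), three PHP apps pinned to three different PHP versions can sit side by side:

    docker run -d --name legacy-shop  -v /srv/legacy:/var/www/html   -p 127.0.0.1:8081:80 php:7.4-apache
    docker run -d --name intranet     -v /srv/intranet:/var/www/html -p 127.0.0.1:8082:80 php:8.1-apache
    docker run -d --name current-blog -v /srv/blog:/var/www/html     -p 127.0.0.1:8083:80 php:8.3-apache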

And if you ever need to move to a new server, all you need to do is move the volumes (if even necessary) and run the containers on the new machine.

So YES, self hosting stuff is a huge use case for docker.


I think your view shows the success of Docker, but also the over-hype, and a generation that only knows how to do things with Docker (and so thinks everything is easier with it).

But before Docker there was the virtualisation hype, when people swore every piece of software or service needed its own VM. VMs or containers, we end up with frankenstein systems with dozens of images on one machine. And with Docker we probably lost a lot of security.

So this is fine, I guess, in the corporate world, because things are messy anyway and there are many other constraints (hence the success of containers).

But in your home, serving a few apps for a few users, you actually don't need that gas factory.

If you wanna run everything on your home lab with Docker or Kubernetes because you wanna build a skillset for work or reuse your professional skills, fine go for it. But everything you think is easy with Docker is actually simpler and easier with raw Linux or BSD.


Why are you assuming my age and experience because I use Docker? This doesn't help your argument.

I have been around since long before Docker was a thing, so yes I have been there, serving apps on bare metal and then using unwieldy VMs.

It doesn't matter if it's my home lab or some SaaS server; how is it simpler to serve 3 PHP apps with different PHP versions on raw Linux than simply using docker, for example?

This is called progress and that's why it's a popular tool. Not because of some “hype” or whatever you are implying.


OTOH, no.

Been self-hosting for 35+ years. Docker's made the whole thing 300% easier — especially when thinking long term.


I run Debian on my machine, so packages are not really up to date and I would be stuck, not being able to update my self-hosted software because some dependencies were too old.

And then, some software would require an older one and break when you update the dependencies for another package.

Docker is a godsend when you are hosting multiple tools.

For the limited stuff I host (navidrome, firefly, nginx, ..) I have yet to see a single-binary package. It doesn’t seem very common in my experience.


FWIW, Navidrome has bare binaries, packages (apt, rpm, etc.) and docker container options: https://github.com/navidrome/navidrome/releases

> There's your problem. Docker adds indirection on storage, networking, etc., and also makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

None of your points make any sense. Docker works beautifully well as an abstraction layer. It makes it trivially simple to upgrade anything and everything running on it, to the point you do not even consider it a concern. Your assertions are so far off that you managed to get all your points entirely backwards.

To top things off, you get clustering for free with Docker swarm mode.

> If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.

I have news for you. In fact, you should be surprised to learn that nowadays you can even get full-blown Kubernetes distributions up and running on Linux after a quick snap package install.


Absolutely everything they said makes sense.

Everything you're saying is complete overkill, even in most Enterprise environments. We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.

> I have news for you.

I have news for _you_: using Docker to run anything that doesn't need it (i.e. where Docker isn't the only officially supported deployment mechanism) is like putting your groceries into the boot of your car, then driving your car onto the tray of a truck, then driving the truck home because "it abstracts the manual transmission of the car with the automatic transmission of the truck". Good job, you're really showing us who's boss there.

Operating systems are easy. You've just fallen for the Kool Aid.


> We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here

It's pretty simple these days, I think, to run k3s or something similar and then deploy stuff how you like via a yaml file. I'll agree though that if you need services to share a filesystem for some reason, it gets more complicated with storage mounts.
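
As a rough sketch of the "deploy stuff via a yaml file" part (names, image and port are just examples; paperless-ngx picked because it came up above):

    cat > paperless.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: paperless
    spec:
      replicas: 1
      selector:
        matchLabels: { app: paperless }
      template:
        metadata:
          labels: { app: paperless }
        spec:
          containers:
            - name: paperless
              image: ghcr.io/paperless-ngx/paperless-ngx:latest   # example image
              ports:
                - containerPort: 8000
    EOF
    kubectl apply -f paperless.yaml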


I self-host Jellyfin, qbit, Authentik, Budibase, Pihole, Nginx proxy manager, Immich, Jupyter... the list goes on. These programs have tons of dependencies; installing everything on a single OS would break them! Containers are a godsend for self-hosting and for anyone deploying many tools at a time!

> Absolutely everything they said makes sense.

Not really. It defies any cursory understanding of the problem domain, and you must go way out of your way to ignore how containerization makes everyone's job easier and even trivial to accomplish.

Some people in this discussion even go to the extreme of claiming that messing with systemd to run a service is simpler than typing "docker run".

It defies all logic.

> Everything you're saying is complete overkill, even in most Enterprise environments.

What? No. Explain in detail how being able to run services by running "docker run" is "overkill". Have you ever gone through an intro to Docker tutorial?

> We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.

You're just publicly stating your ignorance. Do yourself a favor and check Ubuntu's microk8s. You're mindlessly parroting cliches from a decade ago.


> you must go way out of your way to ignore how containerization makes everyone's job easier and even trivial to accomplish

You'd have to go out of your way to ignore how difficult they are to maintain and secure. Anyone with a few hours of experience trying to design an upgrade path for other people's containers; security scanning them; reviewing what's going on inside them; trying to run them with minimal privileges (internally and externally); and more, will know they're a nightmare from a security perspective. You need to do a lot of work on top of just running the containers to secure them [1][2][3][4] -- they are not fire and forget, as you're implying.
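
To give a flavour of the gap (a minimal, non-exhaustive sketch; the image name and limits are made up), compare the happy path everyone demos with a run that applies just a slice of what those guides ask for:

    # the happy path everyone demos
    docker run -d example/app:1.2.3
    # a slice of what the hardening guides actually ask for
    docker run -d \
      --read-only --tmpfs /tmp \
      --cap-drop=ALL \
      --security-opt=no-new-privileges \
      --user 1000:1000 \
      --memory 512m --pids-limit 256 \
      example/app:1.2.3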

This one is my favourite: https://cheatsheetseries.owasp.org/cheatsheets/Kubernetes_Se... -- what an essay. Keep in mind someone has to do that _and_ secure the underlying hosts themselves for there is an operating system there too.

And then this bad boy: https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR... -- again, you have to do this kind of stuff _again_ for the OS underneath it all _and_ anything else you're running.

[1] https://medium.com/@ayoubseddiki132/why-running-docker-conta...

[2] https://wonderfall.dev/docker-hardening/

[3] https://www.isoah.com/5-shocking-docker-security-risks-devel...

[4] https://kubernetes.io/docs/tasks/administer-cluster/securing...

They have their place in development and automated pipelines, but when the option of running on "bare metal" is there you should take it (I actually heard someone call it that once: it's "bare metal" if it's not in a container these days...)

You should never confuse "trivial" with "good". ORMs are "trivial", but often a raw SQL statement (done correctly) is best. Docker is "good", but it's not a silver bullet that just solves everything. It comes with its own problems, as seen above, and they heavily outweigh the benefits.

> Explain in detail how being able to run services by running "docker run" is "overkill". Have you ever gone through an intro to Docker tutorial?

Ah! I see now. I don't think you work in operations. I think you're a software engineer who doesn't have to do the Ops or SRE work at your company. I believe this to be true because you're hyper-focused on the running of the containers but not the management of them. The latter is way harder than managing services on "bare metal". Running services via "systemctl" commands, Ansible Playbooks, Terraform Provisioners, and so many other options, has resulted in some of the most stable, cheap to run, capable, scalable infrastructure setups I've ever seen across three countries, two continents, and 20 years of experience. They're so easy to use and manage, the companies I've helped have been able to hire people from University to manage them. When it comes to K8s, the opposite is completely true: the hires are highly experienced, hard to find, and very expensive.

It blows my mind how people run so much abstraction to put x86 code into RAM and place it on a CPU stack. It blows my mind how few people see how a load balancer and two EC2 Instances can absolutely support a billion dollar app without an issue.

> You're just publicly stating your ignorance. Do yourself a favor and check Ubuntu's microk8s. You're mindlessly parroting cliches from a decade ago.

Sure, OK. I find you hostile, so I'll let you sit there boiling your own blood.


What is your opinion on podman rootless containers? In my mind, running rootless containers as different OS users for each application I'm hosting was an easy way of improving security and making sure each of those services could only mess with their own resources. Are there any known issues with that? Do you have experience with Podman? Would love to hear your thoughts.

That sounds like a great option to me. The more functionality you can get out of a container without giving up privileges, the better. Podman is just a tool like any other - I'd happily use it if it's right for the job.

All I would say is: can you run that same thing without a containerisation layer? Remember that with things like ChatGPT it's _really_ easy to get a systemd unit file going for just about any service these days. A single prompt and you have a running service that's locked down pretty heavily.
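
For the record, a rough sketch of what such a unit can look like (the service, user and paths are made up; Navidrome just because it came up elsewhere in the thread):

    sudo tee /etc/systemd/system/navidrome.service >/dev/null <<'EOF'
    [Unit]
    Description=Navidrome music server
    After=network.target
    [Service]
    # assumes a dedicated "navidrome" user exists
    User=navidrome
    ExecStart=/usr/local/bin/navidrome --configfile /etc/navidrome/navidrome.toml
    Restart=on-failure
    # a few of systemd's own sandboxing knobs, no container layer needed
    NoNewPrivileges=true
    ProtectSystem=strict
    ProtectHome=true
    PrivateTmp=true
    # creates /var/lib/navidrome and keeps it writable despite ProtectSystem=strict
    StateDirectory=navidrome
    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl daemon-reload && sudo systemctl enable --now navidrome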


Yeah I could run them as regular systemd daemons themselves, but I would lose the easy isolation between different services and main OS. Feels easier to limit what the services have access to in the host OS by running them in containers.

I do run the containers as systemd user services however, so everything starts up at boot, etc.


I completely disagree.

> Docker adds indirection on storage, networking, etc.,

What do you mean by "indirection"? It adds OS level isolation. It's not an overhead or a bad thing.

> makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

Literally the entire self-hosted stack can be updated and redeployed with a couple of commands:

      docker compose pull
      docker compose build
      docker compose up -d
Self hosting with something like docker compose means that your server is entirely describable in 1 docker-compose.yml file (or a set of files if you like to break things apart) + storage.

You have clean separation between your applications/services and their versions/configurations (docker-compose.yml), and your state/storage (usually a NAS share or a drive mount somewhere).

Not only are you no longer dependent on a particular OS vendor (wanna move your setup to a cheap instance on a random VPS provider but they only have CentOS for some reason?), but the clean separation of all the parts also allows you to very easily scale individual components as needed.

There is 1 place where everything goes. With the OS vendor package, every time you have to check: is it in a systemd unit? Is it a config file in /etc/? wth?

Then the next time you're trying to move the host, you forget the random /etc/foo.d/conf change you made. With docker-compose, that change has to be stored somewhere for docker-compose to mount or rebuild from, so moving is trivial.
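
e.g. the whole "move to a new host" story becomes roughly (paths and hostname made up):

    rsync -a /opt/stack/ newhost:/opt/stack/   # docker-compose.yml plus bind-mounted data
    ssh newhost 'cd /opt/stack && docker compose up -d'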

It's not Nixos, sure, but it's much, much better than a list of APT or dnf or yum packages and scripts to copy files around.


Tools like Ansible exist and can do everything you mention on the deploy side and more, and are also cross platform to a wider range of platforms than Linux-only Docker.
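
On the deploy side, a rough sketch of what that looks like (host group, package and service names are made up, and an inventory file is assumed):

    cat > site.yml <<'EOF'
    - hosts: homelab
      become: true
      tasks:
        - name: Install the service from the vendor repo
          ansible.builtin.package:
            name: navidrome        # example; actual availability varies by distro
            state: latest
        - name: Keep it enabled and running
          ansible.builtin.systemd:
            name: navidrome
            state: started
            enabled: true
    EOF
    ansible-playbook -i inventory.ini site.yml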

Isolation technologies are also available outside of docker, through systemd, jails, and other similar tools.


> Tools like Ansible exist and can do everything you mention on the deploy side and more (...)

Your comment is technically correct, but factually wrong. What you are leaving out is the fact that, in order to do what Docker provides out of the box, you need to come up with a huge custom Ansible script to even implement the happy path.

So, is your goal to self host your own services, or to endlessly toy with the likes of Ansible?


Is your goal to run your own services, or to understand them? The two are not mutually exclusive, and one can certainly understand containers, but the general vibe from this thread seems to be “I like containers because I don’t have to understand the magic they’re doing.”

Funny, because it seems to me that the general vibe is "I don't like containers because I can't learn something new".


