The difference between a native package manager provided by the OS vendor and Docker is that a native package manager lets you upgrade parts of the system underneath the applications.
Let's say some Heartbleed (which primarily affected OpenSSL) happens again. With native packages, you update the package, restart the handful of services that link against it via shared libraries, and you're patched. OS vendors are highly motivated to ship these updates, and often get pre-announcement info about security issues, so it tends to go quickly.
With Docker, someone has to rebuild every container image that contains a copy of the library. This will necessarily lag and be delivered piecemeal - if you have 5 containers, each of them needs its own update, which, if you don't self-build and self-update, can take a while and is substantially more work than `apt-get update && apt-get upgrade && reboot`.
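To make the contrast concrete, here's a rough sketch of the two update paths (the image name `myapp` and the service names are placeholders, and your deployment commands will differ):

```shell
# Native packages: one upgrade patches every dynamically linked consumer.
apt-get update && apt-get upgrade          # pulls the patched libssl
systemctl restart nginx postgresql         # restart services still holding the old .so

# Docker: every image embedding the library needs its own rebuild and redeploy.
docker build --pull -t myapp:latest .      # --pull refreshes the base image first
docker compose up -d                       # restart containers on the new image
# ...and repeat for each image you run, or wait for upstream to publish one.
```

The `docker` half also assumes the base image itself has already been rebuilt upstream with the fixed library, which is exactly the lag being described.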
Incidentally, the same applies to most languages that prefer/require static linking.
As mentioned elsewhere in the thread, it's a tradeoff, and people should be aware of the tradeoffs around update and data lifecycle before making deployment decisions.
> With docker, someone has to rebuild every container that contains a copy of the library.
I think you're grossly overstating how much work it takes to refresh your containers.
In my case, I have personal projects with nightly builds that pull the latest version of the base image, and the services are redeployed right under your nose. All it took was adding a cron trigger to the same CI/CD pipeline.
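For what it's worth, the cron trigger can be as small as this (a GitHub Actions sketch - the comment doesn't name the CI system, and the registry/image names are placeholders):

```yaml
name: nightly-rebuild
on:
  schedule:
    - cron: "0 4 * * *"   # rebuild every night at 04:00 UTC
jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build against the latest base image
        run: docker build --pull -t registry.example.com/myapp:latest .
      - name: Push the refreshed image
        run: docker push registry.example.com/myapp:latest
        # the redeploy step depends on your setup (compose pull, rollout restart, etc.)
```

`--pull` is the important bit: without it, `docker build` happily reuses a stale cached base image.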
I'd argue that the percentage of homelab folks who have a whole CI/CD pipeline to update code and rebuild every container they use is very small. Most probably YOLO `docker pull` it once and never think about it again.
TBH, a slower upgrade cycle may be tolerable inside a private network that doesn't face the public internet.
> I'd argue that the percentage of homelab folks who have a whole CI/CD pipeline to update code and rebuild every container they use is very small.
What? You think the same guys who take an almost militant approach to how they build and run their own personal projects would somehow fail to be technically inclined to automate tasks?
If your images share the same base image, then the libraries exist on disk only once and you get the same benefit as a non-Docker setup.
This depends on the storage driver, though. It holds at least for overlayfs, the default and most common driver [1].
[1] https://docs.docker.com/engine/storage/drivers/overlayfs-dri...
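As a concrete sketch (image names are illustrative): two Dockerfiles that start `FROM` the same base share that base's layers on disk:

```dockerfile
# service-a/Dockerfile
FROM debian:bookworm-slim        # base layers, stored once under overlayfs
COPY service-a /usr/local/bin/

# service-b/Dockerfile
FROM debian:bookworm-slim        # same digest -> same layers, no second copy
COPY service-b /usr/local/bin/
```

`docker history <image>` and `docker system df -v` will show the shared layers counted once. The caveat from upthread still applies, though: that shared copy only gets patched when the base tag is updated and both images are rebuilt against it.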