
Just make sure your host workstation has automatic security updates turned on, but otherwise, yeah, letting Docker manage all the services is totally fine.
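For concreteness, a minimal sketch of what that looks like on a Debian/Ubuntu host (assumption: other distros have their own equivalents, e.g. dnf-automatic on Fedora):

    # Install and enable unattended security updates
    sudo apt-get install unattended-upgrades
    # Writes /etc/apt/apt.conf.d/20auto-upgrades so they run on a timer
    sudo dpkg-reconfigure -plow unattended-upgrades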


Yeah, I'd probably do something else "in production", but since it hasn't caused a problem in ~3 years of use, and the cost of it breaking is effectively zero because it's only for our own use, I'm just letting Docker figure it out. If it ever breaks I'll write some systemd unit files (or whatever they call them), but until then it's one less thing to worry about, to back up, and to reconfigure on restoration, etc.
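"Letting Docker figure it out" here amounts to a restart policy; a sketch, assuming the plain Docker CLI (the container and image names are made up):

    # Restart on crash and on reboot, unless explicitly stopped
    docker run -d --restart unless-stopped --name myservice myimage:latest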

My main operational pain is ZFS. Every time I have to touch it, I'm terrified I'll destroy all my data. It's like Git: "I want to do [extremely common thing], how can I do that?" "Great, just do [list of arcane commands, zero of which obviously relate to the thing you want to do], but don't mess up the order or typo anything, or your system is hosed." Yeah, super cool. Love the features, hate the UI (again, much like Git).
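To illustrate the kind of thing being described: replacing a failed disk, with hypothetical pool/device names, plus a recursive snapshot first out of exactly this fear:

    # Snapshot everything before touching the pool
    zfs snapshot -r tank@pre-maintenance
    # Identify the faulted device
    zpool status tank
    # Old device first, new device second -- the order matters
    zpool replace tank /dev/sdb /dev/sdc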


Docker is very bad for security due to its large attack surface.


Using container features to limit access of a program to the broader machine (disk, network, other processes) seems like it would tend to be more secure than... not doing that. Right? It's not as if I'm exposing any docker remote-control-related stuff to the network.
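That is roughly what the flags below do; a sketch of a locked-down container, assuming the app needs neither network access nor root (the image name is made up):

    # --read-only: immutable root filesystem
    # --network none: no network access at all
    # --cap-drop ALL: drop every Linux capability
    # --user: run as an unprivileged UID/GID
    docker run -d --read-only --network none --cap-drop ALL \
      --security-opt no-new-privileges --user 1000:1000 myimage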


No. What you're thinking of is sandboxing, which is not Docker's main objective and can be done with better-suited tools like firejail.
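For comparison, a firejail invocation covering the same ground, with no daemon involved (the program name is made up):

    # Namespaces + seccomp around a single process, no root daemon
    firejail --net=none --private --caps.drop=all untrusted-program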

Docker adds its own daemon, which creates additional attack surface that you would not otherwise have.
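The usual illustration of that point: the daemon runs as root, and its socket is root-equivalent, since anything that can reach it can start privileged containers:

    # Anyone who can write to this socket effectively has root on the host
    ls -l /var/run/docker.sock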



