Doesn't statically compiling programs solve the deployment issue better? I mean, as far as I can tell Docker largely exists because glibc can't really be linked statically, so it's very hard to make Linux binaries that are even vaguely portable.
Except now Go and Rust make it very easy to compile static Linux binaries that don't depend on glibc, and even cross-compile them easily.
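As a sketch of what that looks like (assuming a Go toolchain is installed; `app` is a hypothetical main package, not from the thread):

```shell
# Produce a fully static Linux binary with no glibc dependency.
# CGO_ENABLED=0 drops cgo, so nothing links against libc at all.
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app .

# Sanity checks (expected results on a typical toolchain):
#   file app   -> "... statically linked ..."
#   ldd app    -> "not a dynamic executable"
```

Setting GOOS/GOARCH is also all it takes to cross-compile, e.g. GOARCH=arm64 from an x86 workstation.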
Binaries are one thing, but containers bring other abstractions around networking and storage.
You declare your app's network surface (e.g. open ports), filesystem mounts, environment variables (12-factor style), etc.
Your application becomes a block that you can assemble for a particular deployment; add some environment variables, connect a volume with a particular driver to a different storage backend, connect with an overlay to be able to talk to other containers privately across different servers or even DCs, etc.
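A hedged illustration of that "assemble a block per deployment" idea, as plain Docker commands (Docker must be installed; the image, volume, and network names are made up for this example):

```shell
# Private cross-host network (overlay networks require swarm mode).
docker network create -d overlay --attachable app-net
# A named volume; swap --driver for a different storage backend in prod.
docker volume create --driver local app-data

docker run -d \
  -e DATABASE_URL="postgres://db.internal/app" \
  -v app-data:/var/lib/app \
  --network app-net \
  -p 8080:8080 \
  example/app:1.0
# -e : 12-factor config via environment variables
# -v : storage abstracted behind a volume driver
# --network : private connectivity to other containers
# -p : the app's exposed network API
```

The same image runs unchanged in every environment; only the flags differ.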
It's really all about layers of abstraction for operating an application and deploying it to different environments.
With the latest container orchestration tools, you can have a catalog of application templates defined simply in YAML, and it's very easy to run them anywhere. Add some autoscaling and rolling upgrades and it becomes magic for ops (not perfect yet, but check out the latest Kubernetes to see new advancements in this space).
With the proper tools and processes, this removes a lot of complexity.
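For a sense of what such a YAML template looks like, here is a minimal hypothetical Kubernetes Deployment (the names and image are made up, not from the thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # rolling upgrades out of the box
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:1.0
        ports:
        - containerPort: 8080
```

Pair it with a HorizontalPodAutoscaler and you get the autoscaling part too.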
> add some environment variables, connect a volume with a particular driver to a different storage backend, connect with an overlay to be able to talk to other containers privately across different servers or even DCs, etc.
But environment variables already exist without Docker.
Volumes already exist, aka partitions.
"Overlay networks" already exist, aka Unix sockets or plain TCP/UDP/etc. over the loopback interface.
I'm not trying to be a dick here, it's just that the points you brought up don't really bring anything new to the table. How is this different from just having a couple of bare-metal or virtual machines behind a proxy?
There are aspects of containerization that pay off, but only at certain scales, and the points you brought up make me question whether you might be over-engineering things a bit.
Those things exist, but you need the "setup" bit to achieve the level of isolation that you want.
For example, volumes: With Kubernetes (on Docker), the lifetime of the volume mount is handled for you. No other containers have access to the mount. Container dies, mount dies. Whereas on plain Linux, mounts stay. You need cleanup, or you need to statically bind apps to their machines, which will seriously limit your ability to launch new machines -- there will be a lot of state associated with the bootstrapping of each node. Statefulness is the enemy of deployment, so really what you want is some networked block storage (EBS on AWS, for example) plus an automatic mount/unmount controller, thereby decoupling the app from the machine and allowing the app to run anywhere.
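That decoupling is roughly what a PersistentVolumeClaim expresses in Kubernetes. A hypothetical sketch (names and image invented; the actual storage class and backend, e.g. EBS, depend on the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:1.0
    volumeMounts:
    - name: data
      mountPath: /var/lib/app
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```

The pod references the claim, not a device or a machine; the cluster attaches and mounts the backing block storage wherever the pod happens to be scheduled, and unmounts it when the pod goes away.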
Environment vars are inherited and follow the process tree, so those are solved by Linux itself.
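A two-line demonstration of that inheritance (plain POSIX shell, nothing container-related needed):

```shell
# Environment variables follow the process tree: a child process
# inherits its parent's environment with no extra machinery.
export GREETING=hello
sh -c 'echo "$GREETING"'    # the child shell prints: hello
```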
Process trees also handle "nesting": parent dies, children die. But a child process can spawn its own child that detaches (e.g. by double-forking), and when the parent terminates, that detached process survives -- now you have orphaned process trees. The Linux solution is called cgroups, which lets you associate a process tree with a group that children cannot escape from. So you use cgroups, and write state management code to clean up an app's processes.
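The raw mechanism looks roughly like this (cgroup v2, requires root; `$APP_PID` is a placeholder for the app's top-level process, and the `cgroup.kill` file needs a recent kernel, 5.14+):

```shell
# Create a group and move the app's process into it;
# every process it forks from then on lands in the same group.
mkdir /sys/fs/cgroup/myapp
echo "$APP_PID" > /sys/fs/cgroup/myapp/cgroup.procs

# Later, kill everything the app ever spawned,
# detached grandchildren included:
echo 1 > /sys/fs/cgroup/myapp/cgroup.kill
```

Container runtimes are, to a large extent, the "state management code" around exactly this.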
I could go on, but in short: You want the things that containerization gives you. It might not be Docker, although any attempt to fulfill the principles of containerization will eventually resemble Docker.
You now have generic interfaces (Dockerfile, docker-compose, Kubernetes/Rancher templates, etc.) to define your app and how to tie it together with the infrastructure.
Having these declarative definitions makes it easy to link your app with different SDN or SDS solutions.
For example, RexRay for the storage backend abstraction of your container:
You can have the same app connected to either ScaleIO in your enterprise or EBS as storage.
We are closer than ever to true hybrid cloud apps and it's now much easier to streamline the development process from your workstation to production.
This sounds exactly like the "It's the future!" guy in the original post...
Have to admit, as a fellow Go dev with single-binary static compiles, I don't really GET why I need Docker... all it seems to offer is an increased workload and a complicated build process.
You don't really, but tools like Kubernetes, which are really useful if you're deploying a number of heterogeneous apps, expect a container format, as they aim at a market wider than just golang. The overhead of putting the service inside Docker and following 12-factor is minimal and largely worth it, but if you're only running a single Go binary, you could legitimately go other ways.
Something like kubernetes also lets you abstract away the lock-in of your cloud infrastructure, so whilst it adds another layer and a bit of complexity, it again is arguably worth the effort if you're worried about needing to migrate away from your current target for some reason in the future.
As a framework it abstracts apps from infrastructure quite well. It's super easy for me to replace my log shipping container in kubernetes and have most things continue to work, as all the apps have a uniform interface.
Nobody's saying you can't build these things without Kubernetes, but it definitely gives me more of these things than configuration management systems currently do. Personally, I'd rather aim at the framework that handles more of what I need it to do.
Finally, bootstrapping a kubernetes cluster is actually quite trivial and you can get one off the shelf in GKE, so I'm not really sure why I'd personally want to go another route.
In my humble case, Docker solves the problems I have managing the systems my application runs on (and that's mainly it). A single Dockerfile of 20-30 lines describes a whole system (operating system, versions, packages, libraries, etc.), and, cherry on the cake, I can version it in my git repository.
This is not revolutionary in itself, but having the creation and deployment of a server be 100% replicable (+ fast and easy!) across dev, preproduction, and production environments, all managed with my usual versioning tool -- that is something I appreciate very much.
Sure, there are other tools to do the same, but docker does the job just fine.
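To make the "20-30 lines describes a whole system" point concrete, a hypothetical minimal Dockerfile (base image, package names, and paths are invented for illustration):

```dockerfile
FROM debian:12-slim

# OS packages, pinned by the base image's snapshot
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# The application itself
COPY app /usr/local/bin/app

EXPOSE 8080
CMD ["app"]
```

The whole stack -- OS, system packages, binary, and runtime interface -- lives in one file that diffs and versions like any other source file.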
Isolation is a strong argument. You don't want one process to starve another. You can get isolation via one host per service, or you can get it using cgroups. Docker sort of gives you both, without the waste of one host per service and with a manageable set of tooling around cgroups.
Yes yes yes. We're nearly 100% Go on the backend and deployment is a breeze. We don't use Docker because it wouldn't give us anything beyond more things to configure. Our CI deploys binaries to S3 and the rest is just as easy.
Namespaced filesystem shouldn't even be a special requirement - your program should use relative or at least configurable paths. I mean, directories are namespaced filesystems.
Your program doesn't see what else is running on the system. It also removes possible conflicts over shared libraries and other system-wide dependencies.
This kind of isolation is not only good for app bundling as a developer, but even more important as an operator in a multi-tenant scenario. You throw in containers and they don't step on each other's toes. Plus, the system stays clean and it's easy to move things around.
> Except now Go and Rust make it very easy to compile static Linux binaries that don't depend on glibc, and even cross-compile them easily.
Hell I think it's actually not even that hard to do with C/C++: https://www.musl-libc.org/how.html
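Roughly (assuming the musl package is installed, which provides the `musl-gcc` wrapper; the hello program is just a stand-in):

```shell
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF

# Link against musl statically instead of glibc.
musl-gcc -static -Os hello.c -o hello
# file hello -> "... statically linked ..."
```

musl is designed to be linked statically, so the resulting binary runs on basically any Linux kernel with no libc on the target at all.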
If I have a binary built by Go, what problems does Docker solve that just copying that binary to a normal machine doesn't?