Does container mean something different to y’all than it does to me?
I ask because I read your comment as saying “the advantage of Docker is that it uses (explanation of what containers are),” and the parent comment as saying “all I want from Docker is (explanation of what containers are).” So I am confused about two things: (a) why y’all are not just saying “containers,” but rather “the part of Docker that packages up my network of scripts so I can think about it like a statically linked binary”; and (b) why you think this is a competitive advantage over the other things you might have recommended here instead (Buildah, Makisu, BuildKit, img, Bazel, FTL, Ansible Container, Metaparticle... I am sure there are at least a dozen) to satisfy the parent comment’s needs.
Is there really any container ecosystem which has write-an-image-but-you-can’t-run-it-locally semantics? How do you finally run that image?
Docker is too general, too much of a Swiss army knife, for this particular problem. The problem I am talking about is a C++ program that has all of its dependencies vendored into the source tree. When you run Make, everything, including the dependencies, builds at the same time. All you need is a chroot, namespaces, cgroups, btrfs, squashfs--plain old Linux APIs--to make sure the compiler has a consistent view of the system. Assuming the compiler and filesystem are well behaved (e.g., they don't insert timestamps), you should be able to take a consistent sha256sum of the build. And maybe even ZIP it up like a JAR and pass around a lightweight, source-only file that can compile and run (without a network connection) on other computers with the same kernel version.
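For concreteness, here's a minimal C sketch of that idea: build inside a private mount namespace plus a chroot, then hash the artifact. Everything specific here is an assumption--the ./rootfs directory, the /src layout, and the artifact name are all hypothetical, and you'd need root (or a user namespace) for unshare and chroot to succeed:

    /* Sketch: isolate a build with plain Linux APIs, no container
       runtime. Assumes ./rootfs holds the toolchain and the vendored
       source tree under /src. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* Detach from the host's mount table; needs CAP_SYS_ADMIN,
           or add CLONE_NEWUSER to do it unprivileged. */
        if (unshare(CLONE_NEWNS) != 0) {
            perror("unshare");
            return 1;
        }
        /* Pin the compiler's view of the filesystem to the rootfs. */
        if (chroot("./rootfs") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* Build, then hash the output; with a well-behaved compiler
           (no embedded timestamps) the digest should match across
           machines with the same inputs. */
        return system("make -C /src && sha256sum /src/prog");
    }

That is obviously not the whole story (you still need to populate the rootfs deterministically), but it shows how little of Docker this problem actually requires.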
Again, Bazel is basically this already. But it would be nice to have something like OP's tool to integrate into other build systems.
I could just make a Dockerfile and say that's my build system. But then I'm stuck with Docker. The only way to run my program would be through Docker. Docker doesn't have a monopoly on the idea of a fully-realized chroot.
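To make the "no monopoly" point concrete: an image exported with docker export is just a tarball of a root filesystem, and once it's unpacked, plain chroot plus exec runs it with no daemon involved. A rough sketch, with hypothetical paths (./rootfs as the unpacked image, /app/hello as the program inside it):

    /* Sketch: run a program out of an unpacked image rootfs with no
       container runtime in the loop. Needs root (CAP_SYS_CHROOT). */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Enter the unpacked image filesystem. */
        if (chroot("./rootfs") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* Exec the program the image ships; Docker is not involved. */
        execl("/app/hello", "hello", (char *)NULL);
        perror("execl"); /* reached only if exec failed */
        return 1;
    }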
For some scenarios, most (all?) container ecosystems have write-an-image-but-you-can’t-run-it-locally semantics.
My build server is x64, but the target output is ARM. I can't exactly run that locally super easily. Perhaps somebody has created a container runtime that will detect this, automatically spin up a qemu container running an ARM host image, and forward my container run request (and image) to that emulated system, but I haven't heard of that feature. (Not that I actually looked for it.)