Podman Desktop 1.2 Released: Compose and Kubernetes Support (podman-desktop.io)
206 points by twelvenmonkeys on July 14, 2023 | 125 comments


I'm very happy to see better support for docker compose. I think about 50% of the time I find Kubernetes used in production/development where Compose along with a simple Terraform or Pulumi deploy script would have sufficed.


Docker compose is extremely basic. Even on a single node, Kubernetes is more capable than docker compose. Ingresses, Services, Deployments, Cronjobs, Statefulsets. Awesome operators. Cert-manager. There are so many things I could easily do with Kubernetes that would require countless unmaintainable bash scripts and ad-hoc approaches with docker-compose.


> Docker compose is extremely basic

Yup thanks for noticing, that's why I use it


Yes, but 50% of the projects the parent poster sees, and 90% of the projects that I see, do not require any of that.


For me, I really enjoy how basic it is. I’ve never missed any of these things for my simple single-host scenarios.


Simple setups will be as simple with Kubernetes.

However, with Kubernetes your infrastructure will be ready to scale. You need to expose both front-end and back-end services under the same host? No need to tinker with nginx configs; you just create two ingresses and do it in a standardized way. You need to run your service with two replicas and rolling zero-downtime updates? Kubernetes has that out of the box. Good luck implementing all the necessary dances with a custom docker-compose setup without dropped requests. You want to customize healthchecks? Kubernetes has you covered for every imaginable scenario.
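
To make the ingress point concrete, here's roughly what one of those two ingresses looks like (hostname and service names are made up; the second one is identical except for the path and backend service):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: frontend
  spec:
    rules:
      - host: app.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: frontend
                  port:
                    number: 80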

The only missing block in Kubernetes is support for building containers. That is implemented extremely elegantly and simply in docker-compose. That I admit.


> Simple setups will be as simple with Kubernetes.

That is true if you already have Kubernetes. If you don't, then you still need to run and configure the Kubernetes control plane (e.g. kube-apiserver, etcd, the scheduler). Doing that alone may exceed the complexity of a simple setup.

I say this as someone who has looked at Kubernetes a lot and wanted to use it, but could never justify it. I have concluded multiple times that docker compose is a better fit for my use case.

> However with Kubernetes your infrastructure will be ready to scale.

True, but for many purposes you aren't gonna need it.

https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it


k3s is a super easy way of deploying Kubernetes on a Linux machine.
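
For reference, the documented quick start really is a one-liner, plus a sanity check:

  curl -sfL https://get.k3s.io | sh -
  sudo k3s kubectl get nodes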


I haven't used k3s, but I will check it out.


kubeadm init is all you need to run and configure everything. It's installed via deb/rpm.
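
For anyone curious, the happy path is roughly this (assuming a container runtime is already installed; the CNI manifest placeholder is whichever network plugin you pick):

  sudo kubeadm init
  mkdir -p $HOME/.kube
  sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  kubectl apply -f <your-chosen-cni-manifest.yaml>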


I almost spilled my coffee, seriously.

I hope you're not putting something like that in production.


Statements like these offer zero value, maybe you could qualify it with some reasoning?


kubeadm alone is maybe 40% of what a production Kubernetes platform is. Just running init and deploying your app is good for a toy project or really small scale cluster.

Every time you hear yourself saying Kubernetes, "only", and "just" in the same sentence, please pause for a moment.


I've been running a kubeadm-powered cluster in production for a year and couldn't be happier about it. Absolutely zero issues.


That's not the point. I too have managed kubeadm clusters mostly without issues. The tool is solid.


How many lines of unnecessary code have been written to be "ready to scale" for scale that never happens? This is an anti-pattern, and I would caution anyone against falling into this mindset where they feel they need Kubernetes even for simple things. I urge people to start simple, focus on features, and deal with complexity when the need actually arises.

This Cult of Kubernetes has to go.


I'm just using Docker alone, sometimes with some set of AWS services - agreed completely. So simple to iterate, test and deploy. The performance and even reliability of stuff now is impressive. One AMD 7950X3D box on a 5-gigabit AT&T fiber connection. Price/perf/simplicity is amazing.


Most people don't really need things like rolling zero-downtime updates, though. If you deploy your service once per week and it takes 5 seconds to restart, that's still five 9s of availability, which is more than enough for tons of applications. A single Scala+postgres service on my 7-year-old quad-core machine can handle tens of thousands of HTTP requests per second, which is plenty of scalability for most applications.
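
The arithmetic checks out:

  1 week              = 604,800 s
  5 s downtime/week   = 5 / 604,800 ≈ 0.0008% downtime
  availability        ≈ 99.9992% (five nines allows ~6 s/week)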


> However with Kubernetes your infrastructure will be ready to scale.

90% of projects won't need to scale. You're paying up front for something you won't use most of the time.

Speaking as a k8s admin.


Premature optimization is the root of all evil.


> Simple setups will be as simple with Kubernetes.

"Simple setups" in Kubernetes will have massive amounts of complexity and overhead, they're just buried in a different abstraction layer.

If you have "free" k8s and don't need to touch it operationally? Sure, do everything with it!

But somebody is going to have to run that k8s environment. If it's you, and you're not a k8s expert, then you'd better buckle up, because all that hidden complexity is now your problem.

In the world where you need to deploy "an app" to "a VM", the simplicity of docker-compose is a sweet spot that any developer can grok without needing an encyclopedic knowledge of how to manage it.
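
As a concrete point of comparison, the entire "an app on a VM" deployment can be a single file like this (image names are placeholders):

  services:
    app:
      image: registry.example.com/myapp:latest
      ports:
        - "80:8080"
      restart: unless-stopped
    db:
      image: postgres:15
      volumes:
        - dbdata:/var/lib/postgresql/data
  volumes:
    dbdata: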


I wonder if there are any plans to add a docker compose-like API to Kubernetes? I know Kubernetes shouldn't be used for that, but there must be a better way than using docker compose or manually building containers. I don't know a lot about DevOps, so I sometimes just build my containers manually with a Dockerfile, but it's not ideal for keeping dependencies up to date.


> Awesome operators

https://github.com/operator-framework/awesome-operators (archived since 2021) and now https://operatorhub.io/

I hadn't heard of this, interesting. Layers and layers of abstractions. What an interesting way to solve things. "It's YAML all the way down"?


If you can grasp the layers of abstraction, it's a really, really fast way to ship operational solutions. YAML aside, it's great for smaller teams and empowers them to do a lot more than they could traditionally do.


What are some of the most common use cases you can think of, like the most popular ones? I'm drawing a blank on how I'd use this.


We use a few operators currently.

- Prometheus operator

Lets us spin up a prometheus monitoring stack on our kubernetes cluster, scraping metrics from our services pretty much automatically.

- Thanos Operator

Aggregates prometheus monitoring across multiple clusters with object storage backends; basically you can store metrics for years and query them with grafana.

- Strimzi Operator (Kafka)

Orchestrates a kafka cluster for kafka connect or even full blown kafka brokers with zookeeper.

- Istio Operator

Builds a service mesh on the cluster, offering MTLS and an envoy load balancer with some incredible flexibility

- Cert-manager operator

Lets us auto-generate certificates for resources via letsencrypt. This one is so nice for internal services and keeps certs valid without any hand-holding (see the sketch after this list).

- Argo CD operator

We use this operator to deploy argocd pipelines and ship new services and updates via github actions in a very standardized way, and it gives our users a UI to see what's going on.

- Actions runner controller

Using this controller we can offload github actions runners to self-hosted runners that automatically scale for pipeline workloads. Another set-up-and-forget system, and it saves us from buying more github actions minutes.

Ultimately we use a ton of operators because they give us an out-of-the-box framework for tooling we need to deploy without reinventing the wheel. Of course some operators aren't worth using, and it's definitely worth checking them out to see if they're worth your time. A lot of them are really open to updates and changes too, so you can help evolve them as you grow into other use cases.

I will say, it's very important to understand what you are abstracting with these; you can absolutely blindly deploy services with operators and have it blow up in your face if you aren't sure what's going on behind the scenes or lack distributed-service knowledge in production.
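
As promised above, here's roughly what the cert-manager case looks like; the issuer name and domain are made up, and you'd create the ClusterIssuer first:

  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: internal-service-tls
  spec:
    secretName: internal-service-tls
    issuerRef:
      name: letsencrypt-prod
      kind: ClusterIssuer
    dnsNames:
      - service.internal.example.com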


I'm running a postgres database with two replicas and continuous S3 backup. It works flawlessly and with zero effort. If I had to implement this setup without an operator, I'd spend a few weeks or more. The operator makes it effortless.
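
For a flavor of it, with an operator like CloudNativePG (just one example; bucket and secret names here are placeholders) the whole setup is roughly this much YAML:

  apiVersion: postgresql.cnpg.io/v1
  kind: Cluster
  metadata:
    name: pg
  spec:
    instances: 2
    storage:
      size: 10Gi
    backup:
      barmanObjectStore:
        destinationPath: s3://my-backup-bucket/pg
        s3Credentials:
          accessKeyId:
            name: s3-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: s3-creds
            key: SECRET_ACCESS_KEY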


Docker compose falls apart when you need something clustered across nodes or any other multi-host deployment. Once you have to rip it apart it becomes way less appealing. Though I haven't tried it in a while and perhaps the ergonomics have changed.


Docker swarm is supposed to be the migration target for these cases, but I’ve never actually used it.


I've done some labbing with Swarm. Swarm is _just different enough_ to be a pain, but just similar enough to fool you into thinking you know what you're doing. Because "swarm" was also confusingly overloaded terminology (at one point there were two things named swarm that worked differently), the tech definitely got a bad rap for a while, and its supporting tooling stagnated. You can use things like Portainer and Swarmpit to get some visibility into your swarm cluster members, deploy workloads, etc., but it definitely feels like a second-class citizen.

Ultimately, I would love for something to exist out there that had the opinions and deployment scope of Swarm in terms of simplicity, but didn't conflate other tools, so that the ergonomics and documentation were better and less confusing. Kubernetes gives you so much, but I do feel like it's a misstep that the only reasonable way we've simplified Kubernetes for production workflows is to pay a large PaaS to manage it for us.


Honestly with a github action and Argo CD workflow the kubernetes patterns aren't that scary. However maybe I'm just too comfortable in the ecosystem at this point.

I will say Kubernetes being as open as it is and all the overlapping tooling would seem incredibly overwhelming for someone trying to enter that space.


Swarm works, but has poor support for volumes, which means it's tricky to run legacy applications on swarm (ones that e.g. upload files to local disk, not s3, or keep state/cache on disk, not in a database).

Ingress is also more complicated/bespoke; the best I've found is traefik with labels for routing/config.
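
For anyone who hasn't seen the pattern, it's labels on the service in your stack file, roughly like this (router name and host are made up):

  services:
    whoami:
      image: traefik/whoami
      deploy:
        labels:
          - traefik.enable=true
          - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
          - traefik.http.services.whoami.loadbalancer.server.port=80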

My advice today would be to scale Docker compose vertically (eg: on a dedicated server) - then move to Kubernetes.

The swarm middle ground isn't really worth it IMNHO.


> Swarm works, but has poor support for volumes - which means it's tricky to run legacy applications on swarm (which eg uploads files to local disk, not s3 - or keeps state/cache on disk, not a database).

One way around that is to use an NFS volume. However, I've hit problems with too many client NFS connections on a docker swarm, and so found it better to mount the NFS volume on each host and use a bind mount instead.
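
In compose terms, the two approaches look like this (server address and paths are placeholders):

  # named volume that mounts the NFS export directly (one client per container)
  volumes:
    appdata:
      driver: local
      driver_opts:
        type: nfs
        o: addr=10.0.0.5,rw
        device: ":/exports/appdata"

  # vs. mounting NFS once per host and bind-mounting it into the service
  services:
    app:
      volumes:
        - /mnt/nfs/appdata:/data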


CSI support will hopefully make this easier. But there are quite a few options once you look deeper.

Also, the volume plugin spec is so simple that it's possible to maintain your own plugin (even without CSI).


My general feeling about adding an NFS dependency to a multi-node swarm is that it's effectively adding a single point of failure to a system which is otherwise somewhat robust against single-node failure...


Traefik is really simple to set up, and I'd bet setting up s3 is 100x easier than kubernetes, no? Unless I'm missing something.

Fwiw I found swarm lovely and just so much easier to work with than anything else solving the same problems.


Porting a ten year old app from local file storage to s3 might not be trivial.

For a new app, one generally should and can embrace the 12 factors and delegate state to stateful services (managed databases, key-value stores, s3, etc.).

Do note that for simple services, local disk can be very hard to beat for low complexity, extremely high performance - with the caveat that horizontal scaling might be tricky.

Edit: also, depending on privacy requirements, self-hosted s3 (eg minio) might be tricky for a production load. OTOH self-hosted Kubernetes is no walk in the park either!


I run it on a personal server, so there is no multi server setup, but I enjoy the fact that it's just an extension of docker compose.


neither has anyone else!

The inevitable "you could have used something simpler than Kubernetes!" comments that appear every time it's mentioned neglect to note that you're more likely to find a Kubernetes example for whatever you're doing readily available in the wild.


I lost track of it; I thought swarm was cancelled.


Docker swarm mode (still around) != docker swarm (cancelled). Poorly named, so the confusion is understandable.


Not sure I follow. Let's say I have two kinds of clusters with these sets of containers: A) 3 microservice containers on each machine, B) a Postgres container and a logging container.

Groups A and B can be implemented with docker compose and deployed to each cluster with Pulumi/Terraform. Two or a hundred, it doesn't really matter how many cluster groupings you have.


Don't you lose a bunch of the compose niceties (without swarm anyway)? Aliases, networks, shared volumes, etc.


I guess they stopped doing this and, judging by their website, now consider the product mature, but Docker Compose used to tell you that it wasn't suitable or intended for production use. That seemed like a pretty good reason not to do it.


Compose has its own footguns, but I've run a number of services with just Compose on a VM or direct on bare metal and the few little issues I've had were easy to debug and fix.

Very much unlike Kubernetes, where even a small deployment can sometimes be maddening to debug. Great for larger deployments where complexity is basically guaranteed, but anything that could run on one server with a hot spare is a great fit for Compose!


Surprised to hear you think debugging in kubernetes is maddening. It’s absolutely very different, but if I had to debug a system I knew nothing about, I’d rather debug a kubernetes based system rather than any other. Standards and all that. Anyways - for a single container or two obviously you’re right - I’m just not sure that’s so common outside of a side project anymore.


Especially because "production" implies an ingress controller, monitoring, etc. Does a Docker Compose-based production setup run its own Nginx with its own manually-wired rules? Its own Prometheus?


I'd wager most production systems don't have those.


Compared to Compose on a single server, Kubernetes deployments can have multiple layers (control plane, containers, usually some sidecars) where little quirks occur.

It almost necessitates deeper monitoring and/or extra tools to give a dashboard/'single pane of glass', whereas with Compose I can just log into the individual server and jump around the logs.


The Kubernetes control plane plays the same role as the docker daemon.

Sidecars do not exist in kubernetes, unless you install something that adds them.

Containers are... containers? The same containers you'd see in docker.

Kubernetes does not need deeper monitoring compared to docker. You can jump around kubernetes logs just like you do with docker. Except you don't even need to log into individual server.

I don't use any kind of dashboard for my kubernetes cluster. I installed kubernetes-dashboard, so developers and managers can enjoy their dashboards but I've found no value with it. kubectl is more than enough.


I've never understood using K8s in local dev over docker-compose, but I can absolutely see why someone would use it production.


If someone is working on complex kubernetes configuration, it's beneficial to have kubernetes in local dev environment, since it's trivial to test changes.


Ah, the 1% of the time you do want to use something from Kubernetes and you have a pile of compose files is pretty demoralising though.


I haven't run into the need to do that, but there is the Kompose project that exists to help with the conversion (https://kompose.io/)!


I wish Podman weren't a Docker replica, but a standalone product that competes in the container ecosystem with innovative features.

Initially the main advantage it had over Docker was rootless mode, but this has been available in Docker for a while now. What currently sets Podman (Desktop/Compose) apart from its Docker counterparts?

I've also run into issues running some containers with Podman that work fine with Docker. I still use it for some simple use cases, but I'm wondering more why bother with a copycat product that only plays catch-up to Docker, and has its own set of issues. And, incidentally, is run by Red Hat, and by extension IBM, which are increasingly hostile to OSS.


Big thing for me is Podman being daemon-less, making it a lot easier to integrate with existing tools since you don't have to treat it as its own thing. It's already had easy systemd integration for a while now[0], and with Quadlet[1] I don't even bother writing compose files anymore; I just add a [Container] or [Volume] section to the same unit-file templates I use everywhere else and it's all taken care of. Though to be honest, I mostly used bash scripts over compose files anyway, since podman-compose never really worked as well.
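
For anyone who hasn't tried Quadlet, a unit is just a file like ~/.config/containers/systemd/web.container (image, port, and volume here are placeholders):

  [Unit]
  Description=Web container

  [Container]
  Image=docker.io/library/nginx:latest
  PublishPort=8080:80
  Volume=web-data:/usr/share/nginx/html

  [Service]
  Restart=always

  [Install]
  WantedBy=default.target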

Someone else has already mentioned built in pod support, plus you can generate kube files from existing pods for easy deployment.

So you end up with this nice progression of managing:

1. one-off/short term containers with podman run

2. longer running containers with podman generated systemd units

3. declarative generation/deployment of the above long-running containers with Quadlet

4. pods and more complicated setups on one machine with kube files

5. pods and more complicated setups across multiple machines with kubernetes

Which is a really nice stack to work with since they all use the same toolbox.

And that's outside of other small features, like not having to worry about the docker package updating, restarting the daemon, and taking it all down; socket activation (through systemd units); and auto-updates.

[0] https://docs.podman.io/en/latest/markdown/podman-generate-sy... [1] https://docs.podman.io/en/latest/markdown/podman-systemd.uni...


Docker's network effect is too important to ignore. That would be like swimming against the current.

For example, Deno (https://deno.land/) initially launched with precarious compatibility with Node.js, but eventually had to make compromises and adapt so Node.js projects could migrate more easily.


All of that can be accomplished by being compatible with the broader OCI ecosystem. But Red Hat explicitly tries to be a Docker replacement (they suggest `alias docker=podman`, which actually breaks in many ways), to the point where I'm not sure why I should bother using a product that will always play catch-up to the real thing. This Desktop and Compose announcement is part of that roadmap, and I'm just wondering what Podman does better than Docker at this point.


I find the Pod and Deployment support in Podman to be a killer feature personally. I've also gotten small apps running manually in podman, and once the config is working, podman can emit k8s compatible yaml that can be re-applied to a different podman or applied to a k8s cluster. Pretty sweet IMHO.
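
For those who haven't seen it, the round trip is short (pod name is a placeholder):

  podman generate kube mypod > mypod.yaml   # emit k8s-compatible YAML
  podman play kube mypod.yaml               # re-apply it on another podman host
  kubectl apply -f mypod.yaml               # or apply it to a real cluster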


That's a good point, thanks. I haven't yet experimented with that.


Do you do embedded? Look at the size of crun vs runc and at the portability of Go vs C.

Do you like having to have a daemon running? I don’t.

Also, the “pod” in the name may yield a few hints.

Finally, despite all the recent and undeserved hate, I know that Red Hat will keep their source open. I have no such faith in Docker.

Have you priced out Docker Desktop and Podman Desktop lately?


> For example, Deno (https://deno.land/) initially launched with precarious compatibility with Node.js, but eventually had to make compromises and adapt so Node.js projects could migrate more easily.

And we have yet to see if that ends up being the right call. These things are always really tough. On the one hand, compatibility with existing stuff makes migration easier, but then if it's not actually different enough, we have no reason to change to the new thing.


> but a standalone product that competes in the container ecosystem with innovative features.

It has tons of features, but people don't pay attention to them because Docker is the lowest common denominator.

For example, there is absolutely no need for Compose. You create a pod and all the containers in it. You can then ask podman to generate all the systemd units for it.
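
Concretely, the Compose-free flow is something like this (names are arbitrary):

  podman pod create --name app -p 8080:80
  podman run -d --pod app --name web docker.io/library/nginx
  podman run -d --pod app --name cache docker.io/library/redis
  podman generate systemd --new --files --name app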

I am also very worried about the IBM issue though, I wonder how long it will be until they slay this golden goose.


Podman can run a socket-activated network server (such as docker.io/library/nginx) with the "--network=none" option. This improves security.
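
A rough sketch of the pattern, based on the podman socket-activation tutorial (port and names are made up, and the image has to understand systemd-style socket activation): systemd owns the listening socket and podman hands the inherited fd to the container, so the container needs no network of its own.

  # web.socket
  [Socket]
  ListenStream=8080
  [Install]
  WantedBy=sockets.target

  # web.service
  [Unit]
  Requires=web.socket
  After=web.socket
  [Service]
  ExecStart=/usr/bin/podman run --rm --network=none docker.io/library/nginx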


Do you run the Docker daemon in rootless mode? Does Rancher default to rootless mode? I hear Docker can run in rootless mode, but does anyone really run it that way? If I want to start a single container in my homedir, I need to start up multiple daemons (dockerd, containerd), and then they run forever even when no container is running. Or I can shut them down until I need to interact with the container again.

Have you tried running with Pods? Have you tried Quadlet? Have you tried to generate kube yaml from running pods and containers on your system? Have you used podman to generate pods and containers from existing kubernetes yaml files? Have you launched containers each in their own User Namespace with --userns=auto?
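
That last one is a single flag; each container gets its own automatically allocated user namespace range, e.g.:

  podman run -d --userns=auto --name web -p 8080:80 docker.io/library/nginx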


Docker still requires a daemon, even in rootless mode. Podman does not.


Just happened on macOS, with this new release:

> Podman Desktop quit unexpectedly.

I was about to ask whether it works on macOS yet. I guess I've got my answer.


Forgive my naïveté regarding Podman. On macOS, does this use the same virtualization approach that docker desktop uses?


To run Linux containers you will always need Linux somewhere. On macOS this would be in a VM that's usually managed by the tooling. Docker Desktop, Rancher Desktop, lima, and others all do this. On Windows you have the Windows Subsystem for Linux v2 that uses a VM.

The tooling usually tries to make this simple and transparent to use.


Fun fact: on Linux, Docker Desktop also uses a managed VM.

https://docs.docker.com/desktop/faqs/linuxfaqs/#why-does-doc...


Yeah that's the worst thing docker did


It sorta makes sense for a bunch of reasons:

1. You only have to deal with one Linux distro in the VM. Managing things across many distros can be difficult at that level.

2. There are times you want to blow away your environment. You can't do this if things are running on the host.

3. You'll want to control the resources your workloads use so they don't make the Linux desktop unresponsive. VMs provide that level of separation.

I could go on but you get the point. If you're a power user who wants it on the host you can do that. For the masses of this kind of product, a VM has a lot of advantages.


If it's hardware accelerated, the performance difference likely doesn't matter. And you're not going to deploy a high-performance container on a desktop environment, since the desktop will consume a much greater share of resources from your host than the VM ever will. For deploying a development environment or self-hosted services it's fine.


It sure does. It crippled my intern's gaming laptop; I only later found out that she was using Docker Desktop instead of docker.io on Linux. After reinstalling, there were no visible performance problems running containers.


Yes. Perhaps OrbStack[1] is what you're looking for.

[1] https://orbstack.dev/


Orbstack also uses a virtual machine.

> How does it work? Why is it fast?

> OrbStack uses a lightweight Linux virtual machine with tightly-integrated, purpose-built services and networking written in a mix of Swift, Go, Rust, and C. See Architecture for more details.


It's using the Apple Virtualization framework + Rosetta acceleration instead of a third-party hypervisor like QEMU. There are ways to do this via the CLI with a simple one-line command if you have Xcode installed.


Thanks, I'm gonna try it out. Sucks that it's closed source, but I do so much work in containers these days that it's worth it just for the speed.


Thank you for that link!

It's a very interesting product, it does use a VM, but seems to have some "special sauce" code to improve the speed of things. It also says it shares the kernel among multiple VMs. Very cool stuff.



Dev here — there's a lot more special sauce than that! https://docs.orbstack.dev/architecture


Interesting, I thought Docker Desktop was doing that too. When I have time (HA!) I will try to compare podman/orb/docker a bit more.


Same basic approach in that it is a Linux VM that actually runs the containers, but it is based on qemu instead of VirtualBox.


Is Docker still using a VirtualBox based solution? I thought they were using the macOS virt framework.


No, it has never used virtual box. Originally it used hypervisor.framework. I wouldn't be surprised if they now use the virtualization framework (newer higher-level abstraction on macOS).


I doubt they're using VirtualBox. VirtualBox didn't even run on Apple Silicon until 9 months ago, but Docker Desktop still worked, so it must have been using something else.


colima allows you to choose between qemu and the apple virtualization framework.

I found there is not much difference in performance between the two on my intel macbook.

I think it won't take long until rancher also updates to the latest lima to support apple virtualization, but I haven't checked the latest release notes.


That's because QEMU can do hardware acceleration on an x86 host but not on the M architecture. As far as I know, only the Apple framework can do hardware acceleration, or use Rosetta 2 to get near-native speed, on an M-series Mac.


Yes.


At work I was initially using podman on an M1 MacBook, but switched to Rancher Desktop + dockerd a couple months ago after having too many issues with podman. Many in my org are also moving away from podman for similar reasons.

I could never get bind mounts working consistently, relying instead on volumes, which are more awkward/less explicit when persisting local DBs used when testing.

I've had zero problems with Rancher Desktop + dockerd - definitely recommend it for M1 users that are having issues with podman.


> I could never get bind mounts working consistently

What type of problems were you seeing? Was it with podman or through podman desktop, and possibly some issue with what it was attempting?

bind mounts are a pretty standard kernel feature, so I'm wondering how podman/podman desktop could have been screwing it up, unless it was some user level permissions thing.


I wish I remembered the exact error; it was just with podman though. Podman desktop being newer, I hadn't really tried it out much.

FWIW, it could simply be chalked up to M1 weirdness. We have an ongoing list of various workarounds for the M1, whereas those using Intel MacBooks seem to be mostly fine.


To all the people pointing out that docker compose doesn't scale: I use docker compose locally in a VS Code dev container and then k8s in production. Running k8s locally seems too complex. Maybe it's not so bad, but I'm happy using a docker compose dev container for development.


To me, docker compose is simply a more readable way of running docker commands; anything you can do with the docker cli, you can express as a docker compose file.

Just like how a shell script is easier to manage than stringing lots of commands in the terminal, defining services in yaml is easier to manage than adding a million flags to your docker commands.

Of course at a certain point you may need further abstractions, but I agree with you that these should only be used if they're actually needed.
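
A small example of the same thing both ways (names are arbitrary):

  docker run -d --name web -p 8080:80 -e FOO=bar \
    -v webdata:/data --restart unless-stopped nginx

  # the equivalent compose service:
  services:
    web:
      image: nginx
      ports:
        - "8080:80"
      environment:
        - FOO=bar
      volumes:
        - webdata:/data
      restart: unless-stopped
  volumes:
    webdata: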


I have no idea why anyone would want to run K8s locally. The concept makes absolutely no sense to me.

If you want to run a container, just run the container? K8s is there for scaling, not for running stuff locally.


What if you need to run 20 services simultaneously? You would either need to maintain two sets of configuration (kubernetes and compose), or use a single kubernetes configuration for production/staging and local development alike.


I'm not happy with Podman Desktop's Svelte GUI.

Antd or MUI (React) feel much more mature.

But they are working on it.

https://github.com/containers/podman-desktop/pull/2863


What's missing to fully replace Docker Desktop?


Nothing for me: I use it as a Docker Desktop replacement at the moment. There aren't so many complex docker-compose setups I have to deal with, but everything works about as expected.

(Although honestly most of the time I'm using Podman it's on native Linux, so I don't use 99% of the stuff that's there anyway.)


That depends on what you need. For most people and use cases you can replace it with something like Rancher Desktop [1]. But there are use cases and situations where you need Docker Desktop.

Disclaimer, I started Rancher Desktop.

[1] https://rancherdesktop.io/


I switched from podman to Rancher Desktop at work recently, and it's been a very smooth experience - thank you for your work!


Lazydocker is all you need to replace docker desktop


When I was running Docker Desktop on Windows, I never used the GUI. Instead I always ran commands (docker or docker-compose) on the command line. In that case, the major benefit to Docker Desktop was having it set up the Linux VM that you run Docker on. That is a huge benefit.

Lazydocker is awesome, though. Thanks for introducing it to me. I just installed it and I'm loving it. I've used Portainer in the past, but didn't like it enough to leave it running continuously.


I've been using Podman Desktop in Win11 for local development for about 6 months and I'd say nothing.


Nothing good? Nothing bad?

I mean... if you're going to say nothing at all, no need to announce it...


The person you're responding to is responding to a comment asking what is missing from Podman Desktop it to replace Docker Desktop. It is missing nothing for them per their message.


ooo thanks


As in nothing is missing to replace Docker Desktop.


thanks, and my bad for not reading carefully


Is it usable on Windows now? I constantly had issues previously, both with Podman Desktop and with Rancher Desktop: the Docker socket not reachable, something stuck, needing to reset the installation constantly. Docker Desktop works quite stably.


Do people still use Vagrant? Last time I tried Docker on Mac, it was painfully slow, so I kept using Vagrant, which used more disk space but was very fast, and the Intel MacBook Pro fan never kicked in once like it did with Docker.


Containers on anything other than Linux run in a virtual machine. Depending on the platform, you may or may not have hardware acceleration: it's either software emulation (slow) or hardware accelerated. On the new M architecture, I believe Docker now supports Rosetta 2 emulation via Apple's Hypervisor framework, so it's near-native performance.

As for Vagrant being faster than Docker, it's possible the qemu backend it uses was hardware accelerated and, for whatever reason, the Docker VM wasn't. Or perhaps you were using Docker Desktop, which does take away a lot of compute capacity? Try using the docker cli without Docker Desktop and see if it works better for you.


Just a nit: Docker on Windows has near-native performance, using WSL2.

WSL2 even supports running Linux GUI applications and GPU passthrough.


WSL2 still runs in a VM so it has the performance implications of that, not that they usually end up being that significant (I have found that memory usage in WSL2 is usually ridiculously high, though).

I think the support for Linux GUI applications is achieved by running an X server on the Windows side of things.


WSL2 uses a Hyper-V virtual machine [1], which is a native hypervisor [2] and doesn't come with most of the performance costs of software VMs.

[1] https://en.wikipedia.org/wiki/Hyper-V

[2] https://en.wikipedia.org/wiki/Hypervisor#Classification


Funny thing is that most of my stuff runs faster in WSL2 than Windows because it bypasses Windows built-in virus scanner.


I don't think speed is a matter of your container engine (Docker vs Vagrant) but of what virtual machine you are using. I usually use Docker without any virtualization, since I'm running linux containers on linux. I'm guessing when you say Vagrant, you're probably using VMware or something. I don't know what Docker on mac might be using, since I avoid using macs.


I guess when I said slow, I meant it uses lots of resources on the parent OS. Yes, it's VirtualBox, I believe.


Yeah, so it's VirtualBox using your resources, not Vagrant or Docker. You might want to read up a bit on how containers work: on a unix system, assuming the same cpu architecture (so you don't need a VM) and a filesystem that supports overlays, containers have barely any overhead in terms of system resources. You could have a thousand containers running on your machine and be fine. VMs, on the other hand, have pretty serious overhead.


Is there a way to make Podman Desktop use the default system connection rather than the default podman machine?

When I do `podman ps` my client connects to my development VM that is not managed by podman, but I can't seem to find a way to make Podman Desktop do that.


podman system connection

That command will let you manage what instance the podman cli is connecting to

https://docs.podman.io/en/latest/markdown/podman-system-conn...
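
For example:

  podman system connection list
  podman system connection default my-dev-vm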


yeah, I've been using this for a while now, but Podman Desktop doesn't seem to care about the setting and wants to make its own VM on my machine


Why does Compose still only work on one server? I'm perpetually perplexed that nobody has patched Compose to manage k8s resources, given it its own multi-node support, or pushed Swarm more. How has nobody fixed this? The tool is 9 years old.


Well there are two things:

Compose, "the binary" [1]. It used to be "just" a Python helper script that called Docker, used to deploy multiple containers on a common network easily. It was rewritten in Go, and now it's kind of a plugin for Docker. I think it's strongly tied to Docker primitives, and it wouldn't be that easy to modify it to work with different container orchestrators.

But there's also Compose, "the file format" [2]. There are tools, like Kompose [3], that understand the Compose format and can use it to deploy to Kubernetes.

  1: https://github.com/docker/compose
  2: https://compose-spec.io/
  3: https://kompose.io/
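
The Kompose flow is a one-liner that emits Kubernetes manifests from a compose file:

  kompose convert -f docker-compose.yml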


At that point you basically have kube yaml, so you might as well use that, since it knows about kube things.



