Hacker News

Oh nice - this is by the Rancher guys. Would love to compare this not with k8s, but with Docker Swarm.

Docker Swarm on Raspberry Pi is a very common thing, so from a performance perspective these seem to be on par.

Another question is around ingress and the network plugin - is it a seamless "batteries included" experience? Because these are two of the biggest pains in k8s to decide on and set up.



Based on the README [0], quickly comparing to Swarm:

- this has the same great K8S API that Swarm lacks. (Deployment is a first-class citizen in kube land, whereas in the Swarm sphere you have to make do with the service YMLs.)

- k3s lacks some in-tree plugins that Swarm might cover (e.g. mounting a cloud-provider-managed block device), but there are out-of-tree addons

- sqlite instead of etcd3 [though etcd3 is available], so out of the box k3s is not HA-ready (if I interpret this [1] right, there's a single API server, and HA support is in development)

- this seems more lightweight than a Docker setup in some sense (it uses containerd, not the full Docker daemon), but of course this needs a proper evaluation

- ingress plugins: good question. it should work with traefik [2]

[0] https://github.com/rancher/k3s/blob/master/README.md#what-is...

[1] https://github.com/rancher/k3s/blob/master/README.md#server-...

[2] https://github.com/rancher/k3s/search?q=ingress&type=Commits
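To make the first point concrete (a minimal sketch - the names, image, and replica count are illustrative): the same "run three replicas of a container" intent expressed as a Kubernetes Deployment, versus the closest Swarm equivalent, a `deploy` block inside a compose-file service.

```yaml
# Kubernetes: Deployment is a first-class object with managed rollouts
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

```yaml
# Swarm: deployment settings live inside the service definition
version: "3.7"
services:
  web:
    image: nginx:1.25
    deploy:
      replicas: 3
```

The Kubernetes version is more verbose, but the Deployment object is what gives you managed rollouts and rollbacks as API operations rather than conventions.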


> great K8S API, that Swarm lacks

Hmm... nothing against k8s, but its deployment API is an abomination on par with AWS CloudFormation.

You need teams of yaml engineers to manage these things.


https://twitter.com/kelseyhightower/status/93525292372179353...

That's the exact problem Rancher itself is trying to solve, and it does a pretty fantastic job at it.

Though saying it's "on par" with CloudFormation suggests that you either know a ton about CloudFormation (anything becomes second nature if you're skilled in it) or you don't know much about either of them. Kubernetes isn't that bad.


It's definitely bad when you have thousands of lines of YAML configuration to maintain.

On the other hand, a thought-out declarative language like HashiCorp's HCL is much saner, thanks to IDE code completion/refactoring and static typing.
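For a sense of the difference, here's a hypothetical HCL fragment (the resource type and fields are illustrative, not a real provider schema) - blocks and attributes have declared types, so tooling can validate and complete them before anything is applied:

```hcl
# Hypothetical resource; shows HCL's typed block/attribute structure
resource "container_service" "web" {
  image    = "nginx:1.25"
  replicas = 3

  port {
    internal = 80
    external = 8080
  }
}
```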


K8s YMLs are the baseline, and there are projects that offer a much more ergonomic description interface. They are ugly partly because they are also statically typed and very extensible.

YAML is 1:1 mappable to HCL (both map to JSON), but the HashiCorp interface seems nicer because it's simpler.
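The mapping is direct because all three serialize the same tree of maps, lists, and scalars (a minimal illustration):

```yaml
# YAML
service:
  name: web
  replicas: 3
```

```json
{ "service": { "name": "web", "replicas": 3 } }
```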


Is "YAML engineer" a real thing? I always considered YAML to be a slightly hacky configuration DSL that I needed to be familiar with, not an actual core career competence.

For example, can I become an INI engineer?


I think that's the joke :)

Of course k8s's YAML API is just a clunky interface, because it's evolving very rapidly and it's a rather low-level thing. So if you work a lot with k8s, it can seem like all you do all day is copy-paste [or generate, or parse] YAMLs.
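The "generate" option is often the sane one. A minimal sketch (the helper name and label convention are mine, not a standard library): build the manifest as a plain data structure and emit JSON, which kubectl accepts just as it accepts YAML.

```python
import json

def deployment(name, image, replicas=1):
    """Build a minimal Kubernetes Deployment manifest as a dict.

    The "app": name label scheme is just an illustrative convention;
    any labels work as long as selector and template agree.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

This is the approach tools like Helm and jsonnet industrialize: one source of truth generates the repetitive boilerplate instead of humans copy-pasting it.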


I think that's the first time I've seen anyone advocate cloudformation above anything.


I cannot read what your parent says as advocacy for CloudFormation, let alone above anything; only that something is as bad as CloudFormation.


The syntax is ugly, the features are amazing. And it's easy to abstract over ugly syntax, but it's hard to work around Swarm's missing features.


>this has the same great K8S API, that Swarm lacks. (Deployment is a first class citizen in kube land, but you only have to make do with the service YMLs in the Swarm sphere.)

With all due respect, some (like me) consider that a feature of Swarm.

If k3s comes with the same API (and implicit yaml complexity) of k8s, but just reduces RAM usage...well that's not very interesting for me.

Docker Swarm is very lean - there are zillions of videos of how people are building interesting raspberry Pi stacks using it (https://youtu.be/mMpZpa7uUSk).

But most people using Swarm are using it for one reason only: simplicity.


I use Docker Compose on a small, cloud-hosted VM to run all my development dependencies and stuff I need for testing (I've got RabbitMQ, Postgres and Splunk running at present). It was really simple to set up, and "just works" - and the nodes don't have to compete with a greedy K8s orchestrator for resources!


Ditto. For a single machine you don't need "orchestration" - you just need something cleaner than a bash script to start/stop and potentially restart your containers.

I set up my home server with minikube more as a learning exercise, but went back to docker-compose for running all the things I actually rely on (Plex for instance) just because it's so much simpler.
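For a single host, a compose file like this is often all the "orchestration" needed (service names, images, and paths are illustrative) - `restart: unless-stopped` covers the restart-on-failure case without any bash wrapper:

```yaml
version: "3.7"
services:
  plex:
    image: plexinc/pms-docker
    restart: unless-stopped
    volumes:
      - ./plex/config:/config
      - ./media:/data
  db:
    image: postgres:15
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: example
```

Then `docker-compose up -d` and `docker-compose down` replace the start/stop scripting.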


Agreed. k8s makes no sense on 1 node. But in my experience Swarm is just not enough when you have more than one node. You have to work around a few missing key features. (Like the management of deployments. With Swarm you have stacks, and you can update them, but you can't easily roll back the update, you can't easily manage/configure the update. And if the update gets stuck, it just gets stuck, it's not managed.)


The stuck-update issue is not a limitation of Swarm; it was a bug that was fixed in 18.06 - https://github.com/moby/moby/issues/37493

Rollback has existed for a while now - https://docs.docker.com/engine/reference/commandline/service... . You can actually tune the parallelism and the delay.
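Concretely, with `docker service update` (the service name and image are illustrative):

```shell
# Roll a service back to its previous spec
docker service update --rollback web

# Tune how an update proceeds: two tasks at a time, 10s apart,
# rolling back automatically if the update fails
docker service update \
  --update-parallelism 2 \
  --update-delay 10s \
  --update-failure-action rollback \
  --image nginx:1.25 \
  web
```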


It includes the flannel network plugin by default (although you can change this) and a basic service load balancer (technically not an ingress, but providing the same functionality). From the README:

> k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.
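So a standard Service of type LoadBalancer is what triggers that host-port behavior (a minimal sketch; name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80        # the host port the built-in LB tries to claim
      targetPort: 8080
```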


Traefik is deployed for layer-7 ingress.
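Meaning a plain Ingress resource should be served by the bundled Traefik without extra setup - a minimal sketch (hostname and backend service are illustrative, using the current `networking.k8s.io/v1` API):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```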



