
As someone who sat in on the product development discussions at aws about EKS, the internal view of k8s was:

* a lock-in strategy by Google to substitute for the fact they don’t yet have systemic abstractions at a provider level. By “owning” the design and engineering around k8s through capture, they can ensure the backing services they build in gcp naturally support k8s users as they develop their roadmap

* the provision of customer-space SDN and infrastructure services via an OS/user-space runtime was seriously weaker than what an infrastructure provider can offer in terms of stability, durability, security, audit, etc.

* the complexity of running an abstraction layer on top of an abstraction layer that provides essentially identical or similar services was crazy

* the semantics of durable stores (queues, databases, object stores, etc.) would never be sufficient in a k8s model compared to a hosted provider service

* bridging k8s into provider durable stores breaks the run-anywhere model, as even stores like s3 that have similar APIs across providers have vastly different semantic behaviors (see the sketch below)

* as such, k8s solved mostly stateless problems, which, being trivial, don’t merit the complexity of k8s

* k8s wasn’t a standard; that requires standardization and a standards body. K8s was a popular solution to a problem that has had many solutions. Google promoting it and investing in it didn’t make it a standard, nor did the passion of the k8s community.

* that said, customers with data center installs would benefit from the software-defined infrastructure and isolation, as most data center installs are giant undifferentiated flat prod blobs of badness. The same could be said for all the various similar solutions to k8s, though, and it wasn’t obvious why k8s was the “right” choice beyond the hype cycle at that time.
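To make the s3 point concrete, here’s a minimal sketch of “similar API, different semantics”, assuming boto3; the non-AWS endpoint URL is a hypothetical placeholder:

    # Sketch: the same boto3 calls can target any S3-compatible store,
    # but API compatibility does not imply semantic compatibility.
    import boto3

    # A client for AWS s3 proper...
    aws = boto3.client("s3")

    # ...and one for some other provider's S3-compatible store
    # (hypothetical endpoint URL).
    other = boto3.client(
        "s3",
        endpoint_url="https://object-store.example-provider.com",
    )

    # Both accept the identical call:
    aws.put_object(Bucket="my-bucket", Key="k", Body=b"data")
    other.put_object(Bucket="my-bucket", Key="k", Body=b"data")

    # But what happens underneath can differ: consistency guarantees,
    # versioning and lifecycle behavior, multipart limits, error codes.
    # Code that is portable at the API level can still break in production.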

Eventually EKS was built to satisfy customers who insisted these issues were just FUD from aws to lock customers into the aws infrastructure. However, what I have seen since is a basic progression: customer uses k8s on prem, is fanatical about its use. They try to use it in aws and it’s about as successful as on prem. Their peers squint at it and say “but wouldn’t this be easier with ECS/Fargate?” The k8s folks lose their influence and a migration to ECS happens. I’ve seen this happen inside aws working with customers, and in three megacorps I’ve worked on cloud strategies for. I’ve yet to encounter a counterexample, and this was sort of what Andy predicted at the time. I’m not saying there aren’t counterexamples, or that this isn’t a conspiracy against k8s to get your dollars locked into aws.

On standards Andy always said that at some point cloud stuff would converge into a standards process but at the moment too little is known about patterns that work for standards to be practical. Any company launching into standards this early would get bogged down and open the door to their competitors innovating around them and setting the future standard once the time is right for it. Obviously not an unbiased viewpoint, but a view that’s fairly canonical at Amazon.



> Eventually EKS was built to satisfy customers who insisted these issues were just FUD from aws to lock customers into the aws infrastructure.

I mean... the customers are not wrong.


At the most charitable minimum, that wasn’t the spirit of the convos internally, though. These were the points of why aws didn’t think k8s was a great idea, even for customers - if they’re customers of aws rather than gcp. Aws makes money whether you use them via k8s or via ECS, and once you use any stateful service or spend the time to specify the EKS infrastructure, you’ve got a switching cost no matter what.

My thought in this space is to go with whatever is the least effort. There is no meaningful portability between cloud providers using anything right now. But if you don’t make your stuff baroque, it’s also not hard to port from one provider to another from an infrastructure-specification point of view. I think “lock in” at the infrastructure-specification level is a canard.

Lock-in happens at a much deeper level: the integrations between dependencies inside the customer’s own infrastructure, and the stored state. Having 1000 services across an enterprise integrated inside (aws|gcp|azure|oracle|on prem) makes it hard to switch anywhere else at a basic connectivity, rights, identity, etc. level - so hard that it degenerates into the reason “hybrid” cloud infrastructures basically fail. That means switching is either all or nothing, which is impractical, or you bite off the integration problem piecemeal, which is apparently impossible, or at least absurdly hard.

Then you’re also left with stored state, which is heavy, difficult, and expensive to move. Moving that state, along with the services that manage it, without downtime or data loss is also pretty hard - hard enough that you can’t expect every team owning those 1000 services to pull it off.

So, you can pick k8s and run an abstraction on an abstraction, or not, but when it comes time to break your lock-in, k8s won’t buy you anything.


> There is no meaningful portability between cloud providers using anything right now

Where are you getting this from? If you use k8s as the base layer, lifting and shifting your infra, or even running multi-cloud, is not much harder than bringing up a new region on the same cloud.
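A minimal sketch of that claim, assuming the official kubernetes Python client and two already-configured kubeconfig contexts (context names here are hypothetical):

    # Sketch: applying the identical Deployment spec to clusters on two
    # clouds, using kubeconfig contexts named "eks-prod" and "gke-prod"
    # (hypothetical names).
    from kubernetes import client, config

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )

    # The same spec applies unchanged to either cluster...
    for context in ("eks-prod", "gke-prod"):
        api = client.AppsV1Api(
            api_client=config.new_client_from_config(context=context)
        )
        api.create_namespaced_deployment(namespace="default", body=deployment)

    # ...which is the portability claim. The parent's counterpoint: the
    # moment this pod talks to a provider-managed database, queue, or
    # identity system, the portability stops at this layer.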


I’d refer you to the rest of what I wrote. If you have a single stack owned by a single team that makes no meaningful use of the provider’s stateful services, yes. Otherwise, my points apply.


As we can now tell, k8s was an absolutely genius play, because either aws embraces it, reducing their competitive edge in service offerings (at the time) - which didn’t happen - or they don’t do it at all / half-ass it (which is what happened) while the platform gains popularity, which also reduces their edge.


Not sure why you’re being downvoted, this is a very interesting history.





