Our number is closer to 1,700 now, but yes this means 1,700 distinct applications. Each application has many instances, some have thousands of instances.
Could you give some information as to what the breakdown of functionality is for those services? I can't fathom 1700 different and unique pieces of functionality that would need to be their own services.
I work at a microservice company. An example microservice is our geoservice, which simply takes a lat/lon and tells you what service region the user is in (e.g. New York, San Francisco, etc.). You can see how dozens of these services might be needed when handling a single request coming in from the front end or mobile apps. The service may eventually gain another related function or two as we work on tearing down our monolith (internally referred to as the tumor, because we'll do anything to keep it from growing...).
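Roughly, a service like that can be tiny. Here's a sketch of the shape of it (not our actual code: the bounding boxes and query parameters are made up, and a real version would use proper polygons and a spatial index):

    # Minimal sketch of a lat/lon -> service region lookup exposed over HTTP.
    # Region boundaries are made-up bounding boxes for illustration.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    REGIONS = {
        # name: (lat_min, lat_max, lon_min, lon_max)
        "san_francisco": (37.6, 37.9, -122.6, -122.3),
        "new_york": (40.5, 40.95, -74.3, -73.7),
    }

    def lookup_region(lat, lon):
        for name, (lat_min, lat_max, lon_min, lon_max) in REGIONS.items():
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
                return name
        return None

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            qs = parse_qs(urlparse(self.path).query)
            lat, lon = float(qs["lat"][0]), float(qs["lon"][0])
            body = json.dumps({"region": lookup_region(lat, lon)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Handler).serve_forever()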
You might want to add functionality such as caching or business rules.
A better question would be: why not just write a module or class? There are pros and cons to either, but advantages of a separate service include:

- better monitoring and alerting
- easier deployments and rollbacks
- callers can time out and use fallback logic (see the sketch below)
- you can add load balancing and scale each service separately
- it's easy to figure out which service uses what resources
- it makes it easy to write some parts of the application in other programming languages
- different teams can work on different parts of the application independently, as long as they agree on an API
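To make the timeout/fallback point concrete, a caller might look roughly like this (just a sketch: the internal geoservice URL and the "unknown" fallback value are made up):

    # Sketch of "callers can time out and use fallback logic".
    # The geoservice URL and the fallback value are illustrative only.
    import json
    import socket
    import urllib.error
    import urllib.request

    def region_for(lat, lon, timeout_s=0.05):
        url = f"http://geoservice.internal/region?lat={lat}&lon={lon}"
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return json.load(resp)["region"]
        except (urllib.error.URLError, socket.timeout):
            # Service slow or down: degrade gracefully instead of failing the whole request.
            return "unknown"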
Strictly speaking, this is not necessarily relevant. You can easily roll that table hit into another db hit you were already making. You can't do that with services.
Yes, but instead of making it a whole new service: you are probably already using a database, and you can use that for this functionality as well.
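For example, with a made-up schema, the region lookup can just ride along as a join on a query the app is already making, instead of a second network hop:

    # Sketch of folding the region lookup into an existing query rather than
    # calling a separate service. Schema and data are invented for illustration.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users   (id INTEGER PRIMARY KEY, name TEXT, lat REAL, lon REAL);
        CREATE TABLE regions (name TEXT, lat_min REAL, lat_max REAL, lon_min REAL, lon_max REAL);
        INSERT INTO users   VALUES (1, 'alice', 37.77, -122.42);
        INSERT INTO regions VALUES ('san_francisco', 37.6, 37.9, -122.6, -122.3);
    """)

    # One round trip returns the user *and* their service region.
    row = db.execute("""
        SELECT u.name, r.name
        FROM users u
        LEFT JOIN regions r
          ON u.lat BETWEEN r.lat_min AND r.lat_max
         AND u.lon BETWEEN r.lon_min AND r.lon_max
        WHERE u.id = ?
    """, (1,)).fetchone()
    print(row)  # ('alice', 'san_francisco')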
But since asking the question I've realized that if your application already needs a huge number of servers because it simply gets that much traffic, then putting something like this in its own Docker instance is probably the simplest approach (it might even run its own Postgres inside), especially if those boundaries change now and then.
I understand what microservices are, but I can't understand what 1700 pieces of unique functionality of Uber could be abstracted into their own services. I am struggling to even think of 100, so I was curious what exactly some of these things were, and how they structured things to need so many service dependencies.
It seems that for functions as simple as that, the RPC overhead has to be pretty small, or it will eclipse the time and resources spent on the actual business logic.
E.g. I can't see a REST service doing this; something like a direct socket connection (behind an HA / load-balancing layer) with a zero-copy serialization format like Cap'n Proto might work.
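As a rough illustration of the size gap, a fixed binary layout for the request is a fraction of the JSON you'd send to a typical REST endpoint (struct here is only a stand-in for a real zero-copy format like Cap'n Proto, which also avoids the parse step):

    # Compare the wire size of a fixed binary request layout vs. JSON.
    import json
    import struct

    lat, lon = 40.7128, -74.0060

    binary_req = struct.pack("!dd", lat, lon)                   # 16 bytes, fixed layout
    json_req = json.dumps({"lat": lat, "lon": lon}).encode()    # ~32 bytes, before any HTTP headers

    print(len(binary_req), len(json_req))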
Whoa. That is an insanely large number of applications. I'm assuming that's essentially one function per microservice, which is one of the huge do-not-dos of microservices, as it's just a complete waste of time, resources, etc.
I would love to hear a breakdown. This sounds like a nightmare to maintain and test.
>>> Our number is closer to 1,700 now, but yes this means 1,700 distinct applications. Each application has many instances, some have thousands of instances.
Time to forbid adding more stuff and start cleaning.
(12) [...] perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.
Why? I'm assuming their high service-to-engineer ratio is because their services do less individually.
Anecdotally, I've worked on services that ran tens of thousands of instances across the world. You build the tools to manage them and it works very well.
People sometimes talk about interpreted languages being slow, but now your program is divided across 1,700 separate servers, and instead of keeping variables in memory you have to serialize them and send them over the network all the time.
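Back-of-the-envelope, even ignoring the network entirely, just the serialize/parse step already costs several times more than the in-memory call it replaces (toy example with made-up business logic):

    # Compare an in-process call with only the serialization half of a remote call.
    import json
    import timeit

    def region_for(lat, lon):  # stand-in for trivial business logic
        return "new_york" if lat > 39 else "san_francisco"

    args = {"lat": 40.7128, "lon": -74.0060}

    local = timeit.timeit(lambda: region_for(**args), number=100_000)
    serde = timeit.timeit(lambda: json.loads(json.dumps(args)), number=100_000)

    print(f"in-memory call: {local:.3f}s  serialize+parse only: {serde:.3f}s")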