
Our number is closer to 1,700 now, but yes this means 1,700 distinct applications. Each application has many instances, some have thousands of instances.



Could you give some information as to what the breakdown of functionality is for those services? I can't fathom 1700 different and unique pieces of functionality that would need to be their own services.


I work at a microservice company. An example microservice is our geoservice, which simply takes a lat/lon and tells you what service region the user is in (e.g. New York, San Francisco, etc.). You can see how dozens of these services might be needed when handling a single request coming in from the front end or mobile apps. The service may eventually gain another related function or two as we work on tearing down our monolith (internally referred to as "the tumor", because we do anything to keep it from growing...).
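To make the geoservice idea concrete, here is a minimal sketch of the kind of lookup it would perform. The region names and bounding boxes are invented placeholders, not real data; a production service would use proper polygons (e.g. in PostGIS) rather than crude boxes.

```python
# Toy version of a "lat/lon -> service region" lookup. The regions and
# their bounding boxes are made up for illustration only.

REGIONS = {
    # name: (min_lat, max_lat, min_lon, max_lon) -- crude bounding boxes
    "New York":      (40.4, 41.0, -74.3, -73.6),
    "San Francisco": (37.6, 37.9, -122.6, -122.3),
}

def region_for(lat, lon):
    """Return the first region whose box contains the point, else None."""
    for name, (lat0, lat1, lon0, lon1) in REGIONS.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return name
    return None

print(region_for(40.71, -74.00))   # New York
print(region_for(0.0, 0.0))        # None
```

The point of the example is how little logic such a service may contain, which is exactly what the rest of the thread is debating.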


This just sounds crazy to me. How often do the services that are responsible for a given region change?


Why would you make that a separate service when e.g. one query on a PostGIS table can do that?
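For illustration, a query of that shape might look like the following. The table and column names are hypothetical; it assumes a PostGIS geometry column holding region boundaries in SRID 4326.

```sql
-- Hypothetical "regions" table with a geometry column "boundary".
-- Note ST_MakePoint takes (lon, lat) order.
SELECT name
FROM   regions
WHERE  ST_Contains(boundary, ST_SetSRID(ST_MakePoint(:lon, :lat), 4326));
```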


You might want to add functionality such as caching or business rules.

A better question would be why not write a module or class? There are pros and cons to either, but advantages of a separate service include:

- better monitoring and alerting,

- easier deployments and rollbacks,

- callers can time out and use fallback logic,

- you can add load balancing and scale each service separately,

- it's easy to figure out which service uses which resources,

- it makes it easy to write some parts of the application in other programming languages,

- different teams can work on different parts of the application independently, as long as they agree on an API.


One query on a table is the exact same thing as an HTTP GET call on a service.


Strictly speaking, that's not necessarily true. You can easily roll that table hit into another db hit you were already making. You can't do that with services.


Yes, but instead of making it a whole new service, you are probably already using a database and can use that service for this functionality as well.

But since asking the question I've realized that if your application already needs a huge number of servers because it simply gets that much traffic, then putting something like this in its own Docker instance is probably the simplest way (it might even use Postgres inside it), especially if those boundaries change now and then.

But most companies aren't near that scale.


You're both missing the point here. Both things are conceptually equivalent:

- Select(db, somekey, someparameters) [returns some db object]

- http_get_query("http://service.com/somekey/someparameters") [returns some JSON]

They are both, in effect, external (micro)services:

- they both need the target system to be available.

- they both may fail in weird and unexpected ways.

- they both need to handle failure gracefully.

Their usage has different properties:

- A database call needs a permanent connection pool to the database, usually requiring a db user and password.

- An HTTP call is just call-and-forget. It's a lot easier to use, from any application, at any time.
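The equivalence above can be sketched in a few lines. The functions and the fallback value here are stand-ins invented for illustration, not a real driver or HTTP client: the point is that both styles of lookup fail in the same ways and want the same guard.

```python
# A DB lookup and an HTTP lookup fail in the same ways (unavailable
# target, weird errors) and both need graceful-failure handling.

def db_lookup(conn, key):
    # real version: conn.execute("SELECT region FROM regions WHERE key=%s", ...)
    return conn[key]                  # stand-in; raises KeyError on a miss

def http_lookup(get, key):
    # real version: get("https://geo.example.com/region/" + key, timeout=0.2)
    return get(key)                   # stand-in; raises on timeout/refusal

def lookup_with_fallback(fn, source, key, default="unknown"):
    """The guard both callers need: trap failure, return a fallback."""
    try:
        return fn(source, key)
    except Exception:                 # bad key, timeout, connection refused...
        return default

fake_store = {"40.71,-74.00": "New York"}
print(lookup_with_fallback(db_lookup, fake_store, "40.71,-74.00"))       # New York
print(lookup_with_fallback(http_lookup, fake_store.__getitem__, "0,0"))  # unknown
```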


I understand what microservices are, but I can't understand what 1700 pieces of unique functionality of Uber could be abstracted into their own services. I am struggling to even think of 100, so I was curious what exactly some of these things were, and how they structured things to need so many service dependencies.


It seems that for functions as simple as that, the RPC overhead needs to be pretty small, or it will eclipse the time / resources spent on the actual business logic.

E.g. I can't see a REST service doing this; something like a direct socket connection (behind an HA / load-balancing layer) with a zero-copy serialization format like Cap'n Proto might work.


Whoa. That's an insanely large number of applications. I'm assuming that's essentially one function per microservice, which is one of the big do-not-dos of microservices, as it's just a complete waste of time, resources, etc.

I would love to hear a breakdown. This sounds like a nightmare to maintain and test.


>>> Our number is closer to 1,700 now, but yes this means 1,700 distinct applications. Each application has many instances, some have thousands of instances.

Time to forbid adding more stuff and start cleaning.

    (12) [...] perfection has been reached not when there
        is nothing left to add, but when there is nothing
        left to take away.
https://tools.ietf.org/html/rfc1925


This type of comment presumes that there is a right number of services for a company to run.

What number is that?


Good god, what on earth for?


that sounds nightmarish


Why? I'm assuming their highish service/engineer ratio is because their services do less individually.

Anecdotally, I've worked on services that ran tens of thousands of instances across the world. You build the tools to manage them and it works very well.


> Why?

People talk about interpreted languages being slow; now your program is divided across 1,700 separate servers, and instead of keeping variables in memory you have to serialize them and send them over the network all the time.
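As a rough illustration of that cost (numbers are machine-dependent, and this deliberately ignores network latency, which dwarfs it), here is what just the serialize/deserialize half of a remote hop adds compared to an in-process call. The function and payload are invented for the benchmark.

```python
import json
import timeit

# An in-process call vs. the JSON encode/decode a remote hop would add.
# This measures serialization only -- a real RPC adds network time on top.

def region_for(lat, lon):
    return "New York" if lat > 40 else "Other"

def via_json(lat, lon):
    # Simulate request serialization, dispatch, and response serialization.
    request = json.loads(json.dumps({"lat": lat, "lon": lon}))
    response = json.dumps({"region": region_for(request["lat"], request["lon"])})
    return json.loads(response)["region"]

local = timeit.timeit(lambda: region_for(40.7, -74.0), number=100_000)
remote = timeit.timeit(lambda: via_json(40.7, -74.0), number=100_000)
print(f"JSON round-tripping alone is ~{remote / local:.0f}x the in-process call")
```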


It's not the number of instances that he's talking about, it's the number of images.

Microservices can go too far. I'm very thankful for this video.


The # of services vs. # of engineers isn't really bad, though; you just need the tools to make the commonalities between services easy to handle.



