I assume Hacker News has a bot that automatically posts this skit for every microservice article, but in case it doesn't: https://www.youtube.com/watch?v=y8OnoxKotPQ. We should all know Galactus' pain.
From experience this happens just as often inside monoliths.
It's a symptom of over-engineering and building for the future rather than anything inherent to microservices. Java had a whole decade of being obsessed with design patterns (e.g. Facade, Decorator) that resulted in the same spaghetti architecture.
Microservices invite over-engineering more than monoliths do. Monoliths are more prone to inviting a lack of structure. That’s kind of…a big potential advantage of microservices, I guess.
That's why moduliths are becoming more popular. These are basically monoliths that enforce structure. The other advantage is that each "module" can be extracted as a micro-service later without much work.
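For a concrete flavor of the "enforced structure" idea, here's a minimal sketch as an Elixir umbrella project, since that stack comes up downthread (app and module names are invented). Each module is its own OTP app with an explicit dependency list, so cross-module reach-ins at least get flagged, and extracting one into a standalone service later is mostly a matter of lifting the directory out:

```elixir
# apps/orders/mix.exs -- hypothetical names, just to show the shape.
defmodule Orders.MixProject do
  use Mix.Project

  def project do
    [
      app: :orders,
      version: "0.1.0",
      build_path: "../../_build",
      config_path: "../../config/config.exs",
      deps_path: "../../deps",
      lockfile: "../../mix.lock",
      deps: deps()
    ]
  end

  defp deps do
    [
      # :orders may use :inventory, and nothing else in the umbrella,
      # unless it is declared here.
      {:inventory, in_umbrella: true}
    ]
  end
end
```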
The theoretically nice thing about microservices is that because the API boundary should be well defined, any possible application that can fulfill that API, whether it's Java, Rust, or three raccoons in a trench coat, can become the new microservice fairly easily.
The problem with that: maintenance.
Team X writes a microservice in Node. Teams A through W write in Elixir. A bug shows up in Micro_X and tada... the company is screwed. No one wants to learn Node, so the bugs stay around, get worked around, and eventually Micro_X gets rewritten in Elixir, because coders have to fix stuff.
The only sane microservice amalgamation I worked on was with Elixir/Phoenix. It was event driven, and that was the key. I think event-driven microservices are an extremely powerful architecture, but it’s still early days for them until we get really good open source support for replays, sectioning off events from a period, and lots of utilities like that, and until we figure out all of the best practices.
I’ve built monoliths, though never at scale, but I don’t see why they wouldn’t have scaled incredibly well. I have built macroservices that have scaled super well (5ish services IIRC).
Having built my own startup in Elixir/Phoenix, I'd have to agree.
I think it comes down to how you define a microservice/monolith. When it comes to the codebase, ours is very much a monolith. However, Elixir lets you connect nodes together and set up individual processes that can be communicated with from any other node in your network topology using the BEAM's built-in pub/sub. In essence, creating a new microservice is easy: you just define a GenServer module and add it to your application.ex, about as much effort as adding a controller in Rails, and that makes a big difference. You get a lot of the advantages of microservice architecture, spreading computation across your physical machines, while the codebase stays a small, easy-to-maintain monolith.
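Roughly what that looks like, for anyone who hasn't seen it (module names are made up for illustration):

```elixir
defmodule MyApp.InventoryCounter do
  use GenServer

  # Registered under a :global name so any connected BEAM node can reach it.
  def start_link(_opts) do
    GenServer.start_link(__MODULE__, 0, name: {:global, __MODULE__})
  end

  def increment, do: GenServer.cast({:global, __MODULE__}, :increment)
  def current, do: GenServer.call({:global, __MODULE__}, :current)

  @impl true
  def init(count), do: {:ok, count}

  @impl true
  def handle_cast(:increment, count), do: {:noreply, count + 1}

  @impl true
  def handle_call(:current, _from, count), do: {:reply, count, count}
end
```

And "deploying" it is one extra line in application.ex's child list:

```elixir
children = [
  MyApp.Repo,
  MyAppWeb.Endpoint,
  MyApp.InventoryCounter
]
```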
I've been through a similar transformation (just 250 microservices) and I'm not sure the end result was actually better. Microservices are ok if things go well and you can maintain a large army of developers - which you didn't really need in the first place.
In my case: fast forward 5 years and the business growth didn't materialize; the board made working in the company unpleasant enough that all the good and expensive developers left, and the rest of the work was outsourced to India.
These poor contractors have to deal with 20 microservices per team (while we were juggling 5-10, which was already too much; I think 1-2 services per team is right).
The old monolith was fine. Microservices - and transitions to new languages - create a lot of new problems (performance of joins over the network, RabbitMQ dead-letter handling, services DDoSing each other, updating a shared library and having to bump it in every service in the entire company).
For what it's worth, what you are describing is a pretty well known failure mode. It was known in the days of CORBA and DCOM, and then again in the days of SOAP web services. Microservices as a distributed monolith is not how you partition things if you want to gain productivity and be successful. Literally everything you mention, other than updating shared libraries, is a problem our team does not have, and yet we have around 100 autonomous back-end components and over 20 web applications maintained by a team of around 15. We've never felt like there were too many, because usually you don't have to touch most of them. You only have to touch the one or ones that are relevant to your work.
Imagine eliminating network joins (make sure all the data is where it needs to be and/or share a database for reads, which is totally doable)... and eliminating dead-letter queues (make sure your service goes offline/retries indefinitely if there is a failure, and fix it; don't tolerate failures; see jidoka)... and not letting services talk to each other directly (see pub/sub and event sourcing). Oh, and also limit the number of times updating a single library must be applied to all services by getting things as close to right as you can and respecting the physics of software design (see afference and efference).
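To make the pub/sub point concrete, here's a rough Elixir sketch with Phoenix.PubSub (topic, event, and module names are invented, and it assumes a MyApp.PubSub is already started in the supervision tree). The publisher announces a fact; it never calls a consumer:

```elixir
# Publisher side (say, the orders component): emit an event, don't call anyone.
Phoenix.PubSub.broadcast(MyApp.PubSub, "orders", {:order_placed, %{id: 42, sku: "ABC"}})

# Consumer side (say, the inventory component): subscribe and react.
defmodule Inventory.Listener do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok)

  @impl true
  def init(:ok) do
    :ok = Phoenix.PubSub.subscribe(MyApp.PubSub, "orders")
    {:ok, %{}}
  end

  @impl true
  def handle_info({:order_placed, order}, reservations) do
    # React using data carried on the event itself, so there is no
    # "join over the network" back to the orders component.
    {:noreply, Map.put(reservations, order.id, {:reserved, order.sku})}
  end
end
```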
I missed the part where the person describes a very large development team that would justify a non-monolithic architecture. True microservices (with independent development and inter-service contracts) are a reflection of the makeup and scale of the development team. Using true microservices purely for performance reasons rarely makes sense these days: a modular monolith (one codebase that can be deployed as multiple independently scalable services) makes much more sense when the dev team is not big enough to justify the added overhead but some aspects of the application require independent horizontal scaling.
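A sketch of how "one codebase, independently scalable deployables" can look with Elixir's mix releases (app names invented); the same repo produces separate artifacts, each starting only the applications it needs:

```elixir
# mix.exs at the umbrella root: one codebase, several releases.
defmodule Shop.MixProject do
  use Mix.Project

  def project do
    [
      apps_path: "apps",
      version: "0.1.0",
      releases: [
        # Serves HTTP traffic; scale this deployable horizontally.
        web: [applications: [shop_web: :permanent, shop_core: :permanent]],
        # Runs background jobs only; scaled independently of the web tier.
        worker: [applications: [shop_jobs: :permanent, shop_core: :permanent]]
      ]
    ]
  end
end

# `MIX_ENV=prod mix release web` and `MIX_ENV=prod mix release worker`
# then build two separately deployable, separately scalable artifacts.
```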
I'm currently working in an environment where my company has quite a limited IT department, with responsibilities covering both hardware and software. One of our investors has an IT-only headcount bigger than our whole company, and we're trying to integrate with them... they're on microservices. It's funny because even though it seems very well engineered, it creates a bureaucratic environment. It's very hard to get the right people at the table, and they're slow to react to any change (if they even can)... For a small change, they asked for 3 months. On our side, if a dev is available, it would have been solved in a week.
Neither situation is good: we're under stress over resource availability, while they're stuck in Kafka's world. Last week I learned that two different teams ingest some of our data, which is fine, except that we moved to an API and only the first team uses it. The second team wasn't aware it exists (well, they forgot about it).
I don't know what a good solution looks like once you reach this kind of headcount. As a PM/PO, I'm baffled by this kind of complexity. My experience leads me to think that no one is really managing it currently... it kind of works, until it falls over, hard.
The problem you describe is a lack of ownership. I'm at a company with a quite small engineering team, but other departments have their own services (dashboards that read from our database, a marketing CRM, etc.), which means any change at all to any database table requires bringing together multiple departments to make sure nobody gets upset by the change.
It's easier to have a single owner at a smaller scale (something we should do) but even at a company with a large engineering team, there needs to be someone whose responsibility is to manage dependencies. Otherwise, it falls back to design by committee, and that's how you end up in your situation: nobody actually knows how anything works, and there is literally no way to find out.
Exactly this - they need months to respond to "small" changes because nobody knows who, at the end of the day, is responsible for a given service. They probably need time to filter the request through the product office, find out which service(s) are affected, find the dev team(s) responsible for those service(s), and beg/harass/bribe the EM(s) for those teams to get the work scheduled.
If they had documented service contracts with ownership information attached they could probably have the GP's team communicate directly with the service devs and it would likely go a lot smoother.
Having 1000+ services seems like overkill for an ecommerce company. I certainly hope they aren't doing something silly like a service for each payment type, or a service to manage inventory and another service to manage orders.
I have no clue if it's too much, but Allegro is the biggest ecommerce website in Poland and it works amazingly. I'm currently in Spain and I'm forced to use Amazon, and oh lord - I can't even start to wrap my head around how on earth people can use that crap - it's slow AF, its search functionality is abysmal, and result filtering almost doesn't exist... o_X
As a side note, the management of this company announced a while ago that they would cancel WFH. A standard story - they built a huge office nobody wanted to use. So whoever could, tried to find a new job. The rest have to go to the office 3 days a week by default. Which is plain stupid, because they could attract much more talent if they weren't so inflexible.
I don't think it is true, because one side says: "You absolutely must come to the office at least N days a week", and the other side doesn't say "Nobody must come to the office" but "Why don't you let people decide for themselves"?
In other words, it is a discussion between inflexible dogmatism and elasticity.
I think the idea with companies is that management decides. If you don't like that, you can discuss it, go someplace else, or even start your own company. But demanding it, and calling management stupid would be my last resort.
I wouldn't call it stupid if I didn't have direct experience, several times over. Once a board asked my opinion regarding RTO and I told them openly that if they did it, the top talent would leave. They answered, "Nah, they won't". Well, it turned out I was right. I'm not saying it was the same in Allegro's case - maybe the board realized this would happen but decided to proceed anyway - but I have the right not to call it a smart strategy.
To suggest that everyone should be allowed to decide their own situation is either ignorant or dogmatic. Pretending everyone that works from home gets as much done as in person is either ignorant or dogmatic. Context is important. There are people working from home that shouldn't be. There are people working from an office that don't need to be. There is no single rule that will make everyone happy, nor should there be.
So literally everyone who wants to work from home should be able to? That's what you're saying? That every single employable person gets to decide for themselves whether to work from home?
I see your point, but if you follow this avenue, any two arguments can be called dogmatic. Please try to see it differently: the other side doesn't insist on one solution, understands that people have different needs and that different jobs have different requirements, and opposes one inflexible policy for everyone.
It's like having people who insist that everybody should wear shoes of the same size: you can call those who oppose this stupid idea "dogmatic" but it's just semantics.
Are you not insisting on one solution that is "remote flexibility"?
> opposes one inflexible policy for everyone.
Remote flexibility imposes remote peer requirements on everyone.
I understand that you want to work remotely and have created a narrative in support of validating that desire. I'm saying that if you can't see any valid reasons why an employer would choose not to do that, beyond some pouty made-up "validating the purchase of office space", you're being stubborn just for the sake of it.