I do have positive microservices experience: although we are still in the process of breaking down our monolithic SOA-based app, we have already seen the benefits.
The most dramatic effect was on a particular set of endpoints with relatively high traffic (peaking at 1000 req/s) that was killing the app, upsetting our relational database with frequent deadlocks, and driving our Elasticsearch cluster crazy.
We did more than just split the endpoints into microservices. We also designed the new system to be more resilient, and changed our persistence strategy to better suit our traffic: a distributed key-value database, with documents designed accordingly.
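The persistence change they describe, trading relational rows for pre-built documents in a key-value store, can be sketched roughly like this (a plain dict stands in for the distributed key-value database, and all names here are illustrative, not from the post):

```python
import json

# A plain dict stands in for the distributed key-value store
# (Redis, DynamoDB, etc.); the "order" domain is made up.
store = {}

def save_order_view(order_id, customer, items):
    # Denormalize: embed everything a read needs into one document,
    # so serving a request is a single key lookup instead of joins
    # that contend for row locks under high traffic.
    doc = {
        "order_id": order_id,
        "customer": customer,
        "items": items,
        "total": sum(i["price"] * i["qty"] for i in items),
    }
    store[f"order:{order_id}"] = json.dumps(doc)

def get_order_view(order_id):
    raw = store.get(f"order:{order_id}")
    return json.loads(raw) if raw else None

save_order_view(42, {"name": "Ada"}, [{"sku": "X", "price": 5, "qty": 2}])
print(get_order_view(42)["total"])  # -> 10
```

The point is that the document is shaped by the read pattern, which is why they mention designing documents "accordingly" rather than just dumping tables into a new store.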
The result was very dramatic, like being in a loud club when suddenly everything goes silent: no more outages, very consistent response times, instances that scaled smoothly as traffic increased, and overall a more robust system.
The moral of this experience (at least for me) is that breaking a monolithic app into pieces has to have a purpose, and it implies more than just moving the code into several services while keeping the same strategy (that approach is actually slower, more time-consuming and harder to monitor).
Do you think the result could have been just as dramatic an improvement if you had kept the old system and done those other things, except splitting into microservices?
I can't get my head around how people introduce changes to their system when they have to update 12 different microservices at once. It must be horrible.
Often you hear stories about how people are converting a monolithic app to microservices - but this is the easy part. Rewriting code is easy, and it's fair to say it always yields better code (with or without splitting into microservices - it doesn't matter).
What I'd like to hear is something from companies doing active development in the microservice world. How do they handle things like schema changes in Postgres when 7 microservices are backed by the same DB? What are the benefits compared to a monolithic app in those cases?
It seems to me that microservices can easily violate DRY because they "materialise" communication interfaces, and changes need to be propagated across every API "barrier", no?
Multiple microservices are supposed to have different data backends, so that they are completely independent. Splitting your data up this way isn't all roses, but ideally the services are isolated so an update to one doesn't affect the others.
>Do you think the result could have been just as dramatic an improvement if you had kept the old system and done those other things, except splitting into microservices?
As I said in another thread, the separation into different components was key for resiliency. It allowed independence between the high-volume update path and the business-critical user-facing component.
>I can't get my head around how people introduce changes to their system when they have to update 12 different microservices at once. It must be horrible.
The thing is, if you design the microservices properly it is very rare to introduce a change across so many deployments at once. Most of the time it's just 1 or 2 services at a time.
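One common way to keep a change contained to one or two deployments (not spelled out in the post, but consistent with it) is to make payload changes additive and write consumers as "tolerant readers" that ignore unknown fields. A minimal sketch with made-up field names:

```python
# Producer v2 adds a field; the consumer below was written against v1
# and keeps working untouched, so only the producer needs to redeploy.
def build_event_v2(user_id):
    return {
        "user_id": user_id,
        "plan": "pro",  # new in v2; older consumers simply ignore it
    }

def handle_event(event):
    # Tolerant reader: pull only the fields this service needs,
    # never assume the payload has exactly the fields you know about.
    user_id = event["user_id"]
    return f"notify {user_id}"

print(handle_event(build_event_v2(7)))  # -> notify 7
```

Removing or renaming a field is what forces lockstep deployments; as long as changes are additive, each service can upgrade on its own schedule.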
>What I'd like to hear is something from companies doing active development in the microservice world. How do they handle things like schema changes in Postgres when 7 microservices are backed by the same DB? What are the benefits compared to a monolithic app in those cases?
We don't introduce new features in our monolithic service anymore, so from that perspective we do all active development in microservices.
>How do they handle things like schema changes in Postgres when 7 microservices are backed by the same DB?
The trick is that you want to avoid sharing relational data between microservices. I don't know if it is just us, but we have been able to split our data model so far, and in most cases we don't even need a relational database anymore, so moving to a schemaless key/value store was easy too.
>What are the benefits compared to a monolithic app in those cases?
There are several advantages, but the critical one for me is having a resilient platform that can still operate even if a subsystem is down. With our monolithic app it's all or nothing. Another advantage is splitting the risk of new releases.
>It seems to me that microservices can easily violate DRY because they "materialise" communication interfaces, and changes need to be propagated across every API "barrier", no?
Not necessarily. YMMV, but you can have separation of concerns and avoid sharing data models. When you do have shared dependencies (like a logging strategy or data connections) you can always factor them into modules/libraries.
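The shared-library point might look like this in practice: one small module that every service imports for a cross-cutting concern, so DRY is preserved without any service sharing another's data model (module and service names here are hypothetical):

```python
# shared_logging.py -- a tiny module each service would depend on,
# so the logging strategy lives in one place instead of being
# copy-pasted into every codebase.
import logging

def get_service_logger(service_name):
    logger = logging.getLogger(service_name)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            f"%(asctime)s [{service_name}] %(levelname)s %(message)s"
        ))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

# Each microservice does the same one-liner:
log = get_service_logger("billing")
log.info("invoice generated")
```

The duplication that DRY actually warns about is the logging/connection boilerplate, not the API contracts between services; those are deliberate boundaries.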
Which of the four major improvements do you attribute the success to, though? Could you have done the resiliency work, the new persistence strategy, and the document redesign without breaking into microservices, and still have seen the positive results?
I don't think the success comes from one dimension alone, but I also don't think we could have achieved the resiliency without breaking it into microservices (or just services that happened to be small, if you will).
One key factor was decoupling the high-volume updates from the user requests so one didn't affect the other.
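That decoupling can be sketched with a queue between the two paths: the high-volume endpoint just enqueues, a background worker applies updates at its own pace, and user-facing reads never block on the write burst. This is my own minimal illustration (queue.Queue stands in for a real message broker):

```python
import queue
import threading

# queue.Queue stands in for a real broker (RabbitMQ, SQS, ...).
updates = queue.Queue()
current_state = {"counter": 0}

def user_facing_read():
    # Reads hit the current state directly; they never wait on writers.
    return current_state["counter"]

def enqueue_update(delta):
    # The high-volume endpoint just enqueues and returns immediately.
    updates.put(delta)

def worker():
    # A background consumer drains the queue at its own pace;
    # None is used here as a shutdown sentinel.
    while True:
        delta = updates.get()
        if delta is None:
            break
        current_state["counter"] += delta

t = threading.Thread(target=worker)
t.start()
for _ in range(1000):
    enqueue_update(1)
updates.put(None)
t.join()
print(user_facing_read())  # -> 1000
```

A burst on the write side then only lengthens the queue; it can't deadlock the store serving user reads, which matches the "one didn't affect the other" outcome described above.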