First build the thing that works, and only if it's really necessary, split it up into separate (networked) parts. You won't have to deal with unreliable network communication, or coordinate a breaking API change across several teams, when a simple search/replace on a few function definitions and calls would suffice.
I agree, though well-designed software, even a big monolith, can be written in a way that isn't too hard to distribute later.
For example, if you utilize asynchronous queues everywhere, instead of something like a shared-memory mutex, it's relatively straightforward to turn that into some kind of networked queue system if you need to. Pretty much every language has a decent enough queue implementation available.
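A minimal sketch of what that looks like in Java (the event names and payloads are made up for illustration): the handler only ever sees the queue, so swapping the in-process BlockingQueue for a networked broker later changes the wiring, not the logic.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueSketch {
        public static void main(String[] args) throws InterruptedException {
            // In-process today; the same put()/take() shape maps onto a
            // networked queue (Kafka, SQS, RabbitMQ) if you ever split this out.
            BlockingQueue<String> events = new ArrayBlockingQueue<>(1024);

            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        handle(events.take()); // blocks until work arrives
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.setDaemon(true);
            consumer.start();

            events.put("order-created:42"); // producer side: fire and forget
            Thread.sleep(100); // give the consumer a moment before exiting
        }

        // The handler neither knows nor cares where the event came from.
        static void handle(String event) {
            System.out.println("processing " + event);
        }
    }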
Asynchronous queues make your data out of sync (hence the name) and inconsistent, which is one of the main downsides of microservices. Their use should be minimized to cases where they are really necessary. A transactional layer like Postgres is the solution: it lets your source of truth be accessed in a synchronized, atomic, consistent way.
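For illustration, a JDBC sketch of what a transactional layer buys you (the accounts table and connection URL here are hypothetical): both updates commit atomically, or neither does.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransferSketch {
        // Assumes a hypothetical accounts(id, balance) table.
        static void transfer(String url, int from, int to, long cents) throws SQLException {
            try (Connection conn = DriverManager.getConnection(url)) {
                conn.setAutoCommit(false); // open an explicit transaction
                try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                    debit.setLong(1, cents);
                    debit.setInt(2, from);
                    debit.executeUpdate();

                    credit.setLong(1, cents);
                    credit.setInt(2, to);
                    credit.executeUpdate();

                    conn.commit(); // both writes become visible together...
                } catch (SQLException e) {
                    conn.rollback(); // ...or neither does
                    throw e;
                }
            }
        }
    }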
Functions and handlers should not care where data comes from, just that they have data, and a queue is the abstraction of that very idea. Yes, you lose atomicity, but atomicity is generally slow and, more problematically, has a high amount of coupling.
I don't agree that being out of sync is the main downside of microservices; the main downside is that anything hitting the network is terrible. Latency is high, computers crash, you pay the cost of serialization and deserialization, libraries can be inconsistent, and zombie processes screw up queues. Having in-process state be unsynchronized wouldn't even make my top five.
ETA:
I should be clear: obviously there are times when you want or need synchronization, and in those cases you should use some kind of synchronization mechanism, like a mutex (or a mutex-backed store, e.g. ConcurrentHashMap) for in-process stuff, or a SQL DB for distributed stuff. But I fundamentally disagree with the idea that this should be the default; if you design your application around the idea of data flow, then explicit synchronization is the exception.
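As a small sketch of the in-process case (the hit counter is invented for illustration): a ConcurrentHashMap keeps the locking internal, so the one piece of genuinely shared state stays behind a narrow interface while everything else flows unsynchronized.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class CounterSketch {
        // The exceptional shared state that actually needs synchronization.
        private static final ConcurrentMap<String, Long> hits = new ConcurrentHashMap<>();

        static void record(String key) {
            hits.merge(key, 1L, Long::sum); // merge() is atomic per key
        }

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> { for (int i = 0; i < 10_000; i++) record("page"); });
            Thread b = new Thread(() -> { for (int i = 0; i < 10_000; i++) record("page"); });
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(hits.get("page")); // always 20000, no lost updates
        }
    }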
I'll agree that the network layer adds more problems to microservices, but even with a perfect network they are problematic. Everything being out of sync is one big issue, at least if the microservices are stateful (which queues imply). Things being interconnected in broad global scopes instead of being more locally scoped is the other big issue.
The more globally interconnected, out-of-sync state you have, the less predictable your system is.
The solution is to be as hierarchical, as tightly scoped, as functional and as transactional as you can.
To add to this: there are fundamental theoretical reasons why microservices are bad. They increase the entropy of code (https://benoitessiambre.com/entropy.html) by increasing globally scoped dependencies. They are the global variables of architecture, and having lots of interconnected global variables makes for an unpredictable, chaotic system.
Funnily enough, microservices. In the macro economy you don't have to have such strict coordination with Microsoft, or OpenAI, or Google, or whomever you interface with. You just figure out how to make your solution work within the confines of the service they give you. Like it or not.
Microservices are exactly the same concept, except in the micro economy of a single organization. Each team is like Microsoft, OpenAI, Google, etc. You don't coordinate with them; you deal with what they give you. Like it or not.
I expect the earlier statement confused microservices with a multi-process application.
Yes, in practice you very well might end up there, but then you are not providing microservices and would not call it that. It remains that microservices are the solution. Fair to say they are the solution in the way that not eating too many calories is the solution to losing weight: it is not exactly fun to put yourself through it, so most people with the problem will never try, or will give up. But it is the solution all the same.
Or someone leaves, and you end up with the mess of maintaining multiple services that aren't coherently separate at all, with no time to refactor them back together into something that makes sense. That's been my experience.
But why choose? Just do all three at the same time! Actually, you don't even have to choose; it will naturally happen when transitions are never fully completed... So before you know it, you're stuck with a partially integrated legacy monolith that talks to a legion of half-baked microservices and emits events processed by arcane workflow engines orchestrating lambda executions.