First build the thing that works, and only if it's really necessary, split it up into separate (networked) parts. You won't have to deal with unreliable network communication, or coordinate a breaking API change with several teams, when a simple search/replace across a few function definitions and calls would suffice.
I agree, though well-designed software, even a big monolith, can be written in a way that isn't too hard to distribute later.
For example, if you utilize asynchronous queues everywhere, instead of something like a shared-memory mutex, it's relatively straightforward to turn that into some kind of networked queue system if you need to. Pretty much every language has a decent enough queue implementation available.
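Sketch of what that looks like (the MessageQueue interface and InProcessQueue name are made up for illustration, not a real library):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Handlers depend on this interface, not on where the queue lives.
    interface MessageQueue<T> {
        void send(T message) throws InterruptedException;
        T receive() throws InterruptedException;
    }

    // In-process implementation backed by a standard BlockingQueue.
    class InProcessQueue<T> implements MessageQueue<T> {
        private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
        public void send(T message) throws InterruptedException { queue.put(message); }
        public T receive() throws InterruptedException { return queue.take(); }
    }

If you later need to distribute, a second implementation backed by something like RabbitMQ or Kafka can implement the same interface, and callers don't change.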
Asynchronous queues put your data out of sync (hence the name) and inconsistent, which is one of the main downsides of microservices. Their use should be minimized to cases where they are really necessary. A functional, transactional layer like Postgres is the solution: it lets your source of truth be accessed in a synchronized, atomic, consistent way.
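For example, a money transfer over JDBC (a minimal sketch; the accounts table and connection details are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class TransferExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/app", "app", "secret")) {
                conn.setAutoCommit(false); // explicit transaction
                try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                    debit.setLong(1, 100); debit.setLong(2, 1); debit.executeUpdate();
                    credit.setLong(1, 100); credit.setLong(2, 2); credit.executeUpdate();
                    conn.commit(); // both writes land atomically, or neither does
                } catch (Exception e) {
                    conn.rollback(); // back to the last consistent state
                    throw e;
                }
            }
        }
    }

No reader ever sees the money in-flight, which is exactly the consistency you give up once the debit and the credit travel through separate queues.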
Functions and handlers should not care where data comes from, just that they have data, and a queue is the abstraction of that very idea. Yes, you lose atomicity, but atomicity is generally slow and, more problematically, implies a high degree of coupling.
I don’t agree that being out of sync is the main downside of microservices; the main downside is that anything hitting the network is terrible. Latency is high, computers crash, you pay serialization and deserialization costs, libraries can be inconsistent, and zombie processes can screw up queues. Having in-process state be non-synchronized wouldn’t even make my top five.
ETA:
I should be clear: obviously there are times when you want or need synchronization, and in those cases you should use some kind of synchronization mechanism, like a mutex (or a mutex-backed store, e.g. ConcurrentHashMap) for in-process stuff, or a SQL DB for distributed stuff. But I fundamentally disagree with the idea that this should be the default; if you design your application around the idea of data flow, then explicit synchronization is the exception.
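To illustrate the in-process case (just a sketch, the Counters class is mine):

    import java.util.concurrent.ConcurrentHashMap;

    class Counters {
        private final ConcurrentHashMap<String, Long> counts = new ConcurrentHashMap<>();

        // merge() does the read-modify-write atomically per key,
        // so concurrent callers never see a half-applied update.
        void increment(String key) {
            counts.merge(key, 1L, Long::sum);
        }

        long get(String key) {
            return counts.getOrDefault(key, 0L);
        }
    }

That's the synchronization pulled out into one explicit, narrow place; the rest of the data flow doesn't need it.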
I'll agree that the network layer adds more problems to microservices, but even with a perfect network they are problematic. Everything being out of sync (and queues imply stateful microservices) is one big issue. Things being interconnected in broad global scopes instead of tighter local ones is the other big issue.
The more globally interconnected, out-of-sync state you have, the less predictable your system is.
The solution is to be as hierarchical, as tightly scoped, as functional and as transactional as you can.
To add to this: there are fundamental theoretical reasons why microservices are bad. They increase the entropy of code (https://benoitessiambre.com/entropy.html) by increasing globally scoped dependencies. They are the global variables of architecture, and having lots of interconnected global variables makes for an unpredictable, chaotic system.