From my dated experience, Ceph is absolutely amazing but latency is indeed a relative weak spot.
Everything has a trade-off, and with Ceph you get a ton of capability, but latency is one such trade-off. Databases - depending on requirements - may be better off on regular NVMe and not on Ceph.
No, it's important when planning. E.g.: one big database cluster that provides DB-as-a-service (but maybe needs some dedicated ops resources) vs. smaller DBs with virtualized storage on Ceph (ops resources for the Ceph cluster plus VM tooling like k8s).
If the latter is too slow for your typical usage...
Oh, don't get me wrong, you will pay a price for disaggregated highly available storage, and you might need to evaluate whether you want to pay that price or not. But those are two very different worlds, and only one of them gives you elastic disk size, replication, scale-out throughput, and so on.
GP makes Ceph sound worse than it is, when the reality is that just shoving all your reads and writes over the network, with writes multiplied by replication, is going to cost you no matter what tech you build it with.
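To make that concrete, here's a back-of-envelope sketch (not a benchmark, and the numbers are assumptions I picked for illustration) of why a replicated network write is inherently slower than a local one: the client pays a round trip to the primary, and the primary can't ack until the slowest replica has persisted its copy.

```python
# Hypothetical latencies in microseconds (assumed, not measured):
LOCAL_NVME_WRITE_US = 20   # assumed local NVMe write latency
NETWORK_RTT_US = 100       # assumed datacenter round trip
REPLICAS = 3               # typical replicated pool size

def replicated_write_latency_us(local_us, rtt_us, replicas):
    """Client -> primary hop, primary write, then a parallel fan-out
    to the remaining replicas; the slowest replica gates the ack."""
    fan_out = (rtt_us + local_us) if replicas > 1 else 0
    return rtt_us + local_us + fan_out

print(replicated_write_latency_us(LOCAL_NVME_WRITE_US, NETWORK_RTT_US, REPLICAS))  # 240
print(LOCAL_NVME_WRITE_US)  # 20
```

With these made-up but plausible numbers, the replicated write costs roughly 12x the raw local write, and that gap exists regardless of which storage system does the replicating.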