Not sure how common the use case is, but we're using Ceph to effectively roll our own EBS inside AWS on top of i3en EC2 instances. For us it works out about 30% cheaper than the equivalent EBS cost, while giving us roughly 10x the IOPS of a baseline gp3 volume.
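If anyone's curious what that looks like in practice, here's a minimal sketch of provisioning a volume using the standard python3-rados / python3-rbd bindings; the pool and image names are made up for illustration, and the real setup obviously involves a lot more (CRUSH rules, replication, mapping the image on the client):

    # Sketch: create an RBD image that a client can later map and
    # format, much like attaching an EBS volume. Assumes a running
    # cluster and the default conf path; 'buildcache' and
    # 'builder-01' are hypothetical names.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('buildcache')  # hypothetical pool
        try:
            # 100 GiB image; clients attach it with `rbd map` and
            # put a filesystem on top.
            rbd.RBD().create(ioctx, 'builder-01', 100 * 1024**3)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()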
The downside is durability and operations: we have to keep Ceph alive ourselves and are responsible for making sure the data stays persistent. That said, we're storing cache from container builds, so in the worst case, where we lose the storage cluster, we can run builds without cache while we restore.
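"Keeping Ceph alive" is mostly monitoring; a rough sketch of the kind of health poll we run, again via the python binding (alerting wired up however you like):

    # Sketch: poll cluster health so degraded PGs page someone
    # before data is actually at risk.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({'prefix': 'health', 'format': 'json'}), b'')
        health = json.loads(outbuf)
        if health.get('status') != 'HEALTH_OK':
            print('cluster degraded:', health.get('status'))  # alert here
    finally:
        cluster.shutdown()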
http://www.45drives.com/blog/ceph/what-is-ceph-why-our-custo... is a pretty good introduction. Basically, you can take off-the-shelf hardware, keep expanding your storage cluster, and Ceph will scale fairly linearly up through hundreds of nodes. It's seeing quite a bit of use in things like Kubernetes and OpenShift as a cheap and cheerful alternative to SANs. It's not without complexity, though, so if you don't know you need it, it's probably not worth the hassle.