EOS was designed and developed with the unique needs of the LHC experiments in mind. Its advantages are exactly the features the experiments use: remote access via the XRootD protocol, which ROOT uses for data analysis so that only the parts of a file an analysis actually needs are downloaded; rich support for client authentication methods (Kerberos, X.509, OIDC, etc.); and the ability to FUSE-mount everything for a convenient POSIX-like view of the data. EOS has to sustain data ingestion from the experiments at high rates (tens of GB/s each) for months at a time during data taking, without any downtime, while tens of thousands of clients are simultaneously connected and reading data. It is also integrated with the CERN Tape Archive (CTA) and the File Transfer Service (FTS), used for long-term archival and for data management across sites, respectively.
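To give a feel for what remote access over XRootD looks like from an analysis user's point of view, here is a minimal PyROOT sketch; the root:// URL, tree name, and branch name are placeholders rather than a real dataset, so treat it as an illustration of the access pattern, not a copy-paste recipe.

    # Sketch: reading a remote ROOT file over XRootD with PyROOT.
    # The URL, tree name, and branch name are placeholders, not a real dataset.
    import ROOT

    # TFile.Open understands root:// URLs; ROOT fetches only the byte ranges
    # it needs (headers, requested baskets) from the EOS instance, not the
    # whole file.
    f = ROOT.TFile.Open("root://eospublic.cern.ch//eos/some/path/file.root")
    tree = f.Get("Events")  # placeholder tree name

    # Iterating over the tree pulls branch data on demand over the network.
    for i, event in enumerate(tree):
        if i >= 10:
            break
        print(event.someBranch)  # placeholder branch name

    f.Close()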
Where block or object storage is needed, e.g. storage for VMs, S3 storage for various uses, etc., Ceph is better suited: it has lower latency, and EOS does not offer block-level access. Besides providing storage services for OpenStack/OpenShift, Ceph also backs AFS and CVMFS, for example. CVMFS is another interesting piece of CERN's infrastructure: a read-only, HTTP-based FUSE filesystem used to distribute the experiments' software to grid sites around the world. Dan van der Ster, mentioned in the article above, has a good overview of Ceph usage at CERN here: https://youtu.be/2I_U2p-trwI?si=Tsq4h8NIu4vSZQwt
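To make the CVMFS part concrete, here is a small sketch of what the client side looks like once the FUSE mount is in place: repositories simply appear as read-only directory trees under /cvmfs, populated lazily over HTTP as files are touched. It assumes a host where the CVMFS client is configured and the sft.cern.ch repository (the common LCG software repository) is mounted; swap in whatever repository your site uses.

    # Sketch: with the cvmfs client configured, repositories show up as
    # lazily populated, read-only directories under /cvmfs. Nothing is
    # downloaded until a file or directory is actually accessed.
    import os

    repo = "/cvmfs/sft.cern.ch"  # assumes this repository is mounted

    # The first access triggers the FUSE client to fetch catalog data over HTTP.
    for entry in sorted(os.listdir(repo))[:10]:
        print(entry)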