meh. Practically speaking, we're talking about a single point of disk failure - which certainly happens, but at a rate low enough to live with. Plus, the actual amount of data stored is tiny; you can replicate it in seconds. Amazon has solutions for this if it's truly a concern, and I would guess Google does as well.
IMHO, the operation of etcd - and the fact that the data became unreadable if you lost quorum - was a much higher risk factor than possible disk failure. It was impractical to back up as well; you either have quorum or you don't. Even without NFS, I could back up that sqlite db every 5 minutes via a cron job and have most of my cluster state perfectly preserved.
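For what it's worth, that cron-job approach is about as simple as it sounds. A minimal sketch using Python's sqlite3 online-backup API - the paths here are guesses (the source path is a typical k3s-style location), so point them at wherever your setup actually keeps the db:

    import sqlite3

    # Assumed paths - adjust to your install.
    src = sqlite3.connect("/var/lib/rancher/k3s/server/db/state.db")
    dst = sqlite3.connect("/backups/state-latest.db")

    # Online backup: copies a consistent snapshot even while the
    # server is still writing to the database.
    src.backup(dst)

    dst.close()
    src.close()

Run that from cron every 5 minutes and you've got a recent copy of cluster state sitting wherever you like.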
Disk failure happens quite frequently for me at scale, but so do other things like RAM going bad or network cards dying or entire mainboards just acting weird or top-of-rack switches silently dropping packets because of memory corruption (all of these have happened to machines I'm responsible for in the last six months). Again, I think this is a matter of scale. If you've got enough machines that disk failure is a concern, you can also run a 9-node etcd cluster and have a big enough pager rotation that keeping 5 of them up 99.999% if not 100% of the time isn't a challenge. If you have less than a rack of machines and you're not at the point where you're worried about having a SPOF in your network switches or power supplies, running etcd is a bunch of overhead for a problem you don't have, and you are genuinely better served by a robust non-distributed system whose availability is the availability of your hardware.