The reason I chose Synology over the others was their SHR "filesystem", where you can keep adding heterogeneously sized disks after constructing the FS and it will make the most use possible of the extra capacity in the new disks.
When I researched it, ZFS did not yet have the RAIDZ expansion feature merged. It does now, though I think existing data still can't make full use of the extra space.
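(For reference, RAIDZ expansion works by attaching a disk to an existing raidz vdev; a sketch, assuming a pool "tank" with a vdev named "raidz1-0":
zpool attach tank raidz1-0 /dev/sdz
My understanding is that existing records keep their old data-to-parity ratio until rewritten, which is the caveat above.)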
I'm wondering if anybody has any better recommendations given the requirement of being able to add storage capacity without having to completely recreate the FS.
BTRFS doesn't care how big the disks are: you can just tell it to keep x copies of each data / metadata / system block and it will do the work of keeping your copies on different devices across the filesystem. Much like SHR, performance isn't linear with different-sized devices, but it's super simple to set up, and it's in-tree, whereas ZFS has a lot more complexity and is not baked into the kernel.
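For example, a minimal sketch of a two-copy setup across mixed disks (device names and mount point are placeholders):
mkfs.btrfs -d raid1 -m raid1 /dev/sdx /dev/sdy
mount /dev/sdx /mnt/nas
btrfs device add /dev/sdz /mnt/nas
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/nas
The balance at the end rewrites existing blocks so the new device participates in the raid1 profile.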
Snapshots are available, but they're a little more work to deal with since you have to learn about subvolumes. It's not that hard.
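A sketch, again with placeholder paths: create a subvolume for your data, then take read-only snapshots of it:
btrfs subvolume create /mnt/nas/data
btrfs subvolume snapshot -r /mnt/nas/data /mnt/nas/snapshots/data-2024-01-01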
To expand on this with an example: adding a new device (we'll call it sdz) to an existing Logical Volume Manager (LVM) Volume Group (VG) called "NAS", such that all the space on sdz is instantly available for adding to any Logical Volume (LV):
pvcreate /dev/sdz
vgextend NAS /dev/sdz
Now we want to add additional space to an existing LV "backup":
lvextend --size +128G --resizefs NAS/backup
*note: --resizefs only works for filesystems supported by 'fsadm' - its man-page says:
"fsadm utility checks or resizes the filesystem on a device (can be also dm-crypt encrypted device). It tries to use the same API for ext2, ext3, ext4, ReiserFS and XFS filesystem."
If using BTRFS inside the LV, and the LV "backup" is mounted at /srv/backup, tell it to use the additional space using:
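btrfs filesystem resize max /srv/backup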
Synology SHR is btrfs (or ext4) on top of LVM and MD. MD is used for redundancy. LVM is used to aggregate multiple MD arrays into a volume group and to allow creating one or more volumes from that volume group.
The comment I responded to was using LVM on its own, and I was wondering about durability. The docs seem to suggest LVM supports various software RAID configurations, but I'm not clear how that interacts with mixing and matching physical volumes of different sizes.
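For what it's worth, LVM can also provide the redundancy itself at the LV level instead of layering on MD; a sketch, reusing the "NAS" VG from above (size and LV name are placeholders):
lvcreate --type raid1 --mirrors 1 --size 100G --name backup NAS
Each mirror leg just needs enough free extents on a different PV, so mixed-size disks work, but as far as I know nothing automatically lays out the LVs to maximize usable space the way SHR does.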
In addition to the alternatives already mentioned, I've been very happy with SnapRAID+MergerFS for a few years now. I don't have to worry about a magical black box as with btrfs or ZFS, I can expand the array with disks of any size, if one disk fails I only lose the data on that disk while the array remains usable, and it's dead simple to set up and maintain.
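As a sketch of how little configuration is involved (paths and disk names are placeholders): snapraid.conf lists the parity file, content files, and data disks, and a mergerfs entry in /etc/fstab pools the same disks into one mount:
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
/mnt/disk* /mnt/storage fuse.mergerfs cache.files=off,category.create=mfs,allow_other 0 0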
The only drawback, if I can call it that, is that syncs are done on-demand, so the data is technically unprotected between syncs. But for my use case this is acceptable, and I actually like the flexibility of being in control of when this is done. Automating that with a script would be trivial, in any case.
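For example, a sketch of that automation as a nightly cron entry (the schedule and scrub percentage are arbitrary):
0 3 * * * snapraid sync && snapraid scrub -p 5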
I was disappointed when I fully understood the limitations of SHR after purchasing my Synology box, and subsequently failed to install MergerFS on it. It's the only thing I miss about my old self managed server.
I think it's great advice, but the "finish rarely" part is maybe understated. The goal is to try as many things as possible, as quickly as possible in order to find your true calling. You'll stick with it once found.
It does imply there are likely to be different classes of bugs. If the thing compiles at all, someone's done the work to get the types right, implement all the arms of match expressions, and write at least basic error handling. The errors are more likely to be high-level logic issues that you'd face in any code, not so much trivial implementation mistakes.
Disney could allow 3rd party cafes in Disneyland, mandate the use of Disney logos on all products, and charge a $2 royalty on every product sold with a Disney logo.
It's a dick move, but it shifts the legal argument from one of monopoly / gatekeeper status to one of intellectual property rights, the latter being much more business-friendly and entrenched in international agreements.
Interesting article. I wonder if the FS abstraction is too large; maybe it could be split into layers, such as a physical layer (a block KV store), an intermediate layer that builds files from blocks, and a top layer offering FS or object-store style access. Kind of similar to what's happening in the database space with query engines like DataFusion.