Have you checked out The National Museum of Computing (TNMoC) archive? Last time I was there they had a rather good magazine collection going back to the early 1980s. It may be worth a call. I see they have an (incomplete) online catalogue:
https://www.tnmoc.org/library-archive
When using LVM there is no need to use separate mdadm (MD) based RAID - just use LVM's own RAID support.
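For example (a sketch, with placeholder VG/LV names and sizes), a two-way RAID 1 LV can be created directly with LVM:
lvcreate --type raid1 --mirrors 1 --size 100G --name mylv ${VG}
Here --mirrors 1 means one mirror in addition to the original, i.e. two copies of the data in total.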
I have a workstation with four storage devices; two 512GB SSDs, one 1TB SSD, and one 3TB HDD. I use LUKS/dm_crypt for Full Disk Encryption (FDE) of the OS and most data volumes but two of the SSDs and the volumes they hold are unencrypted. These are for caching or public and ephemeral data that can easily be replaced: source-code of public projects, build products, experimental and temporary OS/VM images, and the like.
dmsetup ls | wc -l
reports 100 device-mapper Logical Volumes (LV). However only 30 are volumes exposing file-systems or OS images according to:
ls -1 /dev/mapper/${VG}-* | grep -E "${VG}-[^_]+$" | wc -l
The other 70 are LVM raid1 mirrors, writecache, crypt or other target-type volumes.
This arrangement allows me to choose caching, raid, and any other device-mapper target combinations on a per-LV basis. I divide the file-system hierarchy into multiple mounted LVs, each tailored to its usage, so I can choose both device-mapper options and file-system type. For example, /var/lib/machines/ is an LV with BTRFS to work with systemd-nspawn/machined, so I have a base OS sub-volume and then various per-application snapshots based on it, whereas /home/ is a RAID 1 mirror over multiple devices and /etc/ is also a RAID 1 mirror.
The RAID 1 mirrors can be easily backed up to remote hosts using iSCSI block devices. Simply add the iSCSI volume to the mirror as an additional member, allow it to sync to 100%, and then remove it from the mirror. One just needs to be aware of, and minimise, open files when doing so - syncing on start-up or shutdown when users are logged out (or from the startup or shutdown initrd) is a useful strategy.
Doing it this way rather than as file backups means in the event of disaster I can recover immediately on another PC simply by creating an LV RAID 1 with the iSCSI volume, adding local member volumes, letting the local volumes sync, then removing the iSCSI volume.
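A rough sketch of that add/sync/remove cycle (the LV and device names are placeholders, and the iSCSI device must already have been added to the VG with pvcreate/vgextend):
lvconvert --mirrors +1 ${VG}/home /dev/mapper/iscsi-backup   # add the iSCSI PV as an extra raid1 image
lvs -a -o name,sync_percent ${VG}/home                       # wait until Cpy%Sync reaches 100
lvconvert --mirrors -1 ${VG}/home /dev/mapper/iscsi-backup   # drop the iSCSI image again once synced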
I initially allocate a minimum of space to each volume. If a volume gets close to capacity - or runs out - I simply do a live resize using e.g:
lvextend --resizefs --size +32G ${VG}/${LV}
or, if I want to direct it to use a specific Physical Volume (PV) for the new space:
lvextend --resizefs --size +32G ${VG}/${LV} ${PV}
One has to be aware that --resizefs uses 'fsadm' and only supports a limited set of file-systems (ext*, ReiserFS and XFS), so if using BTRFS or others their own resize operations are required - e.g. BTRFS has its own 'btrfs filesystem resize' command.
mdadm RAID is rock solid. LVM RAID is not at the same level. There was a bug for years that made me doubt anybody even uses LVM RAID.
I could not fix a broken RAID without unmounting it. mdadm and ext4 are what I use in production with all my trust; LVM and btrfs for hobby projects.
To expand on this with an example: adding a new device we'll call sdz to an existing Logical Volume Manager (LVM) Volume Group (VG) called "NAS" such that all the space on sdz is instantly available for adding to any Logical Volume (LV):
pvcreate /dev/sdz
vgextend NAS /dev/sdz
Now we want to add additional space to an existing LV "backup":
lvextend --size +128G --resizefs NAS/backup
*note: --resizefs only works for file-systems supported by 'fsadm' - its man-page says:
"fsadm utility checks or resizes the filesystem on a device (can be also dm-crypt encrypted device). It tries to use the same API for ext2, ext3, ext4, ReiserFS and XFS filesystem."
If using BTRFS inside the LV, and the LV "backup" is mounted at /srv/backup, tell it to use the additional space using:
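For BTRFS that would be something along the lines of (for the mount point above):
btrfs filesystem resize max /srv/backup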
Synology SHR is btrfs (or ext4) on top of LVM and MD. MD is used for redundancy. LVM is used to aggregate multiple MD arrays into a volume group and to allow creating one or more volumes from that volume group.
The comment I responded to was using LVM on its own, and I was wondering about durability. The docs seem to suggest LVM supports various software RAID configurations, but I'm not clear how that interacts with mixing and matching physical volumes of different sizes.
I have a wake-alarm[0] that triggers 30 minutes before civil twilight, which is roughly 60 minutes before local sunrise.
In the northern hemisphere at 52 degrees latitude it gets earlier by about 2 minutes each day (an additional 4 minutes of daytime).
So I get more sleep and shorter days in winter, and less sleep and longer days in summer. It's liberating basing my schedule on that rather than some arbitrary clock time.
For Starlink the User Terminal (antenna, a.k.a. "Dishy") is a phased array. It tracks the satellite as it passes from west to east. Each satellite is in view for around 15 seconds - the phased array instantly flips from east to west and acquires the new in-view satellite in microseconds. There's no degradation in almost all 'flips', especially if the U.T. has an unobstructed view of the sky.
I've been using Starlink since early 2021 with IPv6 only internally. Starlink User Terminal hands out a /56 prefix (via DHCPv6) and mine has not changed in all that time so I wouldn't call it dynamic.
The User Terminal issues a router advertisement (RA) and my gateway gives itself an address in that /64 via SLAAC in addition to assigning itself an address from the /56 prefix.
If not using prefix delegation, each host's address depends on its SLAAC policy - if not preferring stable addresses (e.g: EUI64) then of course the public address will vary (be dynamic) when using temporary "privacy" addresses.
My gateway delegates /60 sub-prefixes of the /56 and bare-metal hosts then either delegates /62 or advertises /64s from the /60 to VMs, containers, network namespaces and so forth.
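To illustrate the split concretely (using the 2001:db8::/32 documentation prefix rather than my real one), a /56 divides into sixteen /60s and each /60 into sixteen /64s:
2001:db8:aa00::/56 - delegated to the gateway by the User Terminal
2001:db8:aa00::/60, 2001:db8:aa00:10::/60, ... 2001:db8:aa00:f0::/60 - the /60s delegated to bare-metal hosts
2001:db8:aa00::/64 ... 2001:db8:aa00:f::/64 - the /64s within the first /60, delegated or advertised to VMs, containers, etc.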
As someone else described, I have my gateway also delegate ULA prefixes by changing just the first two octets of the public delegated prefix to fddc (fd = ULA, dc = "data center" :) but otherwise identical, and likewise on the bare-metal hosts, etc.
ULA is used for internal services; ISP delegated prefix for anything that needs public access.
Multicast-DNS takes care of internal hostnames; everything is ${hostname}.local
There's a separate VLAN for legacy IPv4-only devices that does NAT64 using a ULA prefix.
DNS64/NAT64 for the laggards like github.com that can't grok 128 bit addresses :)
The only time I have problems with web services is when their DNS advertises an AAAA resource record but their firewall/load-balancers/servers are not configured to allow/listen on it.
As for static addresses, it says "a reservation system retains the ... IPv6 prefix even when the system is off or rebooted. However, relocating the Starlink or software updates may change these addresses."
I suspect in practice the IPv6 address will only change if you get moved to a different POP ground station. Some customers never get moved. I've been moved several times because I'm in NorCal and they keep switching me between Seattle and Los Angeles.
Yes, I use direct IPv6 peer-to-peer connections both outbound and inbound using the delegated prefix.
Even for a changing prefix, if operating a DNS authoritative server for a domain, any changes to the prefix can be quickly and automatically updated in both forward (AAAA) and reverse (PTR) resource records, provided the TTL for those records is appropriately short, allowing almost seamless inbound connections via FQDNs. I do this with a bind9 (hidden) master locally that notifies external slave servers operated by a highly available, anycast DNS service.
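A minimal sketch of such an update, sent to the hidden master with nsupdate (the zone, key path, hostname and address here are made up for illustration; the PTR in the ip6.arpa zone is updated the same way):
nsupdate -k /etc/bind/ddns.key <<EOF
server 127.0.0.1
zone example.org
update delete host.example.org. AAAA
update add host.example.org. 300 AAAA 2001:db8:aa00::10
send
EOF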
In my long experience of debugging and fixing ACPI errors exposed by Linux, the reason MS Windows avoids exposing these firmware bugs to the operator is that the fixes are incorporated into the Windows platform/chipset device drivers they ship.
I do something similar, except that I do not allow wildcard reception - I create a unique, service-identifying user@ for each service I give an address to, and have a simple script that immediately adds it to the Postfix virtual table.
That way the SMTP server can reject all unknown user@ without accepting them in the first place - preventing spamming and some types of denial of service through resource starvation.
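For example (a sketch - the addresses and map location are placeholders, and main.cf must already reference the map via virtual_alias_maps):
echo "acme-shop@example.org  me@example.org" >> /etc/postfix/virtual
postmap /etc/postfix/virtual
Recipients not present in the map can then be rejected during the SMTP transaction instead of being accepted and bounced later.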
I also apply greylisting based on a unique tuple (From, To, client IP address), so on the first connection with that tuple valid SMTP clients need to re-deliver the email after a waiting period. Any subsequent deliveries are accepted immediately.
This is often due to the total costs being externalised (pushed off to others) and therefore not reflecting the true cost of the replacement nor the costs of (safe) disposal of the old unit.
Externalised costs such as emissions from manufacturing of new raw materials (metals, plastics, gases, etc.), transportation, disposal, and more.
Obviously it depends on what exactly fails. I've kept 'white goods' going for over 20 years despite:
1) known defect where the Hotpoint Fridge/Freezer evaporator thermistor fails due to the freeze/defrost thermal cycle. Replaced more than 10 times; cost of a new thermistor is pennies; time to replace (after the initial exploration): 10 minutes.
2) Freezer control PCB misreading the thermistor; replaced the PCB: UK£35.
3) LG Washing machine bearing failures; replaced about 6 times; time to replace (after the initial exploration): 45 minutes.
I think sometimes repair-or-replace depends on one's state of mind. Figuring out what is wrong and how to fix it can be frustrating but, equally, it can be extremely satisfying to realise you can do it and are no longer reliant on some mystical "expert"!
Society as a whole in many countries is losing (or has already lost) the ability to be self-reliant and that lack makes people and communities generally more fragile.
Self-reliance is one of the drivers of hackers and tinkerers.
I often develop feelings for the products I use (I know...). When I look at the dishwasher I reminisce about how many moments I had whilst standing next to it as it tirelessly worked through my dirty dishes. I'll give it a tap. Sometimes I talk to it when loading, like "Hey there, I got you some new stuff. Don't worry, I'll feed you salt at the end of the week. Now I'll do your favourite program". Then once it finishes I say something like "Oh what great work you did there!" and so on.
Then when it broke (the motor seized) I just didn't have the heart to simply dispose of it. I sourced the motor and called in a repair guy who installed it. In total it cost me as much as I would have paid for a new dishwasher, but buying new would never give me the feeling that I had saved a friend.
I am so glad that I'm not the only one. Every time I shut the hood of a car after working on it, I give it a little loving pat-pat.
I have this inexplicable feeling, contrary to my usually rational self, that machines have a sort of soul and "feel better" when they're taken care of, and I feel like I'm letting them down when I stretch an oil-change interval, put off maintenance, or don't take care of a problem I'm aware of. I don't really believe they have souls or anything; it's just a feeling I get.
Come to think of it, I do the exact same things with my plants too.
My economic theory is that the price reflects ecological costs fairly well.
At first glance it might appear that fixing would have a lower environmental cost. But the money spent will be spent by the repairman on things like international travel or whatever - and each of the things the money is spent on has environmental costs and externalities.
I don't quite think so. I think we're in a status quo that prevents/obfuscates more efficient economic activity because it inflates GDP, which makes politicians and economists happy.
The magic number won't go up much if you call somebody, they tell you what you need to do - change the PCB - and ship you the part (for a small markup).