As the article points out: the hardware is at risk of physically failing and it’s getting harder to replace like for like. That’s the reason for looking at an upgrade. Hell, even turning the machines off to replace them is a challenge since some systems need to run 24/7!
Not necessarily. If there is custom hardware used to communicate with other systems, radar for example, there might be specific timing and latency requirements that could be difficult to meet under emulation.
The most recent Dolphin Emulator post described a bug where memory cards were written to too quickly under the emulator (and even on actual hardware, if your memory card was fast enough), which caused problems for some games because they did not expect save files to be written that quickly. Imagine things like that, but where the worst case isn't Wind Waker hanging while saving, but planes crashing.
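To make that concrete, here's a minimal, purely hypothetical C sketch (not Dolphin's or any game's actual code) of how a "write finished too fast" bug can happen: the save logic only starts watching for completion a few frames after kicking off the write, because on the hardware the developers had, nothing could possibly finish sooner.

```c
#include <stdbool.h>
#include <stdio.h>

static bool write_complete_irq;   /* set by the simulated card when the write ends */

static void start_write(int frames_to_finish, int *countdown)
{
    write_complete_irq = false;
    *countdown = frames_to_finish;
}

static void card_tick(int *countdown)   /* one frame of simulated hardware */
{
    if (*countdown > 0 && --*countdown == 0)
        write_complete_irq = true;
}

static void run_save(int frames_to_finish)
{
    int countdown, frame;

    start_write(frames_to_finish, &countdown);

    for (frame = 0; frame < 120; frame++) {
        card_tick(&countdown);

        /* The game only "arms" its completion check on frame 3 and clears
         * any status it considers stale; on period hardware no write could
         * have finished by then, so this looked perfectly safe.            */
        if (frame == 3)
            write_complete_irq = false;

        if (frame > 3 && write_complete_irq) {
            printf("%2d-frame write: saved OK at frame %d\n",
                   frames_to_finish, frame);
            return;
        }
    }
    printf("%2d-frame write: completion missed, save screen hangs forever\n",
           frames_to_finish);
}

int main(void)
{
    run_save(30);  /* slow, era-typical card: behaves as the developers expected */
    run_save(1);   /* near-instant write (fast card or emulator): event is lost  */
    return 0;
}
```

Run it and the 30-frame write saves cleanly while the 1-frame write misses the completion event entirely. That's the flavour of assumption that only breaks when the underlying hardware suddenly gets faster.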
Back when I was designing electronic circuits, the rule was to design for a minimum speed, but to make sure that a faster part would not cause a failure. The rationale was that newer parts were usually faster, and the older parts disappeared.
Of course, nothing can prevent poorly designed code and hardware.
The speed thing was just an example that easily came to mind. I can imagine there are other kinds of analog vs digital interactions that may not be easily replicated under emulation, especially with a system that grew somewhat organically over the last half-century.
Emulation is likely possible for many of the systems involved, but this is not a field where bugs, especially ones introduced by the emulation itself, would be easily acceptable.
> Of course, nothing can prevent poorly designed code and hardware.
Agreed, but the reality is that here, trying to fix things and ending up breaking them can and probably will kill people.
If the software directly accesses any digital hardware (something that Win9x still allowed; see DOS games with ISA sound cards), that makes straightforward virtualization impossible.
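For anyone who never wrote that sort of code: a DOS-era program talked to an ISA Sound Blaster by reading and writing I/O ports directly, with no driver or OS in between. A rough sketch (using the Borland/DJGPP-style outportb/inportb from <dos.h> and the conventional 0x220 base address; details vary by card) looks something like this:

```c
#include <dos.h>   /* outportb()/inportb(): Borland / DJGPP-style port I/O */

#define SB_BASE   0x220              /* conventional Sound Blaster base address */
#define SB_RESET  (SB_BASE + 0x6)    /* DSP reset port                          */
#define SB_READ   (SB_BASE + 0xA)    /* DSP read-data port                      */
#define SB_STATUS (SB_BASE + 0xE)    /* DSP read-buffer status port             */

/* Reset the DSP and wait for its 0xAA "I'm here" byte.  Code of this kind
 * pokes the hardware directly, which is exactly what makes it hard to run
 * under anything that doesn't emulate the card at the register level.     */
static int sb_detect(void)
{
    volatile int i;

    outportb(SB_RESET, 1);
    for (i = 0; i < 1000; i++)       /* crude settle delay (at least ~3 us) */
        ;
    outportb(SB_RESET, 0);

    for (i = 0; i < 65535; i++) {
        if (inportb(SB_STATUS) & 0x80)         /* data ready?                 */
            return inportb(SB_READ) == 0xAA;   /* DSP answered the reset      */
    }
    return 0;                                  /* nothing responding at 0x220 */
}
```

None of that goes through the operating system, so a hypervisor can't simply pass it through to modern hardware; something has to pretend to be the card, register by register, with believable timing.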
Operating systems have gotten a whole lot more reliable since Windows 95. The way I remember it, Windows 98 would regularly corrupt itself and need to be manually reinstalled. I'd done it so many times that I could pretty much recite the license key from memory. Modern Linux is rock solid. Even Windows 10 is very stable. They might be 'bloated', but modern OSes are way, way more stable.
> corrupt itself and need to be manually reinstalled
In my experience that's normally the fault of third-party software, and otherwise quite easy to diagnose and avoid or fix. Now OSes with more protections just hide those bugs, causing most software to regress to a barely-working state.
I ran 98SE as a daily driver from late 1999 until 2010, and it was reinstalled at most 3 times, not even coinciding with hardware upgrades.
Or just a power outage or a driver causing the loss of a write-back cache.
95 and 98 and ME crashed on a regular basis. I specifically remember upgrading from ME to XP and being so happy with the massively improved stability of the NT kernel over the 9x kernels.
If you think 9x was stable and reliable, you may be thinking very nostalgically.
I am not so sure. I ran 98 on bad hardware, and it crashed regularly. So much so that I installed Linux on it as early as 1998, and that was much more stable; it only crashed now and then. No doubt in both cases the poor hardware was the cause.
Anyway, two years later I got a brand-new laptop with good hardware that was running 98SE. As far as I remember, it didn't crash during normal usage. By then I was studying computer science, and would sometimes write or run programs that made it crash, but that was on me. I dual-booted Linux, and that didn't have any problems on that machine either.
Fun fact: I still have that laptop. It's over 25 years old now, but it still works and runs Windows 98SE!
I've noticed that operating systems can get very flaky when the disk space gets tight. It seems that too much code does not check for disk full write failures.
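A minimal C sketch of what checking actually means here (nothing OS-specific, just the general pattern): both the write and the final flush at close can fail with "no space left on device", and code that ignores either return value will happily leave a truncated file behind.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch: writing a file while actually checking for a full disk.
 * A surprising amount of code checks none of these return values, so on a
 * nearly full disk it silently truncates files and corrupts its own state. */
static int save_file(const char *path, const void *buf, size_t len)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;

    if (fwrite(buf, 1, len, f) != len) {      /* short write: often ENOSPC */
        fprintf(stderr, "write failed: %s\n", strerror(errno));
        fclose(f);
        return -1;
    }

    /* Buffered data is flushed at fclose(); that flush can also hit a full
     * disk, so its return value matters just as much as fwrite's.          */
    if (fclose(f) != 0) {
        fprintf(stderr, "close failed: %s\n", strerror(errno));
        return -1;
    }
    return 0;
}

int main(void)
{
    const char msg[] = "hello\n";
    return save_file("out.txt", msg, sizeof msg - 1) == 0 ? 0 : 1;
}
```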
It was still very much like modern systems. If you didn't install, uninstall, or aggressively reconfigure things they were pretty stable, and controlled changes could be achieved. Some of the problem though was that the systems required a lot of that to do anything fun with them at home.
Yes, we know that floppy disks and drives will wear out, and they have few if any sources for new repair parts. So the fact that the system is still more or less working today doesn't mean it isn't doomed and needs to be replaced before experiencing a catastrophic unrecoverable failure.
Because it wouldn't be profitable? How many do you think they could sell to a dying market, and what would those manufacturing costs be? What experts could you tap who know this space? They're all gone.
I read some years ago - IIRC the letters pages of BYTE, which dates it - about a critical factory control system in a company somewhere running on an IBM XT. The MFM drive had started to show some errors, so they got in touch with IBM, who being IBM, did not have any drives in stock (they'd stopped making them 15 years previously), but could retool a manufacturing line and make some. They offered to do it for $250k/drive. The company paid up.
That was cheaper at the time than modernising that system. But it's clearly not scalable long-term.
I've heard of S/360s in KTLO (keep-the-lights-on) mode in basements keeping banks running. Teams of people slowly crafting COBOL to get new features in at a cost of thousands of dollars a day each, and it "still works". But from a risk point of view, this is also ridiculous.
Safety critical systems have different economics. Yes, you can keep the floppy systems going, but the cost of keeping them going is rising exponentially each year, and at some point a failure will cost one or more airliners full of civilians and the blame will be put on not having a reasonable upgrade policy.
Sometimes you have to fix things before they stop working, or the cost is not just eyewateringly expensive in terms of dollars, but of human lives too.
Let's be a little more reasonable. I don't think anyone is saying we need AI. There are numerous other technological advances between floppy drives and AI that our air traffic control system could benefit from.
Does it work? Sure. You have to ask more questions. How much does it cost to keep it working? How much would it cost to upgrade? If we do nothing, along what sort of timeline can we expect it to stop working, or become cost prohibitive to maintain?
Well also, 20 years is less time than you think. For a system of this magnitude, deploying the replacement could easily take 5 years to get all the way through to full completion. So that's 1/4 of your runway gone right there.
Every year you delay pushes that lower, and then there's whether the funding will be available: are you in fair-weather economic conditions, or will the failure coincide with some other crisis? (i.e. do you want to be stuck replacing air traffic control systems in a rush because some war has wiped out the floppy supply chain right when your air logistics are a critical issue?)
The article completely skipped over this. This video was released literally a week ago and is completely mocking the FAA. Floppy disks are a big joke in this video.
I would trust a floppy-powered Windows 95 system over the horror show that passes for common operating systems in 2025.
What will they think of next? Adding AI to the ATC system?