
Did the person survive?


We have limited visibility into this in the emergency department. You stabilize the patient and admit them to the hospital, and then they become internal medicine's or the ICU's patient. Thankfully most of the work was done and consults were called prior to the outage, but they were in critical condition.


I will say - the way we typically find out really sends a shiver down your spine.

You come in for your next shift and are finishing charting from your prior shift. You open one of your partially finished charts and a little popup tells you "you are editing the chart for a deceased patient".


Sounds like this is hugely emotionally taxing. Do you just get used to it after a while, or is it a constant weight?

This is why I'm impressed by anyone who works in a hospital, especially in the more urgent/intensive care settings


I'll admit I have no idea what I'm talking about, but aren't there some Plan B options? Something that's more manual? Or are surgeons too reliant on computers?


There are plan B options like paper charting, downtime procedures, alternative communication methods and so on. So while you can write down a prescription and cut a person open, you can't manually do things like pull up the patient's medical history for the last 10 years in a few seconds, have an image read remotely when there isn't a radiologist available on site, or electronically file for the meds to just show up instantly (all depending on what the outage is affecting, of course). For short outages some of these problems are more "it caused a short rush on limited staff" than "things were falling apart". For longer outages it gets to be quite dangerous, and that's where you hope it's just your system that's having issues and not everyone in the region so you can divert.

If the alternatives/plan b's were as good or better than the plan a's then they wouldn't be the alternatives. Nobody is going to have half a hospital's care capacity sit as backup when they could use that year round to better treat patients all the time, they just have plans of last resort to use when what they'd like to use isn't working.

(worked healthcare IT infrastructure for a decade)


> So while you can write down a prescription and cut a person open, you can't manually do things like pull up the patient's medical history for the last 10 years in a few seconds, have an image read remotely when there isn't a radiologist available on site, or electronically file for the meds to just show up instantly (all depending on what the outage is affecting, of course).

I worked for a company that sold and managed medical radiology imaging systems. One of our customers' admins called and said "Hey, new scans aren't being properly processed so radiologists can't bring them up in the viewer". I told him I'd take a look at it right away.

A few minutes later, he called back; one of their ERs had a patient dying of a gunshot wound and the surgeon needed to get the X-ray up so he could see where the bullet was lodged before the guy bled out on the table.

Long outages are terrifying, but it only takes a few minutes for someone to die because people didn't have the information they needed to make the right calls.


Yep, when patients often still die even while everything is working fine, a minor inconvenience like "all of the desktop icons reset by mistake" can be enough to tilt the needle the wrong way for someone.


I used to work for a company that provided network performance monitoring to hospitals. I'm telling a story secondhand that I heard the CEO share.

One day, during a rapid pediatric patient intervention, a caregiver tried to log in to a PC to check a drug interaction. The computer took a long time to log in because of a VDI problem where someone had stored many images in a directory that had to be copied on login. While the care team was waiting for the computer, an urgent decision was made to give the drug. But a drug interaction happened, one that would have been caught had the VDI session initialized more quickly.

The patient died and the person whose VDI profile contained the images in the bad directory committed suicide. Two lives lost because files were in the wrong directory.
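
The failure mode itself (a login stalled while a bloated profile directory gets copied) is at least detectable ahead of time. Here's a rough sketch of the kind of nightly check a desktop team could run; the profile share path and the 500 MB threshold are made-up placeholders, not anything from that environment:

    import os
    from pathlib import Path

    # Hypothetical values for illustration only.
    PROFILE_ROOT = Path(r"\\fileserver\profiles")  # roaming-profile share (placeholder)
    SIZE_LIMIT_BYTES = 500 * 1024 * 1024           # flag profiles over ~500 MB

    def dir_size(path):
        """Total size in bytes of all files under `path`, skipping unreadable ones."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # file vanished or permission denied
        return total

    def oversized_profiles(root, limit):
        """Yield (profile_dir, size) pairs that would make a login copy slow."""
        for profile in sorted(root.iterdir()):
            if profile.is_dir():
                size = dir_size(profile)
                if size > limit:
                    yield profile, size

    if __name__ == "__main__":
        for profile, size in oversized_profiles(PROFILE_ROOT, SIZE_LIMIT_BYTES):
            print(f"{profile.name}: {size / 1e6:.0f} MB exceeds roaming-profile budget")

Run on a schedule, that gives someone a list of profiles to trim before a clinician is stuck waiting at a login screen.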


What's insane medical malpractice is that radiology scans aren't displayed locally first.

You don't need 4 years of specialized training to see a bullet on a scan.


We can definitely get local imaging with X-ray and ultrasound - we use bedside machines whose images can be captured and interpreted quickly.

X-ray has limitations though - most of our emergencies aren't as easy to diagnose as bullets or pneumonia. CT, CTA, and to a lesser extent MRI are really critical in the emergency department, and you definitely need four years of training to interpret them, and a computer to let you view the scan layer by layer. Many smaller hospitals don't have radiology on-site and instead use a remote radiology service that covers multiple hospitals. It's hard to get doctors who want to live near or commute to more rural hospitals, so it's easier for one radiologist to remotely support several.


GP referred to "processed," which could mean a few things. I interpreted it to mean that the images were not recording correctly locally prior to any upload, and they needed assistance with that machine or the software on it.


I am talking out my ass, but...

Seems like a possible plan would be duplicate computer systems that are using last week's backup and not set to auto-update. Doesn't cover you if the databases and servers go down (unless you can have spares of those too), but if there is a bad update, a crypto-locker, or just a normal IT failure, each department can switch to the backups and work from a slightly stale computer instead of very stale paper.
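
To make that concrete: the "deliberately stale standby" idea boils down to two invariants you could monitor - the standby's restore is old enough that this week's bad update or crypto-locker hasn't reached it, but new enough that the data is still useful. A minimal sketch of that check, where the marker-file path and the 7/14-day windows are made-up placeholders:

    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    # Hypothetical paths and windows for illustration only.
    RESTORE_MARKER = Path("/var/backups/last_restore_utc.txt")  # written by the restore job
    MIN_LAG = timedelta(days=7)    # lag production by at least a week
    MAX_AGE = timedelta(days=14)   # but don't let the data get uselessly old

    def check_standby(now=None):
        """Return a list of human-readable problems with the standby's state."""
        now = now or datetime.now(timezone.utc)
        try:
            restored_at = datetime.fromisoformat(RESTORE_MARKER.read_text().strip())
        except (OSError, ValueError):
            return ["no readable restore marker; standby state unknown"]
        if restored_at.tzinfo is None:
            restored_at = restored_at.replace(tzinfo=timezone.utc)  # assume UTC if naive
        age = now - restored_at
        problems = []
        if age < MIN_LAG:
            problems.append(f"restore is only {age.days} days old; too close behind production")
        if age > MAX_AGE:
            problems.append(f"restore is {age.days} days old; data too stale to be useful")
        return problems

    if __name__ == "__main__":
        for problem in check_standby():
            print("STANDBY WARNING:", problem)

Switching departments over to it during an outage is still a manual call, but at least you'd know ahead of time whether the standby is actually in the window you intended.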


We have "downtime" systems in place, basically an isolated Epic cluster, to prevent situations like this. The problem is that this wasn't a software update that was downloaded by our computers, it was a configuration change by Crowdstrike that was immediately picked up by all computers running its agent. And, because hospitals are being heavily targeted by encryption attacks right now, it's installed on EVERY machine in the hospital, which brought down our Epic cluster and the disaster recovery cluster. A true single point of failure.


Can only speak for the UK here, but having one computer system that is sufficiently functional for day-to-day operations is often a challenge, let alone two.


My hospital's network crashed this week (unrelated to this). Was out for 2-3 hours in early afternoon.

The "downtime" computers were affected just like everything else because there was no network.

Phones are all IP-based now; they didn't work.

Couldn't check patient histories, couldn't review labs, etc. We could still get drugs, thankfully, since each dispensing machine can operate offline.


There are often such plans, from DR systems to isolated backups to secondary systems, as much as the risk management budget allows at least. Of course it takes time to switch to these and back, and the missing records cause chaos in both directions (both inside the synced systems and with patient data). On top of that, not every system will be covered, so it's still a limited state.


Yes, but the more highly available you make things the more it costs, and it's not like this happens every week.


As I was finishing my previous comment it occurred to me that costs are fungible.

Money spent on spares is not spent on cares.


Thank you, I'm quickly becoming tired of HN posters assuming they know how hospitals operate and asking why we didn't just use Linux.


There are problems with getting lab results, X-rays, and CT and MRI scans. Those don't have a paper-based Plan B. An IT outage in a modern hospital is a major risk to the life and health of its patients.


I don't know about surgeons, but nursing and labs have paper fallback policies... they can backload the data later.


It's often the case that the paper fallbacks can't handle anywhere near the throughput required. Yes, there's a mechanism there, but it's not usable beyond a certain load.


I think it's eventually manageable for some subset of medical procedures, but the transition to that from business as usual is a frantic nightmare. Like there's probably a whole manual for dealing with different levels of system failure, but those procedures are unlikely to be well practiced.

Or maybe I'm giving these institutions too much credit?



