> Many of America’s most critical sectors, such as
healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated Federal effort would be beneficial in establishing a dynamic, “try-first” culture for AI across American industry.
I'm sure "move fast and break things" will work out great for health care.
And there are already "clear governance and risk mitigation standards" in health care; they're just not compatible with a "try-first" culture of using unproven things.
> I'm sure "move fast and break things" will work out great for health care.
Health care is already broken to the point of borderline dystopia. When I contrast the experience I had as a young boy of visiting a rural country doctor to the fast food health care experience of "urgent care" clinics, it makes my head spin.
The last few doctors I've been to have been completely useless and generally uncaring as well. Every visit I've made to a doctor has resulted in my feeling the same at the end but with a big medical bill to go home with.
At this point the only way I'll intentionally end up in a medical facility is if I'm unconscious and someone else makes that call.
Dentistry has met a similar fate as more and more dentists have been swallowed up by private equity. I've had loads of dental work over the years, including a 'surprise' root canal, and never had an issue. But my last dentist had a person on staff dedicated to pushing things through on the insurance front, and my procedure there was so awful it bordered on torture.
I used to be an annual-checkup-plus-three-cleanings-a-year dentist person. Today I'm dead set on not setting foot in any kind of medical facility unless the alternative is incredible pain or certain death.
I’m sure move fast and break things, now with AI (tm) will reduce the deepening monetization of the doctor-patient relationship that’s the root of your complaints.
Sometimes I just need a prescription. I don't know why I have to drive somewhere, fill out a form, wait, see a nurse, tell them what I wrote on the form, wait some more, see a doctor and tell them what I wrote on the form and have him write me a prescription.
Why can't I just chat with an AI bot and get my prescription? Much cheaper to administer which helps monetization (!) but much better and cheaper for me.
Things aren't slow and wasteful because of monetization. Having all these steps doesn't necessarily mean more profit. I would argue that it's deeply inefficient for everyone involved, doctors included. For instance, physician salaries have decreased 25% in real terms over the last 20 years.
> Why can't I just chat with an AI bot and get my prescription?
Because people have decided that whatever drug you are taking shouldn't be taken without a doctor's oversight. If you have a problem with that conclusion, the response should be lobbying to get that drug reclassified as safe for over-the-counter sale, not completely removing the doctor's oversight from the prescription process. Ironically, your proposal here is using AI to treat the symptom that frustrates you without any attempt to diagnose or treat the actual root cause of the problem.
Doctors' oversight should be removed (not as an option, but as a requirement) for 90% of all prescriptions. Unless the drug has externalities like antibiotic resistance, is heinously addictive, or is so difficult to administer correctly that you can't take it outside of a hospital, there's no good reason to tell people what they can't put in their bodies. Whether or not your insurance will pay for it without a prescription is another matter.
This is not an argument that has any relevancy to AI. If anything, someone who believes what you say should be against the introduction of AI into the system because your argument is fundamentally that these drugs shouldn't have a gatekeeper. Swapping out one gatekeeper for another, especially with the new gatekeeper being the unknown black box of some AI middleman, won't actually address your complaint.
I think the AI could help. Sure, there are a lot of drugs that should be over the counter; I don't know why anyone would abuse ear drops for a kid. But the AI could let you know if there are any dangerous interactions or if your ailment would be helped by this prescription. Besides, the AI could spend more time with you than a doctor and answer questions.
When it comes to abuse, you already have it with real doctors. Pill mills exist.
Thank the deity of whatever direction you pray in that you are on the happy path and the whole procedure seems frivolous to you. Those people not on the happy path are saved great trouble, as are their families, by having a doctor in the loop.
> Sometimes I just need a prescription. I don't know why I have to drive somewhere, fill out a form, wait, see a nurse, tell them what I wrote on the form, wait some more, see a doctor and tell them what I wrote on the form and have him write me a prescription.
You haven't had to do this for years, unless you need certain controlled substances, and then after the first in-person visit for that, you can make remote follow up appointments.
I can see room for an argument for liberalizing what’s available over the counter (and, I guess we’ll have to work out something with insurance in that case..). But the whole point of the prescription system is that some medicines need doctor consultation before you use them. Working around that with AI seems quite silly.
My 3 1/2 year old daughter woke up from a nap on a weekend, just after recovering from a cold, screaming while grasping her ear, telling us how much it hurt. I looked in it with an otoscope and confirmed it was super red. I figured they wouldn't be able to send a prescription, but my wife tried it anyway - and sure enough, the telemedicine option was no good. One very rushed trip 30 minutes into town to urgent care before they closed, to have a nurse practitioner look in her ear and confirm what we absolutely already knew, and we finally had our prescription - and $200 less in our bank account.
> Why can't I just chat with an AI bot and get my prescription?
You don't understand why the person who dispenses dangerous drugs to the public needs to be a licensed professional and not a chatbot who called me a genius earlier today when I said I want to code up a script to pull some data from an endpoint and decode the data so it's human readable?
Very few prescription drugs are dangerous. Of those few that are, almost none of them are more dangerous than alcohol or cigarettes, and boy are the people who dispense those to the public not licensed professionals.
No. We shouldn't even require Eliza Chatbot 2.0 unless they're more dangerous than alcohol or cigarettes. Just an ID check by the clerk at CVS will do.
it's tough to tell what's going wrong for you, but concierge medicine will give you a full hour and be much more invested in finding the root of your issues.
keep in mind, doctors are also trying to figure out if you're a reliable narrator (so many patients are not) or trying to scam for drugs. best of luck!
Why is healthcare a borderline dystopia? How would you compare health outcomes of human beings in 2025 vs every year since the dawn of homo sapiens? One thing you point to is your experience as a child versus as an adult with medical bills; couldn't there be another factor there? As for saying you would never set foot in any kind of medical facility, I don't think that's a typical person's experience. Maybe I'm delusional.
It's true. Recently I moved to a rural area and many nurses work as doctors. Soon we won't have hospitals here so there's no more need to keep up the cruel charade. It's absolutely disgusting and the primary reason I could never have children. It would be impossible to guarantee their security.
Edit:
I haven't yet achieved my savings goal so I can escape to a place where it's safe to have a family.
AI for treatment is rightfully scrutinized. AI for billing or other administrative tasks could be a big cost saver since administrative costs are a huge expense and a major factor of high consumer costs.
> AI for billing or other administrative tasks could be a big cost saver…
You’d hope so, but doubtful. More likely it’ll be health care providers using “AI”s to scheme how to charge as much as possible, and insurers using “AI” to deny claims.
Luminae AI accurately predicts your uninsured patient's asset values so that you can quickly write off bad debts and only chase those with high asset values. Luminae AI will increase your net collection rate by at least 15%.
"It's a game changer, we've increased our gross collection rate by 30%. We've also started a new business to flip foreclosed homes nearby."
Yeah, I'm sure they'll find a way to fire lots of staff but still charge patients the same. Of course if they use the current data for training, it will result in similarly terrible outcomes.
Billing and administration feels like a made-up self-own, though. A lot of that crap could just... not be done, as shown by the huge expansion of the administration-to-medical-dollar ratio over the past 50 years.
Of course, one side of this is that the models will also be used adversarially against patients seeking legitimate treatment in order to squeeze more profits out of their suffering.
The other side of this is that with fewer administrative insurance jobs, the talking point that universal healthcare will "kill insurance jobs" can finally be laid to rest, with capitalism doing that for them instead of the free-healthcare boogeyman.
It will be an interesting arms race. The real losers will be the human individuals, not insurers, who will have to contend with an AI when disputing claims. I have little faith that the prompt will encourage fair interpretation of (sometimes deliberately) ambiguous rules.
Healthcare in the US is already in very poor shape. Thousands die because of waiting for care, misdiagnosis, or inefficiency leading to magnified costs which in turn leads to denied claims because insurers won't cover it. AI is already better at diagnosis than physicians in most cases.
There is quite a lot of easy-to-find information on the web showing that the US spends twice as much per capita as our European peers and has worse outcomes, not just on average, but worse outcomes even when comparing similar economic demographics, including wealthy Americans. We spend $5T a year on health care, a comparative waste of over $2.5T a year.
sorry. I was responding to the part that says that "Healthcare in the US is already in very poor shape."
Is AI better than most physicians for diagnosis? I doubt it, and I doubt that there have been any real studies, as the area is so new and changing.
My personal experience? I am actually quite impressed, and I am an AI skeptic. I have fed in four complex scenarios that either I or someone close to me was actually going through (radiology reports, blood and other tests, lists of symptoms, etc.) and got diagnoses and treatment options that were pretty spot on.
Would I say better? In one case (this was actually for my dog), it really was better in that it came up with the same diagnosis and treatment options, but was much better at providing risks and outcome probabilities than the veterinary surgeon was, which I then verified after getting a second opinion. My hunch is that this was a matter of self-interest, not knowledge.
In two other scenarios, it was spot on, and in the fourth case it was almost completely spot on except for one aspect of a surgical procedure that has been updated fairly recently (it was using a slightly more old fashioned way of doing something).
So, I think there is a lot of promise, but I would never rely solely on an AI for medical opinions.
> A physician or AI begins with a short case abstract and must iteratively request additional details from a gatekeeper model that reveals findings only when explicitly queried. Performance is assessed not just by diagnostic accuracy but also by the cost of physician visits and tests performed.
I believe that dataset was built off of cases that were selected for being unusual enough for physicians to submit to the New England Journal of Medicine. The real-world diagnostic accuracy of physicians in these cases was 100% - the hospital figured out a diagnosis and wrote it up. In the real world these cases are solved by a team of human doctors working together, consulting with different specialists. Comparing the model's results to the results of a single human physician - particularly when all the irrelevant details have been stripped away and you're just left with the clean case report - isn't really reflective of how medicine works in practice. They're also not the kind of situations that you as a patient are likely to experience, and your doctor probably sees them rarely if ever.
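The quoted evaluation protocol can be sketched as a simple loop. This is a rough illustration only, with made-up case data and costs; names like `gatekeeper` and `run_episode` are my own, not the benchmark's:

```python
# Sketch of a sequential-diagnosis benchmark: a "gatekeeper" reveals
# findings only when explicitly queried, and each episode is scored on
# both diagnostic accuracy and cumulative test cost.

CASE = {
    "abstract": "54-year-old with fever and night sweats",
    "findings": {  # revealed only on request, each with a dollar cost
        "cbc": ("elevated WBC", 20),
        "chest_ct": ("mediastinal mass", 300),
        "biopsy": ("Hodgkin lymphoma on histology", 800),
    },
    "diagnosis": "Hodgkin lymphoma",
}

def gatekeeper(case, query):
    """Reveal a finding only if explicitly requested; return (result, cost)."""
    return case["findings"].get(query, ("not available", 0))

def run_episode(case, agent_queries, final_answer):
    """Score one diagnostic episode by accuracy and total spend."""
    total_cost = 0
    for q in agent_queries:  # the agent iteratively orders tests
        result, cost = gatekeeper(case, q)
        total_cost += cost
    return {"correct": final_answer == case["diagnosis"], "cost": total_cost}

print(run_episode(CASE, ["cbc", "chest_ct", "biopsy"], "Hodgkin lymphoma"))
# → {'correct': True, 'cost': 1120}
```

The interesting tension this captures is that an agent that orders every test maximizes accuracy but gets penalized on cost, which is closer to how real clinical decision-making is constrained.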
Either way, the AI model performed better than the humans on average, so it would be reasonable to infer that AI would be a net positive collaborator in a team of internists.
How else would a study scientifically determine the accuracy of an AI model in diagnosis? By testing it on real people before they know how good it is?
Why not? Have AI do it, then have a human doctor do a follow-up/review? I might not be a fan of this for urgent care, but for general visits I wouldn't mind spending a bit of extra time if it was followed by an expert exam.
So you claim that nobody in the US has died due to waiting for care, misdiagnosis, or inefficiency leading to magnified costs which in turn leads to denied claims?
The implication was that since I didn't work in a clinic I couldn't make any inferences or reference to facts sourced from the news. If that's the standard for truth then there's no point in making claims since anyone could just say "uhhh you're not a doctor so you can't say that" ad infinitum.
> I'm sure "move fast and break things" will work out great for health care.
It probably would if you quantify risk correctly. I'm not likely to die from some experimental drug gone wrong, but extremely likely to die from some routine cause like cancer, heart disease, or other disease of old age. If I trade off an increase in risk from dying from some experimental treatment gone wrong for faster development of treatments that can delay or prevent routine causes of death, I will come out ahead in the trade unless the tradeoff ends up being extremely steep in favor of risk from bad treatments.
But that outcome is very unlikely, because for this to be the case the bad treatments would have to be actually harmful instead of just ineffective (which is much more common). And it also fails to take into account the possibility that there isn't even a tradeoff, and AI actually makes it less likely that I will die from an experimental treatment gone wrong or other medical mistake, so it's just a win-win. And there is already evidence that AI outperforms doctors in the emergency room: https://pmc.ncbi.nlm.nih.gov/articles/PMC11263899/
American Healthcare's "brokenness" involves massive bureaucracy, gate-keeping and processes that pressure providers to limit resources. But it does provide necessary things to people. A system that reduced the accuracy of diagnosis and treatment could still cost many lives.
i’m getting super fatigued by this change we’ve had, where what used to be beta testing with a closed group of invested parties has morphed into what we have now.
from video games to major product roll outs to cars.
will all of the knowledge gained from this product-research testing of AI on medicine be given away to the public, the same way university research used to be given to the scientific community? or will this beta test on the public’s health be kept as a company “trade secret”?
if they’re going to “move fast and break things” with the public, in other words beta research on the public, then it’s incredibly worrisome if the research is hidden and “gifted” to a handful of their cronies.
particularly so when quite a lot of these people in the AI sphere have many times vocally declared that they despise the government and that the government helping people is awful. from one side of their mouth they chastise the government for spending money to boost regular communities of people, while simultaneously using it to help themselves.
The solution to this is to form a Health Group that combines insurance with a whole bunch of provider services. Then approve those claims that result in you paying yourself.
These are terrible. Then you have a very small in-network option and get screwed when going anywhere else. This is basically what happens with CVS Caremark drug coverage - any recurring prescription must go through them or they won't cover it. They'll only cover one-off prescriptions at competitors. It's really pretty terrible, especially if you need a compounded medication. I'm not sure how such an anticompetitive racket is legally allowed to exist.
> Move fast and break things is why we have progressed from chopping off people's limbs and giving them cocaine to now
No it wasn't. The "move fast and break things" people were selling snake oil and alchemy while the actual science progressed slowly and deliberately, and the regulations around the latter were often written in the blood carelessly shed by the former.
Anaesthetics in the form of ether went from official invention to worldwide use (in wealthy countries) in less than a decade. Many other medical inventions followed suit.
"The FDA moves slowly" is a sentence I would agree with.