Medicine's Machine Learning Problem (bostonreview.net)
71 points by happy-go-lucky on Jan 8, 2021 | 110 comments


I'm an eye surgeon and self-taught machine learning practitioner. I started to learn Python in 2016, when the deep learning hype was at its highest.

After 3 years of research, playing with datasets, extracting and cleaning data from EMRs and from different machines, I'm not sure that the biggest problem with the so-called "AI" is the inequalities it can induce; rather, it is whether it is useful at all. This is a little bit provocative, so let me explain.

First, it took me a very long time to really, fully get that AI is not fundamentally different from a simple linear regression (inferring a rule from data). More powerful, but definitely no intelligence added. Most of my fellow doctor colleagues still think that AI is special, different, like magic; I also thought like this before learning to code.

Scores inferred from data have been used in medicine for decades, and fundamentally, nothing changed with the AI wave. I'm extremely impressed with the performance of GANs for image generation, and with what deep RL enables in controlled environments (which the human body is not); however, I can't see any applications of those technologies in medicine.

OK, deep learning makes it possible to analyze images with a high level of performance. However, at the end of the day, nobody (informed) wants an AI diagnosis, and the doctor will read the images. He will maybe have a pre-completed report: wow, incredible. We are very far from the disappearance of radiologists that Geoffrey Hinton took for granted a few years ago.

Around that time, a team published a paper in Nature about a DL algorithm which could diagnose melanomas better than dermatologists, using a picture. Unfortunately, no real-life application. Why? Because when you suspect a melanoma, if you have any doubt, you won't take a chance: you will take a biopsy. What is the interest of guessing the result of the biopsy that you will do anyway, given that if you guessed wrong, the patient dies? No interest.

I also realized that it is extremely difficult, if not impossible, to use data from EMRs out of the box. Medical data is intrinsically dirty, because humans are complex things that do not fit easily in little boxes. Hence, if you want quality data, you have to plan your data collection in advance, and motivate all your fellow doctors to check the little boxes correctly, for many years (we are talking about big data, no?). Of course, there are some exceptions, but most of the time the data cleaning process is extremely hard to perform (however, if a dedicated team of people with medical knowledge concentrated on this work, things could be different; I had to clean the data myself).

I'll finish with the most ironic part: I dedicated a few years of my life to a topic where both optics and prediction from data are involved (intraocular lens calculation in cataract surgery). I tried a great deal of ML approaches, only to find recently that by better accounting for the optical specificities of the problem I was trying to solve, I obtained excellent results, better than with ML, even with a dumb multiple regression. Ouch. The lesson is: physics beats AI.

I would be happy to be challenged on this topic.


I am a dermatologist, AI researcher and co-founder of an AI startup (skinsmart.ai), and I would agree with you regarding the utility of AI in making an accurate diagnosis of melanoma. I don't think it has a significant role to play in the dermatology clinic for this application. However, I am very optimistic about the potential for AI to help in the triage of patients referred to dermatology by non-specialists. For this application you are not trying to diagnose melanoma, but instead aiming to identify, with a high degree of accuracy, benign lesions that do not need review in the dermatology clinic.


Is it necessary to go so far as making a diagnosis at all? Wouldn't it suffice to detect, and alert the user, that some of her moles have changed shape and she might need to have them looked at more carefully by an expert? This is a task that is very difficult to perform with the naked eye, especially for people with skin types that produce lots of moles, and an automated decision that could be relied on to detect otherwise imperceptible changes could perhaps even save some lives.


Yes, this is the idea of mole mapping, and there are 3D whole-body photo imaging systems available for this with automated detection of changing lesions. It's harder to do on a smartphone, but maybe possible.


Thanks - I'll have a look at "mole mapping" now that I know the term.


If I had a benign lesion referred by my PCP to dermatology, I’d want a dermatologist to take a look at it. It’s never been difficult to get a dermatology appointment.


The situation may be slightly different in the NHS (National Health Service), where there is an overwhelming number of referrals from general practitioners for suspected skin cancer, most of which turn out to be benign. As a consequence, there is a lack of capacity to see patients with other skin conditions. Of course, it's always possible to see a private dermatologist if you have health insurance or are happy to pay.


If that’s true, that sounds like a different problem. Maybe they need to train more dermatologists? And if there are appointments available privately, well... I don’t know what to say. Seems like a structural, systemic failure, which is odd. Maybe dermatologists are gaming the system to induce private pay.


The number of dermatologists trained in the UK is entirely decided (and paid for) by central government. UK dermatologists have for many years highlighted the need to train more consultants.


Sure, but the US has similar restrictions and problems. Medicare DME pays for almost all of the residency spots. In 2015 there were 400 dermatology spots in the USA. I guess one issue is travel time: a lot of folks don't live near cities and lack access to specialty care.


Maybe for you; I would have needed more than 3 months to get one. Thankfully, in my case the issue seems to have been benign and was gone in a month.


May I suggest, in response to your sentiment that applications of AI to medicine are lacking, that you are looking at applications that merely replace current medical practices. An AI diagnosis of a medical image seems redundant indeed; however, in this situation a patient has seen a doctor because of complaints and has been sent to the radiologist for further investigation. This medical practice is reactionary, and suspicions are already present, so of course the AI isn't doing much useful here.

Alternatively, imagine a proactive medical world in which preventative screenings are commonplace. Currently, implementing routine screenings in the absence of any complaints is prohibitively expensive at scale, because it requires manpower, and the expense of those man-hours needs to be justified by a medical practitioner. AI can help in this proactive medical world by reducing the number of hours real people spend looking through data to detect patients' problems, reducing the cost of routine screenings at large. Again, this wouldn't replace doctors, as you'd still need a specialist to analyze any positive hits, but it differs from your scenario, in which the AI diagnosis seems redundant.

So, when preventative medical practices are more prevalent, mass routine screening procedures will need help from machines to stay cost-effective, and that, I believe, is where this technology will find its application.


I disagree. You don't really want to routinize that level of medical surveillance, due to the classic Bayesian predictive power problem. When you come in with a complaint, it changes the prior and is additional evidence with which to revise the diagnosis on top of the screening information.
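To make the point concrete, here is a back-of-the-envelope sketch of that Bayesian problem; the prevalence, sensitivity, and specificity numbers are illustrative, not taken from any specific test:

  def positive_predictive_value(prevalence, sensitivity, specificity):
      # P(disease | positive test), by Bayes' rule
      true_pos = prevalence * sensitivity
      false_pos = (1 - prevalence) * (1 - specificity)
      return true_pos / (true_pos + false_pos)

  # asymptomatic screening: 0.5% prevalence
  print(positive_predictive_value(0.005, 0.95, 0.95))  # ~0.09: ~9 in 10 positives are false
  # patient presenting with a complaint: the prior jumps to, say, 20%
  print(positive_predictive_value(0.20, 0.95, 0.95))   # ~0.83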

What you do want out of AI is to flag areas of interest in imaging, for example, and to help identify when records are at risk of being incorrectly normalized. Ideally, even if the end effect is marginal (say, bumping accuracy from 80% to 90%), if it enables a workflow that decreases the exhaustion and frustration of the doctor, you will want that in place.

Of course it could just as well be used as an excuse by management to increase any given doctor's throughput, so it might not work as you would want.


Screening has been shown to be effective for lung cancer. With enough data, we can improve the posterior enough for certain applications that we don’t need the stronger prior of complaints.

Over time as AI improves, more and more diagnoses can look like this.


Sounds good, doesn't it? But you need a really, really low false positive rate for this to work out. This is already a problem with mammography:

https://www.cochrane.dk/news/new-study-finds-breast-cancer-s...


Health policy is fraught with counter-intuitive phenomena, and screening is one of them.

It seems like it should help, but in practice it leads to over-diagnosis.

For example, cancer rates jumped in Korea after screening, with no impact on patient outcomes [1]. There are several other examples.

[1] Lee, J. H., & Shin, S. W. (2014). Overdiagnosis and screening for thyroid cancer in Korea. The Lancet, 384(9957), 1848.


You can hardly conclude from this study that broad population screening is ineffective. You have to consider, among other things, the treatments available for the disease being screened and the cost of the screening program. If treatments for the disease already have a low success rate (what is low?), the timing of detection doesn't really help. Additionally, if the cost of the screening program is negligible (what is negligible?), then even successfully treating a few patients may be worth it.


The current consensus about over-diagnosis (as I understand it) is that when there is a significant false positive rate and the cost of proving a positive false is high (in money, time, effort, worry), the screening program is not helpful. Some go further and say that low-cost screening drives some of the high cost-to-outcome ratio in the US. I'll try to find a citation in my textbooks if you are interested.


I think the issues are deeper than false positives. It's possible that transient diseases get detected that would have resolved themselves without any treatment. Instead of a non-treatment, one now has to deal with the side effects of the interventions applied.


This is exacerbated by the fact that if the AI tells the doctor that there is a doubt, no doctor will take the risk of not doing a biopsy / CT scan / MRI / surgery (depending on the case). Because how would you defend yourself in front of a judge? This is something we always have in mind.

This is how you end with false positives and over-diagnosis.


This is a false blanket statement, and one that could change as we start to see human+AI performance exceed human performance alone.

For lung cancer screening, the NLST showed a 20% reduction in mortality, and now NELSON has shown even stronger results in Europe.

This “all screening is bad” line is FUD in the medical field, frankly. Yes, it has to be studied and implemented carefully, but making blanket statements about screening as a whole is factually incorrect.


I have not stated "all screening is bad".

Broad-based population screenings, of the kind the parent comment suggests, are, in my opinion.

I've yet to see any clinically valid distinguishing aspects that suggest AI would add value to screening. Curious to hear the evidence that drives your optimism about human+AI.

Just to note, the NELSON study [1] focuses on high-risk segments. Their paper also recommends a "personalized risk-based approach" to screening. This seems reasonable.

[1] https://www.nejm.org/doi/full/10.1056/nejmoa1911793


The general thread here is about AI helping with a more proactive approach to medicine. Screening for high risk populations certainly falls under that.

You certainly did say that screening leads to over-diagnosis.

I think for screening, the best results are probably the upcoming prospective study from Kheiron.

https://www.kheironmed.com/news/press-release-new-results-sh...


I suspect, btw, that the Google model in this paper https://www.nature.com/articles/s41586-019-1799-6

will show stronger performance. But Kheiron appears to be ahead in proving the value of the tool, since they have actually validated it prospectively.


I would differentiate prevention from screening. Screening is for early detection of a problem (for example, the Pap smear to detect pre-cancerous lesions on the cervix). Prevention stops the problem from arising in the first place (e.g. the vaccine against the human papilloma virus, which causes cervical cancer).

Truly effective preventative measures are the apex achievement of medical science, and have simply deleted an unimaginable amount of human suffering from our modern lives. Vaccines and sanitation are the best examples. They are astoundingly cost effective measures, and are so good that in many ways they are unimprovable in any significant way. 2020 is yet another example of how important vaccine technology is to every single person on this planet.

Screening is nowhere near as beneficial as actual prevention. It is expensive, labour-intensive, requires behavioural modification, definitely harms a significant proportion of people due to false positives, and in controlled trials only modestly improves hard clinical outcomes, under the most charitable assumptions of compliance and follow-up care.

You are proposing a model of 'high-touch' medicine, where people have a raft of continuously administered screening tests for a long list of conditions. This could only ever be applied to a small proportion of the world's population, and would require highly motivated and well-educated patients. In my opinion, spending a day in a primary care medical clinic would disabuse you of the notion that this is a feasible or desirable outcome.


> I tried a great deal of ML approaches, only to find recently that by better accounting for the optical specificities of the problem I was trying to solve

I want to point out that serious machine learning researchers are not oblivious to this, despite what the deep learning boom might suggest to the contrary. Modern methods have shown that we are capable of building predictors with surprisingly complex representations that can solve large-scale downstream tasks; i.e., our models are "flexible" enough.

The next challenge is whether they favor the "right kind" of solutions. For instance, Convolutional Neural Networks (CNNs) are architecturally just sparse versions of fully-connected neural networks. Why is it, then, that CNNs perform far better on images? A key reason is that the "inductive biases" afforded by an MLP aren't strongly aligned with the structure of images. Another instance of this is the covariance functions used in Gaussian Processes: the squared exponential kernel is very flexible and can in principle fit anything. Nevertheless, if the problem has specific structure, say periodicity, one had better use the periodic kernel, because its inductive biases rightly align with the kind of solutions we expect.
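As a small illustration of that last point, here is a sketch (assuming scikit-learn; the data and hyperparameters are made up) of how a periodic kernel keeps extrapolating a periodic signal where the squared exponential kernel falls back to the mean:

  import numpy as np
  from sklearn.gaussian_process import GaussianProcessRegressor
  from sklearn.gaussian_process.kernels import RBF, ExpSineSquared

  rng = np.random.default_rng(0)
  X = rng.uniform(0, 6, size=(40, 1))
  y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(40)  # noisy periodic signal

  X_far = np.linspace(11, 12, 5)[:, None]  # well outside the training range

  for kernel in (RBF(length_scale=1.0), ExpSineSquared(length_scale=1.0, periodicity=1.0)):
      gp = GaussianProcessRegressor(kernel=kernel, alpha=0.1 ** 2).fit(X, y)
      # RBF reverts to the prior mean far from the data; the periodic kernel,
      # whose inductive bias matches the signal, keeps predicting the oscillation
      print(type(kernel).__name__, gp.predict(X_far).round(2))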

> The lesson is: physics beats AI.

As a consequence, the single biggest reason physics beats a generic AI in the short term is precisely our ability to explicitly provide inductive biases that align with our expectations of the physical system.

We haven't found the secret sauce for every possible system in the universe. I don't think we can, either. But what we can do is devise ways to "control" the inductive biases we encode in machine learning systems, so that they align with our expectations of the way the system should behave.


I think you are not being creative enough about how AI can influence medical care, and are also not aware of existing deployed solutions that are making a significant clinical impact.

For example, viz.ai has a solution that helps get brain bleeds in front of the eyes of surgeons more quickly. It is deployed and has significantly cut the average length of stay in the neuro ICU.

https://mobile.twitter.com/viz_ai/status/1314710308603133953

I work at Caption Health, where we are enabling novices to take echocardiography scans. The doctors who work with our technology have found it extremely useful for helping diagnose cardiac involvement during covid.

https://captionhealth.com/education/

As much as I have respect for the expertise of medical doctors, I would ask that you have respect for folks working to apply AI in medicine.


Hi, I did not intend to be disrespectful; sorry if you read my message that way.

I mainly intended to underline the fact that we (doctors) were promised a revolution in healthcare (AKA: our disappearance) and we ended up with diagnostic scores.

However, I gladly admit that I exaggerated and that AI technologies can be helpful in some cases, of course.


Geoffrey Hinton really made things hard for folks on the AI side and even walked back that promise.

I think it’s the classic thing where it’s overestimated in the short term and underestimated in the long (longggg) term.

My sense is that for AI to have the full impact it will one day reach, it will take rethinking medical care entirely with online machine learning and data at the core of how decisions are made.

ML was able to revolutionize how ads are delivered (for better or worse, but at least reaching the objectives of those who deployed it) because you can update and deploy the models multiple times a day.

If we can one day get to a world like that, where an ML model is constantly learning and updating itself, and has seen far more patients than any individual doctor, then maybe we will see the sorts of bigger shifts that were imagined shortly after ML started to surpass human ability on long-standing difficult tasks like object recognition.

Getting there is a long, long road where we need to learn to work together with AI and figure out where the holes are in terms of robustness, earn trust over years of successful deployment, and figure out how to properly put safety bounds around more frequent updates to models.


I agree on the very long-term possibilities. However, the first problem to solve is data collection. Saving doctors and nurses from their horrible professional software and replacing it with user-friendly, well-thought-out, data-collection-friendly software would be a huge step forward.


There are certainly tons of people working on this. I think that the entrenched competitors will only be displaced by other folks who are achieving things they cannot via AI. These two problems are closely linked for sure.


I do respect your experience and take on the matter, however, let's replace this statement:

"I'm an eye surgeon and self-taught machine learning practitioner, I started to learn Python in 2016 when the deep learning hype was at his highest."

with:

I'm a [machine learning researcher] and self-taught [ophthalmologist]. I started to learn [ophthalmology] in 2016, when the [clinical medicine] hype was at its highest.

In this hypothetical situation, I bet you would instantly discount what I would have to say about ophthalmology because I clearly would not have the depth or experience to have an informed opinion on ophthalmology.

Over the past few years of ML hype, I have noticed quite a few clinicians who have self-taught some deep learning methods claim expertise in the subject area (not targeting you; a general observation). I feel like many clinicians do not understand the breadth of machine learning approaches. There is just so much to know, from robust statistics and non-parametric methods to kernel methods! Deep learning and deep generative models are by no means the only tools at our disposal.

I absolutely agree with you, though. Applied machine learning practitioners have been overselling their accomplishments, which I believe is detrimental to progress in the field.

I would highly encourage you to collaborate with ML researchers who have spent a decade or more working on hard problems. From the other side, I can tell you I gained a lot from discussing ideas with domain experts (neurologists, radiologists, functional neurosurgeons). They have insights that I could never have picked up by self-teaching.


The troubles we are seeing with medical AI integration do not stem from a lack of personal abilities, though. The problem is clearly systemic, with medical data currently being mostly unusable (for both humans and machines, although humans often believe otherwise). So you can be as good as you want at medicine or ML or both; the material support is lacking for wide applicability of medical AI.


Haha, you are perfectly right. I totally admit that I'm an amateur with a low level of ML expertise.

On the other hand, ML researchers with deep expertise are extremely hard to find, even among statisticians / programmers. I suppose that the people with real expertise are working on their own startups or in FAANG.

This leads to a situation where medical research involving ML is largely uninteresting or full of bias. It is easy to spot in the literature.


I think the incentive structure is partly to blame. Historically, quantitative PhDs in healthcare (medical physicists, statisticians, computational geneticists) have been underpaid (in my opinion). Now, with FAANG and quant funds willing to pay $400K+ comp packages, there are far more exit opportunities for these PhDs.

On a positive note, I'm so glad that clinicians are taking an interest in ML! The fact that you, a practicing ophthalmologist, were able to self-teach is really impressive. I do know that a lot of companies are looking for people like you, who have clinical experience. If you are interested, you should explore roles and potential collaborations with some of the health research teams in tech.


I have barely any biology/medicine or machine learning knowledge (though some physics, maths, and programming), yet I might have to do an internship in the field of ML applied to leukocyte classification. Where would you recommend starting?


Depends on the scope of the project. Would the goal be to come up with a better algorithm for cell classification based on histological images? Or to apply an existing algorithm to a new dataset?

The former would be quite difficult without much background in ML/Computer Vision (you would have to spend some time self-teaching basics of ML/Deep Learning and the pre-reqs for those — Basic Linear Algebra and Probability).

The latter is doable. I would recommend a very hands-on approach. Pick some computer vision object classification tutorials and code them up (using a high-level library). Make a mind map of the concepts and look them up whenever you're unclear about one. Then move on to replicating some well-cited, peer-reviewed papers. Often papers will have their code on GitHub. Try to replicate their results on their dataset. After this you would have the basic working knowledge to modify the algorithm slightly for your specific use case.
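For a feel of what those tutorials look like, here is a minimal transfer-learning sketch (assuming PyTorch/torchvision; the dataset path and ImageFolder layout are placeholders, not a real leukocyte dataset):

  import torch
  import torch.nn as nn
  from torchvision import datasets, models, transforms

  transform = transforms.Compose([
      transforms.Resize((224, 224)),
      transforms.ToTensor(),
  ])
  train_set = datasets.ImageFolder("data/leukocytes/train", transform=transform)
  loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

  # start from an ImageNet-pretrained backbone, replace the classification head
  model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
  model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

  optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train the new head only
  criterion = nn.CrossEntropyLoss()

  model.train()
  for images, labels in loader:  # one epoch
      optimizer.zero_grad()
      loss = criterion(model(images), labels)
      loss.backward()
      optimizer.step()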


The data in the database comes from a bidimensional matrix (LMNE) where leukocytes are classified by resistivity on one axis and light absorption (?) on the other. (I wonder how they managed the separation by absorption... indirectly, via centrifugation?) So I guess not really histological?

Looks like it's a new model; I have no idea if they have any ML models yet. There's also some database work.

I'm finishing a Masters degree in Computational Physics, so Linear Algebra and Probability shouldn't be an issue. (We also have an Image Processing and Analysis course.) I guess that's why they contacted us despite the fact that we don't have any ML training ?

Yeah, this is basically what I thought to do, but thank you for your advice !


Given your background, I think it would be worthwhile for you to pick up ESL [0] and read some relevant sections (supervised/sparse/linear methods). It's a great book and a good starting point for thinking about ML methods for high dimensional data.

Also, it might be useful to look at the webpages of some researchers in this space and the courses they teach [1,2].

  [0] https://web.stanford.edu/~hastie/ElemStatLearn/  
  [1] https://scholars.duke.edu/person/dunson  
  [2] https://www.cs.princeton.edu/~bee/


Thank you !

Funny (but I guess expected) to see the Markov Chain Monte Carlo method that we very recently learned in that book's table of contents! (Unless it's another MCMC?)


This is an interesting perspective. Since you're an eye surgeon, this might be a relevant question.

What do you think of the relative success of diabetic retinopathy (DR) diagnostic models, especially the FDA approval of the clinical trials of Digital Diagnostics (formerly IDx-DR) [1]? Their approach to the model architecture was slightly different from the black-box approach of other labs, in that IDx-DR's model is trained to look for clinically relevant indicators of DR. Is that a more likely route for future diagnostic AI models?

[1]: https://dxs.ai/newsroom/pivotal-trial-results-behind-the-fda...


Not OP, but an anesthesiologist and hobby programmer of 15 years. What you are describing is a fundamental flaw of the current AI effort: the data that supports AI models is mostly irrelevant to the problem. In medicine, the saying goes: 90% of diagnoses are made on patient history. Ironically, there is no reason that would change for AI-enabled systems given the same information.

So to answer you directly: yes, it's a better route until we have better information available. But it's also the wrong route to take in the long term. It would be far better to attempt to produce better supporting information.


Honestly, I don't know what to think. Ophthalmology is a great field for AI researchers (lots of images: the eye is an organ that you can photograph and analyze visually from every angle, almost like in dermatology). In ophthalmology, diabetic retinopathy is an obvious target: lots of people involved, lots of annotated pictures available, screening programs.

However, I would like to see the performance of the algorithm on different fundus cameras. It is also important to realize that diabetic retinopathy classification is very easy to learn, to the point that if screening is such a problem, it is easier to ask the person who takes the pictures (in France, a nurse or an orthoptist) to phone the doctor when he/she sees something strange on the eye fundus.


Do you happen to know of a dataset that contains a fundus picture of the back of the eye plus a regular high-res photo of the full eye?



I worked for a couple of years providing modelling services to health institutions. We did the heavy work under the hood, while medical practitioners were mostly interested in getting academic papers published.

For companies with enterprise expertise, it is difficult to enter with the right mindset, because the sales cycle in healthcare is way too long.

I can confirm that the major bottlenecks are indeed getting data into shape for modelling. Feature engineering is key and domain-specific. Forget brute-force approaches like DL.

Also, most AI practitioners in the field seem to ignore that doctors don't need to know who is sick but who is actionable. It's a completely different game.


I am a medical oncologist in a similar position to you. I agree completely. There is a lot of hype over 'AI' and big data, and plainly ridiculous statements about replacing doctors etc.

The technology is fascinating, and there have been a lot of interesting and clever innovations. But the value added for healthcare is pretty small in the end. The value added for the actual patient is even more minuscule. There is some misunderstanding about what doctors actually do and what patients want doctors to do. Fundamentally, a patient wants to see a doctor who is going to take responsibility for their medical problem, and ideally their health. When this is achieved, the outcomes are usually optimal under the constraints, and the clinician and patient are both satisfied. 'AI' as it is known today could never do this. Even if someone unveiled superhuman general AI tomorrow, it wouldn't be able to do this.

There are various niche applications of machine learning which employ interesting technology in image interpretation, and this will be a useful addition to the suite of tools already present, particularly where non-specialists need to look at a picture or radiological image, and for screening.

But otherwise I expect that in 20 years' time we will still be saying that EMR data is too difficult to deal with, and uptake of machine-learning-based software will be very low. We still don't have useful automated reporting of electrocardiograms, for example. So even sophisticated curve fitting seems to have trouble fitting these literal curves in a way that impacts clinical care.


> First, it took me a very long time to really, fully get that AI is not fundamentally different from a simple linear regression (inferring a rule from data).

I had a similar revelation. I sat through an "AI for health" presentation and basically asked, "OK, so you take a data set and then try to find a set of rules that accurately describes it... like a linear regression?"

As you said, it’s more sophisticated than that, but in essence, yes, it’s fitting a curve to data.
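If it helps to see it, here is a toy numpy sketch (everything in it is made up for illustration) where a one-hidden-layer network is trained by gradient descent on exactly the same squared-error objective as ordinary regression; it's still curve fitting, just with a more flexible curve:

  import numpy as np

  rng = np.random.default_rng(0)
  X = np.linspace(-2, 2, 100)[:, None]
  y = np.tanh(2 * X) + 0.05 * rng.standard_normal((100, 1))  # noisy curve to fit

  # one hidden layer of 16 units
  W1 = rng.normal(size=(1, 16)); b1 = np.zeros(16)
  W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)
  lr = 0.05

  for _ in range(2000):
      h = np.tanh(X @ W1 + b1)   # hidden features
      pred = h @ W2 + b2         # output: a linear model on those features
      err = pred - y             # same squared-error objective as OLS
      # backpropagate the gradient of the mean squared error
      gW2 = h.T @ err / len(X); gb2 = err.mean(0)
      dh = (err @ W2.T) * (1 - h ** 2)
      gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
      W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

  print("final mse:", float((err ** 2).mean()))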


A big part of the problem is the names we've chosen - Artificial Intelligence and Machine Learning. A more accurate, though less sexy, name would have been "Mathematical Pattern Recognition". We can do amazing things with classifiers but we shouldn't fool ourselves into thinking it represents "intelligence".


IMHO what's still lacking is the feedback loop.

Current systems are limited to ingesting input and providing output.

A big part of medical diagnosis, however, is to do follow-up exploration based on results of a previous examination.

This is a big part of "intelligence" that's still missing entirely from all approaches that I'm aware of, i.e. the ability to ask further questions/request data based on preliminary results from previous inputs.


Brains and intelligence are pretty much just pattern recognition as well - "neurons that fire together wire together"


Brains in isolation, yes.

But intelligence isn't just a "brain in a jar" situation. Intelligence requires interaction with the environment - you'd never be able to tell whether a Boltzmann Brain is intelligent from just observing it, for example.


Also because linear regression is a form of AI. (Anything done by a computer rather than a human is.)


While I don't think AI should replace humans in describing medical images, it can be used to check whether they might have missed something. Such an AI-based description should be provided only after the human finishes analyzing the image, to avoid lazy technicians just copying the algorithmic output. The goal doesn't have to be increasing accuracy and avoiding biopsies; it might be reducing the number of false negatives.


Then technicians will just put whatever diagnosis in the relevant text field and let the "AI" do their job (if the "AI" is deemed good enough). I've been working in healthcare for 15 years, and I don't have a single doubt that that's what would happen. Conversely, if the "AI" is deemed not good enough, it will be business as usual and nobody will so much as glance at the "AI" results.


My idea was:

1. Technician writes down their diagnosis

2. They submit it to the system

3. AI comes with its own analysis

4. Technician sees the outcome, they can update their assessment

5. Everything is saved into the system

If one of the technicians has too many errors in their initial assessments, it should raise a concern.


> 4. Technician sees the outcome, they can update their assessment

Will result in exactly what I described above.

> If one of technicians has too much errors in their initial assessments, it should raise a concern.

People will refuse AI oversight if there are associated sanctions. People will make every effort to game the system. Following that, you'll be left with:

a. Pay techs more, so they accept the new working conditions.

b. Fire all techs and make do with a (potentially suboptimal) AI system.

Yes, this is very much gatekeeping at work.


> I also realized that it is extremely difficult, if not impossible, to use data from EMRs out of the box. Medical data is intrinsically dirty, because humans are complex things that do not fit easily in little boxes. Hence, if you want quality data, you have to plan your data collection in advance, and motivate all your fellow doctors to check the little boxes correctly, for many years (we are talking about big data, no?). Of course, there are some exceptions, but most of the time the data cleaning process is extremely hard to perform (however, if a dedicated team of people with medical knowledge concentrated on this work, things could be different; I had to clean the data myself).

Paper charts everywhere. Post-its in some files.

I wonder how much ML could help with mammograms. If you've ever read one of these, you mostly use symmetry and check the previous ones to spot what changed.


Honest question: if a data scientist with 20 years' experience said "I also dabbled in eye surgery for 3 years" and their authoritative conclusion was that modern eye surgery is effectively trivial, how would you respond?


Tangential: I've always had an interest in medicine. A decade or so into software dev, and I'm not convinced I want to stay. I've heard a lot of doctors say they wouldn't do it all over again. I'm wondering if med school makes any sense. Maybe my idea of medicine is too influenced by Hollywood, but it does look very interesting. Would you study medicine again, given the chance to start over?


This depends greatly on where you will be studying, where you intend to practice, and your matrix of personal and financial dependencies.

Working conditions in medicine are universally poor, frankly. The flexibility during education and training is close to zero. You probably will need to move somewhere else as part of training or work. This can uproot your life and family, depending on obligations. Some fields have very prolonged training periods and are very competitive (e.g. surgical subspecialties). There is a lot of bullshit in medicine, due to hierarchies in large institutions, rigid and nonsensical regulations, issues with billing, etc... you have to have the personality to stomach this. Conversely, there is also the fact that when you are in the room with the patient and the door is closed, it is just you trying to help another person, despite everything. This is sacred, and can be intensely rewarding even in dire circumstances, an exercise in mutual gratitude you just won't find elsewhere.

The other thing I would say is that some fields in medicine are probably not something you should spend your whole life doing, at least not full time. It is just too emotionally exhausting, no matter what you do to deal with it. Not saying it is impossible to do it for 40 years, but it is a risk, you definitely sustain some kind of damage. I suspect this is responsible for 2/3rds of the seemingly outrageous negligence/misconduct cases you hear about.


Thanks for the reply. A part of me believes it would be worth going through med school just to get to "save" one life. I just don't get that gratitude feeling as a developer. I mean sure, you can work on software that could be used by millions of people, software _does_ change and affect the world, but it does not really compare to medicine. A git commit can be immensely important but there's something about medicine that makes it very appealing. On the other hand, computers do as they're told and rarely complain, stink, whine ;)

I am strongly attracted but I feel maybe it's mostly illusory.


> First, it took me a very long time to really, fully get that AI is not fundamentally different from a simple linear regression (infering a rule from data).

I'm quite surprised by this. Doesn't each AI tutorial start by stating that very thing?


I assumed that basic ML was similar to the statistics I already knew, and that deep learning was inherently different. It had to be different, given how people talked about it. It is just an illustration of how the fuss made about AI at the time affected the minds of researchers with no expertise in ML. This is dying down, fortunately.


   AI is not fundamentally different from a simple linear regression
It is fundamentally different, because it is complex nonlinear regression (unless you are using a one-layer linear network, which no one does).


> I also realized that it is extremely difficult, if not impossible, to use data from EMR out of the box.

This is my biggest complaint about the EMR systems I've used, and I've always wanted to improve it. I wonder if fellow doctors would be okay with using a simple structured language to describe data in an EMR.

For example:

  Height: 175 cm
  Weight: 70 kg

  Ethnicity: white
  Age: 40 years
  Creatinine: 0.9 mg/dl
An inference engine could use that data to calculate lots of things: simple stuff like body mass index and creatinine clearance. The patient could be automatically classified under all applicable scores given the available data.

Doctors already do this work; we even input this exact same data into calculator apps. The innovation would be recognizing this data in the EMR text and doing it automatically. I think it would be a huge gain.
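As a rough sketch of the idea (the field names, units, and the assumption of a male patient for the Cockcroft-Gault estimate are illustrative; a real engine would need far more metadata):

  record = """
  Height: 175 cm
  Weight: 70 kg
  Age: 40 years
  Creatinine: 0.9 mg/dl
  """

  fields = {}
  for line in record.strip().splitlines():
      key, value = line.split(":")
      fields[key.strip().lower()] = float(value.split()[0])  # drop the unit

  height_m = fields["height"] / 100
  bmi = fields["weight"] / height_m ** 2  # kg/m^2

  # Cockcroft-Gault creatinine clearance estimate (ml/min), male patient assumed
  crcl = (140 - fields["age"]) * fields["weight"] / (72 * fields["creatinine"])

  print(f"BMI: {bmi:.1f}")    # 22.9
  print(f"CrCl: {crcl:.0f}")  # 108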


Doctors and technologists of various stripes have been working on this since before the dawn of digital computers. Let's take height as an example and look at the metadata needed along with that data point: 1. When was it taken? 2. How was it taken? 3. Who took it? 4. What measurement system does "cm" refer to? 5. How does the concept of height relate to other clinical concepts?

https://www.hl7.org/fhir/observation.html

https://www.hl7.org/fhir/observation-example-body-height.htm...
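For a flavour of what that looks like in practice, here is a rough sketch (a Python dict standing in for the JSON; the values are illustrative, see the hl7.org links above for the actual spec) of the metadata FHIR's Observation resource attaches to a "simple" height measurement:

  height_observation = {
      "resourceType": "Observation",
      "status": "final",
      "code": {"coding": [{"system": "http://loinc.org",
                           "code": "8302-2",       # LOINC code for body height
                           "display": "Body height"}]},
      "subject": {"reference": "Patient/example"},
      "effectiveDateTime": "2021-01-08",            # 1. when was it taken?
      "method": {"text": "standing, stadiometer"},  # 2. how was it taken?
      "performer": [{"reference": "Practitioner/example"}],  # 3. who took it?
      "valueQuantity": {"value": 175, "unit": "cm",
                        "system": "http://unitsofmeasure.org",
                        "code": "cm"},              # 4. what does "cm" refer to?
  }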

The simple stuff like BMI, BSA, etc. has been calculated for a long time without anything fancy like an inference engine. The challenge is that the surface area of different calculations needed outstrips the supply of people who can interpret the clinical question, identify the source data, and encode them. A better approach is needed, and that is what folks are working towards.


There's a lot of effort being put into extracting data from unstructured reports, with some pretty nice results from what I've seen. Convincing every doc to use this type of format is likely to be impossible.


A question about this: they talk about how datasets are not representative. My question is, representative compared to what?

I’m guessing from the politics of this article that the author is from the US. Do they want people to look at data representative of the US population? That seems pretty narrow-minded.

My country has almost no black people (probably <1% though I don’t know that official stats on race are even collected) and it would be extremely expensive for us to include black people at US levels (13%) in all research.

Perhaps they are advocating we use world demographics but that would be logistically impossible for basically all researchers around the world.

Am I misunderstanding the author or does the article seem pretty ethnocentric? (is that the right word?) US-centric?

Wouldn’t it be better to just qualify/label the demographics of the research data (we used all black people, all white people, etc.)? They talk as if there is some golden ratio we should all be following, but that just isn’t the case.

In any case, I don’t think it is useful to shame researchers who are just doing the best they can with the data available, because the data they have is useful to someone.


For better or worse, most of these articles are US-centric because that's where most of the R&D money for health ML is.

The far more reasonable approach is to make sure your data contains as many demographics as possible (not just race) from your actual patient population. If there happen to be gaps, then put in at least a reasonable effort to fill them instead of shrugging and saying "it's out of our control". That, along with a per-demographic breakdown of important metrics and your point about qualifying demographics in the data (which is already done in most medical publications, including many using ML) would already be a huge improvement on what most people do now.

Ironically, it's the big tech companies that have the hardest time with this because they want to make generally deployable projects, yet don't have access to as much data as many healthcare orgs do. Frankly, I don't have much sympathy for them: a lot of this is trust issues from self-inflicted damage.


> make sure your data contains as many demographics as possible (not just race) from your actual patient population

By this you mean researchers should mirror their patient population as closely as possible (be it socioeconomic, gender, race etc) in whatever region they may operate in (which may not scale perfectly worldwide but will serve patients in that region well)?


It's a hard problem to work around which is rooted in the data available. I published this paper while I was at Google: https://www.nature.com/articles/s41591-019-0447-x

The only data we were able to get at the time was mostly from white patients. We talked to many hospitals, but many were/are reluctant to share anonymized data for research. I'm not at Google anymore, so I'm not sure of the status of the project now, but there was a real attempt to try to gather more diverse data. Unfortunately there were a lot of obstacles put up by those who have the data (hospital systems).

Fundamentally, it seems to me like there just aren't as many lung cancer screening scans out there for non-white patients as there are for white patients. How do we get around this? How do we improve on the situation? I fundamentally believe that machine learning in the long term can make medicine more accessible to more diverse groups, but not if we shoot it down out of fearmongering right away.

I agree that bias is a problem, but part of what needs to happen to get more diverse data is simply having more data available. There is real promise in this technology and if we have a one dimensional view of it ("is it or is it not dangerous because of bias/privacy") then we will fail to get past the initial humps related to fear and distrust.


As a fellow practitioner, I entirely agree. Actually, reading this article made something click for me regarding the oft-discussed and denigrated “bias in AI” always brought up in discussions of the “ethics of AI”: there is no bias problem in the algorithms of AI.

AI algorithms _need_ bias to work. This is the bias-variance trade off: https://en.m.wikipedia.org/wiki/Bias–variance_tradeoff

The problem is having the _correct_ bias. If there are physiological differences in a disease between men and women and you have a good dataset, the bias in that dataset is the bias of “people with this disease”. If there is no such well-balanced dataset, what is being revealed is a pre-existing harmful bias in the medical field: sample bias in studies.

If anything, we should be thankful that the algorithms used in AI, based on statistical theory that has been carefully developed over decades to be objective, are revealing these problems in the datasets we have been using to frame our understanding of real issues.

Next up, the hard part: eliminating our dataset biases and letting statistical learning theory and friends do what they have been designed to do and can do well.
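For anyone unfamiliar with the trade-off, here is a toy sketch (assuming scikit-learn; the cubic-plus-noise data is made up) of how too little and too much flexibility both hurt, which is why some bias is necessary:

  import numpy as np
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import PolynomialFeatures
  from sklearn.linear_model import LinearRegression
  from sklearn.model_selection import cross_val_score

  rng = np.random.default_rng(0)
  X = rng.uniform(-1, 1, size=(50, 1))
  y = X[:, 0] ** 3 - X[:, 0] + 0.1 * rng.standard_normal(50)  # cubic + noise

  # degree 1 underfits (too much bias), degree 15 overfits (too much variance)
  for degree in (1, 3, 15):
      model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
      mse = -cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_squared_error").mean()
      print(f"degree {degree:2d}: cross-validated mse = {mse:.4f}")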


> AI algorithms _need_ bias to work. This is the bias-variance trade off: https://en.m.wikipedia.org/wiki/Bias–variance_tradeoff

To be clear, statistical bias is in fact distinct from the colloquial term ‘bias’ most people use - but they can be interpreted similarly if given the proper context (which you did)


In machine learning the "bias" that relates to the bias-variance tradeoff is inductive bias, i.e. the bias that a learning system has in selecting one generalisation over another. A good quick introduction to that concept is in the following article:

Why We Need Bias in Machine Learning Algorithms

https://towardsdatascience.com/why-we-need-bias-in-machine-l...

The article is a simplified discussion of an early influential paper on the need for bias in machine learning by Tom Mitchell:

The need for bias in learning generalizations

http://dml.cs.byu.edu/~cgc/docs/mldm_tools/Reading/Need%20fo...

The "dataset bias" that you and the other poster are discussing is better described in terms of sampling error: when sampling data for a training dataset, we are sampling from an unknown real distribution and our sampling distribution has some error with respect to the real one. This error manifests as generalisation error (with respect to real-world data, rather than a held-out test set), because the learning system learns the distribution of its training sample. Unfortunately this kind of error is difficult to measure and is masked by the powerful modelling abilities of systems like deep neural networks, who are very capable at modelling their training distribution (and whose accuracy is typically measured on a held-out test set, sampled with the same error as the rest of the training sample). It is this kind of statistical error that is the subject of articles discussing "bias in machine learning".

Inductive bias has nothing to do with such "dataset bias" and is in fact independent of it. Rather, inductive bias is a property of the learning system (e.g. a neural net architecture). Consequently, it is not possible to "eliminate" inductive bias; machine learning is impossible without it! The two should absolutely not be confused: they are not similar in any context and should not be interpreted as in any way similar.


> How do we improve on the situation?

Given economic realities and racist history (consider what happened in Tuskegee as one example), in the US you would need to provide free screenings to poor people under circumstances that convinced people of color they can trust you while signing the documents to let you have their data.

This is a fairly high bar to meet and one most studies are probably making zero effort to really meet.

I'm part Cherokee and I follow a lot of Natives on Twitter due to sincere and legitimate interest in my Native heritage, but the world deems me to be a White woman so I am sometimes met with hostility simply for trying to talk with Native people while looking too White to be trustworthy. Prior positive engagement with specific individuals seems to carry little weight and be rapidly forgotten. The slightest misstep and, welp, "she's an evil White bitch, here to fuck over the Natives -- like they always are!"

I'm not blaming people of color for feeling that way. I'm just saying that's the reality you are up against.

As someone who spent some years homeless and got a fair amount of "help" offered of the "God, you clearly are an idiot causing your own problems and need a swift kick in the teeth as part of my so-called help" variety, I really sympathize with such reactions.

White people often have little to no understanding of the lives of people of color and little to no desire to try to really understand because really understanding it involves understanding systemic racism in a way that tends to make Whites very uncomfortable. It veers uncomfortably close to self-accusation to honestly try to see how the world is experienced by such people.


Note that lung cancer screening is covered by Medicare and thus already free for anyone over 65 who smoked a pack a day for 30 years (or the equivalent, i.e. more in less time).

My understanding is that there are many reasons screening is not deployed more widely, one being the fact that it requires a 40-minute discussion with a physician, and physicians in communities in need have very limited time.

Then there is the issue of getting people to show up and take part in preventative care which is itself tricky.

In any case, it was not something we were in a position to do much about as a small AI research team. Where I work now there is also a focus on trying to address this issue by reaching out to more hospitals to gather more diverse data, but there are still a lot of roadblocks to sharing data we have to work through and it’s a very slow process.


I vaguely recall some article about bathtubs being given to poor people in Appalachia who had no running water (in like The Great Depression of the 1930s). They would put them on the front porch and use them to store coal, which got mocked by others as them being "ignorant fools" who didn't understand what a bathtub was for rather than seen as evidence that bathtubs are essentially useless for bathing if you lack running water.

If we nominally have free care available to everyone but there are systemic road blocks that make it essentially impossible for most people of color to access, this is one of those things that falls under "White-splaining."

"Oh, you just do x and it's free" only x is nigh impossible to accomplish if you aren't a fairly well off White person is one of those things that falls under "systemic racism that Whites don't really want to understand."

There's a classist forum that charges $5 for membership and claims this is merely to prevent bots from signing up and is not intended to keep out poor people and all you have to do is ask and they will give you a membership for free if it's a financial hardship. And then the mods make sure to be openly assholish to poor people so poor people won't ask.

When I went to gift a free membership for a "sock puppet" account to an existing member who had said in MeTa she couldn't afford one but needed one for privacy reasons, the mods were quite assholish to me about the entire thing every step of the way in every way possible, including telling me after I had paid "She could have a free second account now for that purpose just for asking" -- something they also hadn't volunteered to her when she said in MeTa she wanted one and couldn't afford it.

It's important that it was in MeTa because that's the only part of the site the mods are required to read all of, so you can't say they just didn't see it. They saw it and declined to inform her "Oh, that's also free for the asking if it's a financial hardships for you. That policy is not only for initial accounts. If you need a sock puppet for privacy and safety reasons, just message us." And then offered to refund me my $5 that I had paid to gift her the account while I was still homeless.

They also did not offer to hook me up with a second account for free. I had eventually paid for a second account for myself while homeless and they didn't offer to refund me $5 at that time either.

I had used the ability to leave anonymous comments a few times and they messaged me to let me know I was a bad girl and a fuck up who was misusing the system as most people only ever left one or two anonymous comments in the entire lifetime of their membership. Nowhere was there any instructions that you should only do that once or twice. It was just the social norm that most people who participated a lot and had privacy concerns had a second sock puppet account for that purpose.

Rather than going "Oh, she's extremely poor and can't afford a second account because she's homeless" they treated me like I was misbehaving. I had no idea I was doing anything "different" until then in part because I was shunned socially because of the extremely toxic classist environment that was openly hateful to me where the mods actively encouraged other members to bully me.

People of color are painfully well aware that the rules are often de facto different for them. People of color often are not notified that X can be had for free or are oblivious to the ways in which it's not really free if you don't already have access to a great deal of infrastructure that Whites have access to on a routine basis and people of color often simply do not have that infrastructure already in place, much like people in Appalachia who can't take a bath even if you give them a free tub because their shack has no running water.

Saying "It's already free...if you can check this box that requires a personal jet to check off" means it's not actually free to most people. It's only free to the current Haves.

Such policies mean that a lot of "freebies" in the US amount to perks for the mostly white Haves, not basic healthcare for all people, regardless of socioeconomic status or skin color.


Yeah... I was just trying to explain what the challenges are in hopes that you have a better understanding of what it will take to fix it. For example, the requirement that you have a physician explain things I think should be relaxed as much as is feasible. I'm not blaming people who are poor for not having access to healthcare. Also... I'm not white.


I'm just talking. That's it.

Have a good day.


I certainly see and empathize with where you are coming from.

However, I would like to add that it kind of makes sense that you'd have more white people with scans available. Focus on the USA for a second (and note that this likely applies elsewhere too, since screening programs are really only in full force in developed countries, which, surprise surprise, are predominantly white): non-white patients don't get screened as much, non-white patients don't go to the doctor as much, and non-white patients are inherently fewer than white patients.

I agree that finding a good way to get anonymized data is going to help future endeavors, but we do need to keep in mind the players involved in getting and using that data.

And of course the ultimate goal, to improve health regardless of race, social class, wealth, etc.


> Non white patients are inherently fewer than white patients

Look at global population statistics. While there are no official global figures for ethnicity, we can make some simple inferences based on continental distribution [1]:

North America and Europe combined (17.19%) are barely as much as Africa (17.2%), and this ignores the fact that a good part of the North American population is non-white. There is nothing "inherent" about there being fewer non-white patients. The issue is imbalanced access to health care and screening programmes, but that is not inherent.

This is without even mentioning that Asia accounts for almost 60% of the global population.

[1] https://en.wikipedia.org/wiki/Demographics_of_the_world#2020...


I think I am agreeing with you, based on your comment about imbalanced access to healthcare and screening programs. I'm saying the same thing: data collection for CT scans is really only happening in countries that are predominantly white, not that it isn't possible for other countries to implement programs and collect that data for training purposes.

Edit: unless of course you have found large databases that suggest my intuition is wrong?


> The only data we were able to get at the time was mostly white patients. We talked to many hospitals but many were/are reluctant to share anonymized data for research

I joked to someone from a small country with public healthcare that the best thing they could do is release as much anonymized, high-quality data as possible and essentially get every machine learning algorithm tuned for their population for free.

It's like adding your code to a popular CPU benchmark.


Honest question: does it really matter for lung cancer? Is there much difference between races in this particular field?


How would you know without the data? There are plenty of medical conditions with wildly divergent rates and pathophysiologies based on human genetics.


I don't think the burden is on us to prove a negative.

It'd be way better to think of it in terms of genetic markers instead of races. Race, in general practice, is a social construct based on colour, appearance and culture. It's a level of abstraction away from the actual genetic data, one we don't need given our level of technology.

We could dig right into the genes and throw away our outdated notions. That's where the actual useful detail is. Who cares about the color of the person if they have the same number of huntingtin repeats, you know?


"Race in general practice is a social construct based on colour, appearance and culture"

That is a very dangerous lie. If you are a doctor, I pray for your patients' sake that you are not taking a colorblind approach, because their lives could be at stake. It just so happens that the way people look correlates with the diseases they get, and sometimes don't get. Sometimes you can't sequence a person's genome because it's too time-consuming. Sometimes we don't even fucking know which genes are responsible.

Your example of Huntington's is quite frankly the outlier, and a facile one (you know it is, because that's what they teach in 8th-grade biology class). And even that's a shitty example, because the severity only roughly correlates with the number of tandem repeats. Any given person can tolerate a load of misfolded huntingtin, and their tolerance is probably governed by a raft of other conditions that are familial and do track with race (Venezuelans are for some reason more sensitive to huntingtin repeats). Other conditions too: transthyretin amyloidosis is more common among the Japanese and Finns and really rare among Africans (except for one variant that causes congestive heart failure; in Finns and Japanese it presents as liver failure), etc., etc.


I agree with you, we shouldn't ignore the very real strong correlation for use in medicine right now and in the future.

I reckon I've thrown you with the general practice comment, which I intended to mean "everyday use" and not as a "general practitioner of medicine".

Clearly this is something you feel very strongly about, but I think you'd get further with less belligerence. I feel like I've unknowingly wandered into the office of reviewer #2 and he's flown off the handle at something, and I'm unwilling to match his level of aggression.


The aggression is not to convince you. It's to help point out how dangerous your naive reading is to other people. Don't take it personally.


Good luck navigating the world as an asshole. Let me know if it pays off for you.

I've taken it very personally. I suspect others in your life do too.


I'm doing pretty good actually. Hope that ruins your day.


I spoke with one of the doctors who designed the criteria for determining whether a lung nodule found outside of screening is cancer. He mentioned that they very nearly added a different criterion for Asian women, but were too worried about the potential backlash.


> Fundamentally, it seems to me like there just aren't as many lung cancer screening scans out there for non-white patients as there are for white patients.

Just to clarify: you mean for the USA alone? It seems to me that part of the challenge is recognizing that the research needs to take place beyond just Western countries, or acknowledging such research where it is already occurring. Understandably, many people would not be comfortable with Google accessing patient data from around the world, so the next challenge is how diverse, global data can be protected so that important medical research can take place without any compromise of privacy.

The challenge is hard but surely not impossible: this was the approach taken by the Oxford-AstraZeneca team (and others), which conducted part of its COVID vaccine trials in South Africa to test efficacy on non-white populations.


At the time we were conducting this research lung cancer screening existed mostly in Europe, China and the U.S.

Note that if we had to conduct a 5-year, multi-site lung cancer screening trial ourselves in addition to doing the research, there would be basically no way of getting private funding for it. Those trials are very, very expensive and take several years to reach a conclusion.

Add to that the potential optics of Google “experimenting” in developing countries and the blowback risk from that...


Non-clickbait headline: medicine has some data collection biases (if your aim is to represent US demographics).

Long-existing non-ML methods suffer from this same data collection bias, but for some reason the author still puts AI in a special, mysterious place on a pedestal. They use innuendo and anecdote to assert, without evidence, that these are systemic problems unique to AI. Innovation will never happen if new technology has to perform perfectly; it only has to perform better than existing methods.


I'm of two minds about this article. It does a reasonable job of enumerating the issues with naively deploying ML in a healthcare setting. However, these articles are becoming a dime a dozen and there is little actionable talk on how to discover or mitigate these issues at a level that practitioners can use.

To your point about the bar for new tech, I agree that singling out AI/ML is a cheap shot and more speculative FUD without concrete evidence. That said, we have seen no shortage of hucksters and self-aggrandizing members of the "move fast and break things" crowd trying to treat medicine as a beginner-level Kaggle challenge. This has become particularly egregious and noticeable during the pandemic [1]. The respective lack of medical and technical literacy among programmers/"data scientists" and healthcare providers/admins is just more fuel for the fire.

[1] https://www.reddit.com/r/MachineLearning/comments/fni5ow/d_w...
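To put one actionable item on the table: even a crude mitigation like reweighting underrepresented groups rarely gets spelled out in these articles. A minimal sketch on synthetic data (sample weighting is a blunt instrument, not a fix for biased collection):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))             # placeholder features
    y = rng.integers(0, 2, size=1000)          # placeholder labels
    group = rng.choice(["a", "b"], size=1000, p=[0.9, 0.1])

    # Weight each sample inversely to its group's frequency so the minority
    # group is not drowned out during fitting.
    freq = {g: (group == g).mean() for g in np.unique(group)}
    weights = np.array([1.0 / freq[g] for g in group])

    clf = LogisticRegression().fit(X, y, sample_weight=weights)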


Feels like a rant about health care bias that was updated to include some mention of ML, but is still mostly about issues that existed previously.


The current effort toward medical AI is heading in the wrong direction: we're trying to make machines adapt to the field, when we should be adapting the field to make it available to machine-aided reasoning. Problem: almost nobody understands both medicine and machines well enough to bridge the abysmal communication gap separating AI/CS and medical professionals.


I think your reasoning is dubious at best and thoroughly impractical. Machines aid our work; we don't work to aid machines. The problems in medicine are hard because biology is hard. I don't think we understand the depth of knowledge we have yet to uncover. Not really. We intuit it, but we don't know.


The medical field is currently supported by clinical intuition much more than by hard data, or anything that fits the common definition of "science". We should absolutely work to make medical information systems available to machines. In fact, "AI" won't work well until this happens.
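Concretely, some of the plumbing for "available to machines" already exists in the form of HL7 FHIR, even if adoption is patchy. A minimal sketch against the public HAPI FHIR test server (a sandbox populated with synthetic records, so the endpoint and fields here are purely illustrative):

    import requests

    # Public HAPI FHIR R4 test server (synthetic data, for experimentation).
    base = "http://hapi.fhir.org/baseR4"
    resp = requests.get(f"{base}/Patient", params={"_count": 5}, timeout=30)
    resp.raise_for_status()
    for entry in resp.json().get("entry", []):
        patient = entry["resource"]
        print(patient.get("id"), patient.get("gender"), patient.get("birthDate"))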


One problem is mistaking statistical analysis for learning, and data for knowledge.


It's only a problem if it doesn't work.

I'm happy with imperfect protein folders that beat SOTA by 100%, and with DALL·E drawing the radish-in-a-tutu-walking-a-dog and a harp-snail on request. I'll be just as happy with a slightly unexplainable medical diagnosis that still beats the average expert in the field. And getting good, unbiased data for these algorithms is going to happen eventually.


This includes some true things about data collection issues, but I cannot agree with the main thesis that ML algorithms are about power. If anything, they shift power to the patients, because now the decisions can be checked and questioned at many levels. An algorithm will not send you away because it is too tired to take your complaints seriously.

So algorithmic decision making should be made as bias-free as possible, but there is no way that, across the board, it will be more biased than humans are now. If people care about marginalized communities, they should push with everything they've got for, not against, ML decision making.
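That checkability is concrete: once predictions and outcomes are logged, anyone can compute, say, the false-negative rate per patient group, a question you cannot put to a tired human in any systematic way. A minimal sketch (the column names are assumptions about how such a log might be coded):

    import pandas as pd

    # Hypothetical prediction log: patient group, true outcome, model output.
    log = pd.DataFrame({
        "group":  ["a", "a", "a", "b", "b", "b"],
        "y_true": [1, 1, 0, 1, 1, 0],
        "y_pred": [1, 0, 0, 0, 0, 0],
    })

    positives = log[log["y_true"] == 1]
    fnr = 1 - positives.groupby("group")["y_pred"].mean()
    print(fnr)  # group a: 0.50, group b: 1.00 -- a flag to investigate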


This reminds me of how ML forms of AI are indirectly forbidden around here for government use, due to their lack of transparency.


"The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency." -- Bill Gates

In the lifetime of my adult sons, the world stopped being predominantly agrarian and rural and hit a point where more than half of all people on the planet live in cities. It wasn't hugely long ago -- a hundred or two hundred years -- that most people lived in little villages or tribes, didn't travel all that far, and knew most of the people they dealt with.

The local doctor -- or medicine man -- was often one of the older, best educated and wisest locals. He tended to come to your home with a little black bag, and in the course of walking through your house to check on you if you were ailing and unable to get out of bed, he saw a great many things about your life without having to ask.

This informed his conclusions about what was wrong and about how to treat it. And it did so in a way that was largely invisible to the recipients of care.

Doctors likely often didn't explain that the house was filthy or the spouse was obviously abusive. Topics like that tend to be socially unacceptable, and people don't like being criticized in that way. But if someone smarter, more experienced, better educated and wiser walks through your life and then prescribes something "for your health", and he has a track record of fixing the problem, you do as you are told.

And then modern medicine invented a lot of diagnostics and whatnot, and office visits by the patient replaced home visits, because we haven't invented a Tricorder that can replace the little black bag and let you bring all that diagnostic power with you.

Human health is no longer treated as the logical outcome of all your life choices, and your physician is no longer the wisest person you know giving you good advice that takes into account a great many factors you never talked about with him. People get treated like specimens in a petri dish, in a way that implicitly denies that their physical state of health is the sum total of all their life choices.

In tribal cultures, medicine men were typically people who tended to both spiritual and physical health. The two were not viewed as separate from each other.

Medicine has become commercialized in a way that doesn't really serve the interests of the patient and if you try to point that out you are likely to be written off as some paranoid fruitcake and conspiracy theorist.

There are a lot of good things about modern medicine, but there are also a lot of systemic issues and this article is correct to point out that AI tends to magnify those sorts of things.

Last, health is best understood as a moving target in 4D. Data capture does a poor job of approaching it that way and I'm not aware of any programs that are well equipped to do a good job with that.

Human doctors were historically put on call for up to 24 hours at a time as part of their learning process in part so they would see a patient's condition evolve over time while the doctor was still young and healthy enough to endure this grueling process. Having seen it for a time as part of their training, they retained that knowledge when they were older and could recognize a stage of a moving target.

I don't know how much that is still done, but I don't think we really frame AI in that way. I don't know how we would get there from here either. I still haven't managed to learn to code, what with being too busy with my own health issues all these years.


It's a lot like the emergence of scientific forestry as described in "Seeing Like a State" - instead of local knowledge and care/attention to individual circumstances by a generalist, the field has become standardised and based around things which can be easily measured.


Exactly. The article blames ML, but these issues of concentration of power and of knowledge are at least as old as State-istics!


> Human doctors were historically put on call for up to 24 hours at a time as part of their learning process in part so they would see a patient's condition evolve over time while the doctor was still young and healthy enough to endure this grueling process. Having seen it for a time as part of their training, they retained that knowledge when they were older and could recognize a stage of a moving target.

Closer to 36 hours at a time, I am sad to report


> Human doctors were historically put on call for up to 24 hours at a time as part of their learning process in part so they would see a patient's condition evolve over time while the doctor was still young and healthy enough to endure this grueling process. Having seen it for a time as part of their training, they retained that knowledge when they were older and could recognize a stage of a moving target.

Isn't it more because, unlike most other fields, the medical field systematically refuses to adopt best practices for handing over information? They prefer a heavily sleep-deprived resident they can abuse rather than a better way to document and pass on information.



