I do think AI has a chance to make healthcare, for cancer and other diseases, a lot more proactive.
Even though we know prognosis is much better for cancer, and many other diseases, if you catch it early, we do essentially nothing to catch it early. My understanding is that this is because:
1. Administering regular MRIs, blood panels, etc. is expensive, in terms of the initial data collection
2. It’s also expensive, in terms of getting healthcare professionals to analyze the results
3. People often get the analysis wrong, in terms of both false negatives and false positives
4. False positives can lead to even more scans, analysis, etc., costing even more money
It does seem possible to me that specialized AI could get much better than humans at interpreting this data, doing it very cheaply (solving problem 2) with far fewer false negatives and false positives (solving 3 and 4). And it’s even possible that AI powered robotics gets great at collecting data in the first place, bringing down the cost of problem 1.
Basically, “AI invents cures for different types of cancer” seems like a moonshot, but “AI makes proactive medical scanning cheap and effective, thus greatly improving cancer outcomes” seems like a real possibility.
While we have some proactive screening for some types of cancer, the status quo for many types of cancer/patients is “wait until the cancer has spread enough that the patient is experiencing significant symptoms, with no systematic way to detect cancer early.” This is clearly not great. We’re accepting this for practical reasons today, but I do think AI has a significant chance to greatly improve the status quo here.
There are also downstream consequences of ordering tests aside from cost; not all tests are harmless. As an example, regular screening for prostate cancer isn't recommended as much, partially because it is often so slow-growing that people die of other causes before the cancer ever causes problems, and because the definitive test is a biopsy, which is somewhat invasive. Rates of complications are relatively low, but it becomes a cost-benefit consideration (again, irrespective of monetary cost) of whether those risks are worth catching something that you may not even want to bother treating.
Sure, of course you would make practical decisions about what kinds of tests to administer, and not proactively administer the tests that have significant negative side effects.
For the other point, personally I don’t really buy the argument of “it’s better not to know you have cancer X, because it might end up being low impact.” If we had excellent regular screening, yes detection of low impact cancers would become a lot more common, but I think people’s perception of them would change too. If it became a common thing for cancers to be detected, but the detection could reliably say “this is likely low impact, we should just keep an eye on it but not treat it”, this would be a lot less scary. It would become normalized IMO. Cancer diagnoses are partly so scary right now because we’re often mostly catching cancers that have progressed and are causing symptoms, so the public perception is rightly “cancer diagnosis = very scary.”
> Sure, of course you would make practical decisions about what kinds of tests to administer, and not proactively administer the tests that have significant negative side effects.
The harm is from investigating the screening test result and not the test itself.
> If it became a common thing for cancers to be detected, but the detection could reliably say “this is likely low impact, we should just keep an eye on it but not treat it”, this would be a lot less scary
This is already the case for some like prostate cancer and certain lymphomas.
> Cancer diagnoses are partly so scary right now because we’re often mostly catching cancers that have progressed and are causing symptoms
The most aggressive cancers are also the least likely ones to be diagnosed by screening due to growth rates, screening intervals and diagnostic test limitations.
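To put rough numbers on that growth-rate point, here's a back-of-the-envelope sketch (the detection thresholds and doubling times below are illustrative assumptions, not clinical values): the faster a tumour doubles, the shorter the window in which a fixed screening interval can catch it before symptoms appear.

```python
# Rough sketch: why fixed screening intervals tend to miss fast-growing tumours.
# All numbers are illustrative assumptions, not clinical values.
import math

DETECTABLE_CELLS = 1e8    # assume ~1e8 cells is roughly screen-detectable
SYMPTOMATIC_CELLS = 1e10  # assume ~1e10 cells is roughly when symptoms appear

def detectable_window_days(doubling_days: float) -> float:
    """Days a tumour spends in the 'detectable but pre-symptomatic' window."""
    doublings = math.log2(SYMPTOMATIC_CELLS / DETECTABLE_CELLS)
    return doublings * doubling_days

for doubling_days in (30, 90, 300):  # aggressive, intermediate, indolent
    window = detectable_window_days(doubling_days)
    print(f"doubling time {doubling_days:>3} days -> "
          f"screen-detectable window ≈ {window / 30:.1f} months")
```

With these made-up numbers, a tumour doubling every 30 days is only screen-detectable for about half a year before it presents clinically, so a two-year screening interval will usually miss it, while the indolent one is almost guaranteed to be screen-detected - which is also the confound discussed further down the thread.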
The harm can be from the initial screening too. The lifetime risk of complications from routine colonoscopy is around 1.6%. The lifetime risk of colorectal cancer is 4-5%.
So already before investigating the result, there's a very real consideration whether increasing the number of colonoscopies is likely to be a net benefit.
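A crude expected-value sketch of that trade-off, using the 1.6% and 4-5% figures above; the mortality-reduction and complication-severity weights are purely assumed for illustration:

```python
# Crude expected-value sketch of the colonoscopy trade-off described above.
# The 1.6% complication and 4-5% lifetime CRC figures come from the comment;
# the other weights below are assumed for illustration only.
lifetime_crc_risk             = 0.045  # ~4-5% lifetime colorectal cancer risk
crc_mortality_if_unscreened   = 0.40   # assumed share of cases that prove fatal
screening_mortality_reduction = 0.50   # assumed relative reduction from screening
complication_risk             = 0.016  # ~1.6% lifetime complication risk
serious_complication_share    = 0.10   # assumed share of complications that are severe

deaths_averted_per_person = (lifetime_crc_risk * crc_mortality_if_unscreened
                             * screening_mortality_reduction)
serious_harms_per_person = complication_risk * serious_complication_share

print(f"expected CRC deaths averted per person screened: {deaths_averted_per_person:.4f}")
print(f"expected serious complications per person:       {serious_harms_per_person:.4f}")
```

Even with these made-up weights favouring screening, the two numbers land within an order of magnitude of each other, which is exactly why the calculus flips quickly once you screen lower-risk populations.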
Colonoscopy is confusing because it is both screening and diagnostic/investigating.
Most if not all of the complication/risk (perforation and major bleeding being the ones of note) is from the polypectomy / biopsy part of the colonoscopy.
The path to being able to say "low impact, don't worry" will be quite rocky and possibly involve a lot of painful treatment for patients. If you have a very different detection surface, you would not initially know which findings are low impact, for example.
We found out about my daughter's type 1 diabetes purely by accident. I found a blood glucose test kit while doing some spring cleaning and asked the family to gather around to check our sugar. It was really just a joke but I thought it would be fun. Cue three results around 100 and one at 270. We tested again the next day and it was 290.
Finding type 1 diabetes this way in a young teenager was so absolutely out of the norm that a major children's hospital had no idea what to do with her. They admitted her because it was protocol but it was completely unnecessary and we had to explain how it happened at least ten times while we were there.
Yeah that’s wild! I do think proactive medical screening is something most medical systems have mostly given up on, other than in very targeted ways, for very specific diseases in very specific high risk populations. But I don’t think this is because it’s a fundamentally bad idea, I think it’s more that it’s impractical right now. It does seem to me that AI has a chance to make it practical.
If you screen a lot of people for a lot of things, you will find a lot of things, but not all the findings will mean something or require action. The initial ramp up of huge "unwarranted" screenings will create a lot of pain until we/AI figures out when something warrants attention.
The problem with that is that even "essentially no risk of false positives" starts adding up when you do millions of tests every year.
If those tests are done on demographics where the chance of a true positive is also very low and the difference in risk profile between catching it during such screening vs. waiting until the patient discovers it is not very significant, it can take a very low rate of complications before it becomes a problem.
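To put numbers on that (the prevalence and test-accuracy figures below are assumed for illustration): even a test with 99% specificity produces mostly false positives when the condition is rare in the screened group.

```python
# Positive predictive value under mass screening - illustrative numbers only.
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Assume a very good test: 90% sensitive, 99% specific.
for prevalence in (0.05, 0.005, 0.0005):  # 1-in-20, 1-in-200, 1-in-2000
    print(f"prevalence {prevalence:.2%}: "
          f"PPV = {ppv(prevalence, 0.90, 0.99):.1%}")
```

At a millions-of-tests scale, that last line means the overwhelming majority of positives sent on for follow-up are false alarms.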
But, yes, we do limit that, and that is a major reason there are very few mass screening programs.
Then the selection of tests might get very small, and we simply don't even know what all might be relevant if doing billions and billions of tests on a lot of things - a lot of possible weird things to trip us up.
It is a fundamentally bad idea to do this without a very specific understanding of the risks. E.g. many programs intended to expand mass screening for breast cancer were reconsidered after it became clear that it was not a given that they would provide a net benefit, because even a very low level of risk from screenings and subsequent follow-ups can make mass screening net harmful when it is applied to groups where it is unlikely to save many lives.
Years ago, doing a similar thing, I found out about my type 2 diabetes when I was 24. It has been a lifesaver, since I've been able to manage it before any complications.
> Even though we know prognosis is much better for cancer, and many other diseases, if you catch it early
This is, to some extent, misleading.
I mean, earlier treatment is beneficial, but there's a significant confound. All else being equal, if a cancer is less aggressive and slowly growing it is more likely to be detected early.
Put in other terms, the cancers detected earlier by screening are a very different population from the cancers detected late, after progression.
Probably hyperbole, but a colleague told me about an 80/20 distribution: as the population in the West ages, decreasing amounts are spent on substantial life extension or quality-of-life improvement.
The substantial gains come from the basic, good old medical care invented 100 years ago, while dizzying amounts are spent on prolonging lives for very, very few years, often very late in life - efforts that come very close, in effect, to having done nothing, i.e. almost performative.
I don’t know about that, speaking to oncology, as I work in an NCI-designated cancer center (i.e. somewhere that spends dizzying amounts), and it skews younger than you might think these days.
I’m not sure what you mean by “very, very few years”. As a hypothetical would prolonging life for ~3-7 years in a 40-50 year old be considered “almost performative” to you?
“Good old medical care” often means 3-6 month survival for these patients.
Yes. The amount we spend to keep people alive who have little to no hope of ever recovering is immense. Of course it is cruel and leads to myriad bad outcomes if you were to even attempt to have a discussion about trying to change that (it is the slipperiest of slippery slopes).
There's probably no way to actually do anything concerted about it without turning society into Logan's Run, but having gone through it with a grandparent and a parent, it is clear something is broken at the end of life.
Probably. Look at the people in the hospital - they’re old. Inpatient costs are astronomical, and seniors with poor social supports end up hospitalized at great expense with issues whose root causes are easily prevented… like dehydration.
Being old is a fairly long part of life nowadays. Old is not the same as hopeless or almost dying.
My grandma had a melanoma at the age of 74, which is "old" by most human standards. It was located on her earlobe and an operation helped her get rid of it.
She then lived to be 90, most of that extra time either fully or partially self-sufficient. Only in the last months in her life she really deteriorated.
Basically, she gained almost a fifth of her life by that single operation performed when she was already old.
That’s awesome. I’m not suggesting that older folks not get care.
But because of way our system works, we’ll happily pay $300k to hospitalize an otherwise healthy 70 year old who is dehydrated and develops serious problems that could be solved by an aide or helper that would cost $20-30k.
> I mean, earlier treatment is beneficial, but there's a significant confound. All else being equal, if a cancer is less aggressive and slowly growing it is more likely to be detected early.
Wow! That makes so much sense! I had never considered this!
Sure, but my understanding is that for many types of cancer, detecting that specific cancer early does make a big difference. It can be the difference between a single, minimally invasive surgery to remove a tiny tumour that hasn’t spread, which can be effective even without chemo/radiation/etc., and stage 4 cancer that has spread a tonne, where even with extensive chemo/radiation/etc., your chances aren’t good.
> Sure, but my understanding is that for many types of cancer, detecting that specific cancer early does make a big difference.
The problem is, this is hard to measure. We know that "detected early" correlates with better long term outcomes. But "early" means "smaller and with less spread" which in turn is strongly correlated with "growing slower and spreading less".
We've had unpleasant surprises where e.g. extending screening to earlier ages detects more cancers but doesn't decrease the number of people dying from that type of cancer because of these confounds.
How frequently do you want to screen? Monthly? Weekly? Some also have no known effective treatment - maybe some super early detection helps, but maybe not.
Cancer isn’t one thing and AI is an important tool that will accelerate treatment and drug development.
My late wife detected a mole that was melanoma in 2019. She was within months of being cleared for observation in 2023 when two brain tumors were detected. Despite the best of care, she was gone in 6 months.
If her initial treatment had been in 2024 instead of 2019, it’s 80% likely she would be around for another decade or more. That’s how fast new treatment options are coming to market, and data analysis with AI and other tech is improving it. New trials are using platforms like Moderna to provide custom vaccines that should reduce treatment side effects.
While the hyperbole of the media is annoying, the impacts of new tech to identify genetic vulnerabilities in cancers is near miraculous.
I was speaking specifically towards screening more and detecting earlier. They have utility, but recent evidence seems to indicate that it's not nearly as much as the public assumes.
No worries. I share it frequently here because I think the personal connection underscores the import, which sometimes gets lost. It's easy to think about "cancer" in the abstract, and sometimes we miss that it's a mother, a wife, a friend -- I know that I did.
And at the end of the day "cancer" is a category, not a thing. Sometimes (prostate cancer) early detection and intervention is bad, as the cure is worse than the disease! Other times (ovarian cancer), early detection only happens accidentally while looking for something else entirely, as symptoms typically don't present until you've hit Stage 4.
Can you elaborate a little on what's new? Someone close to me had a melanoma scare on almost the same time frame, and had a lot of difficulty finding doctors who would take her seriously.
In the case of my wife, she would have been given a round of nivolumab or keytruda. These are immunotherapies that enable your immune system to kill the cancer cells.
You have to advocate very heavily. With melanoma, I wouldn’t mess around and seek at a minimum second opinions from the nearest major cancer center.
False positives and negatives are unlikely to be improved significantly by AI. They are mostly based on a trade-off between catching more and making sure those you catch are accurate, plus the limitations of the test itself. Sure, AI might marginally make some tests better by interpreting more variables more reliably, but it’s going to be marginal rather than solved.
For example, with PSA, a blood test for prostate cancer, it doesn’t matter how much AI you throw at it; it’s just not a great test. It’s commonly elevated outside cancer, and is normal in a significant percentage of prostate cancers. You just have to deal with its limitations.
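A toy simulation of why a smarter reader doesn't fix a weak biomarker (the distributions below are invented for illustration, not real PSA data): when the cancer and non-cancer values overlap heavily, every cutoff just trades sensitivity against specificity.

```python
# Toy model: overlapping biomarker distributions mean no threshold is good.
# Distribution parameters are invented for illustration, not real PSA data.
import random
random.seed(0)

healthy = [random.lognormvariate(0.7, 0.6) for _ in range(100_000)]
cancer  = [random.lognormvariate(1.3, 0.6) for _ in range(100_000)]

for threshold in (2.0, 4.0, 6.0, 10.0):
    sensitivity = sum(x > threshold for x in cancer) / len(cancer)
    specificity = sum(x <= threshold for x in healthy) / len(healthy)
    print(f"cutoff {threshold:>4}: sensitivity {sensitivity:.0%}, "
          f"specificity {specificity:.0%}")
```

No cutoff in that sweep is both sensitive and specific; a better interpreter of the same single number can only pick a different point on the same curve, not escape it.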
Huh? That’s the whole point of cancer screening, which we do a lot of in the West. The benefits of which remain hotly debated. New tests are also constantly being researched.
That initial statement may have been a bit too strong, but I did clarify it later in the same comment:
> While we have some proactive screening for some types of cancer, the status quo for many types of cancer/patients is “wait until the cancer has spread enough that the patient is experiencing significant symptoms, with no systematic way to detect cancer early.”
Maybe it depends on where you are, but where I am (Vancouver BC, Canada), the above is true. Proactive cancer screening is quite limited here. I believe it's limited to screening for cervical, breast, colon and prostate cancer, plus lung cancer for 55+ year old smokers who smoked for at least 20 years. And for even those specific cancers that are screened for, availability is limited by risk factors like age, e.g. you can't get screened for colon cancer until you're 50, that sort of thing.
There are so, so many other types of cancer and non-cancer diseases/conditions that we do not screen for at all. Plus, even for the cancers we do have screening for, it's often not frequent enough to catch more aggressive variants early - a lot of these screenings are only once every ~2-5 years. The idea of, say, proactively taking MRIs, blood panels, etc. on people, looking for early stage cancer (and other conditions) throughout the body is not something that's available. You can't even get an annual physical with a family doctor anymore, there's only screening for a handful of specific diseases, and only once you reach certain ages/risk factors.
Cancer screening starts a bit earlier for women, due to higher risk of breast and cervical cancer, but if you're a man under 50 in BC, you're really never getting any sort of medical test done ever (even simple things like blood panels) unless you go in to a doctor's office for a specific condition. I have MANY friends and family members who've been diagnosed with cancer in Canada, and almost none of it has been caught in regular screening, because the screening is so limited; it's almost always been caught by the cancer spreading enough that the person goes to their doctor due to symptoms.
> Plus, even for the cancers we do have screening for, it's often not frequent enough to catch more aggressive variants early - a lot of these screenings are only once every ~2-5 years.
Screening intervals are based on doubling time.
> The idea of, say, proactively taking MRIs, blood panels, etc. on people, looking for early stage cancer (and other conditions) throughout the body is not something that's available.
You can pay out of pocket for a “screening MRI” in BC but from my clinical practice the yield is dubious.
> You can't even get an annual physical with a family doctor anymore,
Evidence has shown the physical is useless.
> there's only screening for a handful of specific diseases, and only once you reach certain ages/risk factors
Screening needs pretest probability and a diagnostic test with sufficient accuracy. It simply does not exist for most cancers. Trials are underway for new tests like cfDNA but in 2024 there aren’t any validated options.
I have the utmost respect for the work that Jeremy and team and Fast.ai have accomplished but in this post Rachel is railing against the free market, bugs in software and preexisting biases in humans to create an incredibly cynical and unrelated cocktail.
There are many things broken about our world. We should work hard to fix them. But the promise of AI as a research tool to create the kinds of breakthroughs that humans aren’t capable of is undeniable.
Footnote: I think the context of her thinking is very much around the categorization and management of patients, which doesn’t necessarily relate to “AI will cure cancer”.
Rachel and I are well aware of the promise of AI as a research tool. I created the first company that focussed on deep learning in medicine. Rachel has a PhD in math and is now doing a masters in immunology, and is trying to help figure out how to bring AI to medical research.
The point of the article is that AI as a research tool is insufficient to result in improvements to patient outcomes on its own. The article includes, for instance, the example that better MRI interpretation doesn't help those people that are being refused an MRI.
Rachel and I quit our jobs and spent years, entirely for free, helping make AI more accessible to more people. We did that because we think AI is great! Pointing out that helping patients requires more than just AI is not anti-tech, it's pro-human.
We can care about both the technology and the context in which it operates.
Like the OP, I also greatly respect the work Rachel and you did for AI.
Nevertheless, I can't help but think that you are seeing this issue too negatively.
"The point of the article is that AI as a research tool is insufficient to result in improvements to patient outcomes on its own." This seems unlikely if you consider the question as-is. Past technological improvements have made healthcare better overall without necessarily requiring societal changes. Take mRNA vaccines, a technology that has improved Covid-19 outcomes tremendously. Sure, certain groups have less access than other groups, but surely even marginalized groups are better off overall because of the existence of these vaccines. Health is not a zero-sum game.
And I think this negative attitude also misses the potential of AI by default. Yeah it sucks that not everyone gets MRI access, but those that do will benefit, including marginalized groups. I guess it feels wrong to some people to express a positive sentiment at an unjust state of affairs, but improved diagnosis and treatments translate into lives saved.
You also have to compare AI to the status quo. Sure, AI will have biases, but so do humans (as Rachel points out!), and the decisive question is whether AI has fewer biases (similarly to how the safety of self-driving cars is judged). With AI you at least have a chance of analyzing the decision making and making it objective. We should be extremely excited about this possibility!
Thanks Jeremy. Makes sense. What you all have accomplished is incredible, particularly in democratizing AI via your learning materials. Fast.ai is required learning for our team.
Yeah, I was kind of baffled by this - she starts by talking about the problems with automated systems, but when she gets to the part about the medical system she starts with the example of human doctors not listening to their patient. Not to say AI is inherently better than humans in this kind of situation, but it's very strange to make the case that AI in medicine is bad because human doctors have bias.
I expected the article to be about how people misunderstand the difference between knowledge and intelligence. No matter how smart AI is, it can't just magically invent a cure for cancer - it has to gather the knowledge of how things interact and what effects drugs have.
Still, AI does have lots of potential to improve things there. My wife is a Research Associate in a biotech startup, and she could basically be replaced by humanoid robots to run experiments (probably don't even have to go that far - there are cloud labs already) and AI to run analysis. The analysis will be much faster and the robots can work 24/7, so you could really increase throughput of experimentation.
I dislike these articles that end up being written with a leftist bias that AI is already being used to [purposely] control and marginalize people. There are definite inequalities in society including in medicine and these biases I am sure make it into training data. However, while we should be watchful, I don't think there is a conscious decision by "the man" to purposely put certain groups down using AI. AI researchers themselves and doctors are a pretty diverse group.
There is only a single mention of anything even vaguely like what you're reacting to, which is the specific claim of an "intentionally faulty calculation as part of the computational RoboDebt program" in Australia. To jump from this specific (and AFAIK correct) claim to a "conscious decision by "the man" to purposely put certain groups down using AI" seems like quite a leap.
The article largely deals with unfortunate side effects of a combination of feedback loops, implicit bias, economics, and lack of technical understanding in the broader community -- quite the opposite of any "conscious decision".
Well, I think the preface leads with this statement, which does seem to indicate a fundamental bias in the thesis that’s overly political:
> Second, AI is used to disproportionately benefit the privileged while worsening inequality.
The article talks a lot about the challenges of care delivery, how there appear to be systemic breakdowns in how patients are listened to, and how demographics seem to lead to worse outcomes. These are all serious issues. It further states, essentially, that AI, at least learning-based AI, learns what it is trained with, and most training data indirectly encodes the various social biases that influence the data collection or what is collected in the data. This is true too.
However, neither of these has to do with AI curing cancer. They are more statements that AI won’t solve all social ills, which is absolutely true. But they don’t speak to whether, given a positive cancer diagnosis, AI can provide a route to curing an individual’s cancer. I suspect the answer is “maybe,” but none of the social and political points made are why. It’s because cancer is very complex and we need a vector in which AI can generate some solution to treat any specific cancer. Since there are many many types of cancer and many many variants of those types, as well as per-individual cancer genetic variability, it seems unlikely “AI will cure cancer,” but I think it’s very likely AI will make cancer treatment much more personalized, discover many new therapeutic agents, and accelerate human-driven research. It is already used in generic immunotherapy, mRNA design, and other treatments. As tools and techniques become better, as well as our understanding of how to apply them, AI will help a lot.
Just a few snippets that use language typically used by those on the further left with a bit of an agenda. I am kind of center-left myself, so I agree with some of the sentiment, but I tend to think that a lot of the issues they ascribe to purposeful, scheming evil actors are often just defects in how our society works on a macro scale.
- "AI is used to disproportionately benefit the privileged while worsening inequality"
- "the goal is to increase corporate and government revenues by denying poor people resources"
- “It is a pattern throughout history that surveillance is used against those considered ‘less than’, against the poor man, the person of color, the immigrant, the heretic. It is used to try to stop marginalized people from achieving power.”
- The same pattern is found in the role of technology in decision systems.
- "The goal of many automated decision systems is to increase revenues for governments and private companies"
- "here is already a clear pattern in which AI is used to centralize power and harm the marginalized."
I don't think there's anything wrong with pointing these issues out, even if you think AI is merely an accelerant of existing social issues. And it's not necessarily a question of how left you are to believe so. Ultimately what's important is the outcome, and if the outcome is an increase in marginalization, then I think something should be done. Stopping AI is probably suboptimal but the alternative that preserves the rights of the marginalized is getting everyone to agree that society should be more fair, and in that context can you really blame people for espousing a belief they find more actionable or practical?
You’re just not clued into the lingo. This person is an ideologue and is trying to hide it somewhat but is failing to completely omit the use of language that reveals the leftist ideology they have been swimming in.
Few people on HN can read this kind of critique without short circuiting but it’s a valid critique and I would bet a good sum that if you looked into this author you’d find lots more direct evidence of promulgation of leftist ideology (critical theory)
I don’t know what you could be saying other than that caring about poor people or welfare recipients is a leftist meme. That was pretty much all of the “leftism” in this article.
The fact that the poster focuses on the term ("leftism") rather than on the substance is itself a red flag imo. It puts the discussion on a weird tripolar spectrum: you're either leftist, centrist, or rightwing (of which the poster is likely to posit themselves as a center, to avoid being called biased). I don't think it's terribly productive, and unless the article was calling for Bolshevik Revolution, I don't mind a certain bias. The fallacy here is that it's possible to come in with objective lens--it isn't.
No, there is no fallacy here. You’re just ignorant. There are several signifying phrases and terms used here that are commonly used by people who have absorbed critical theory ideology.
So by memes you’re referring to words and grammatical constructions rather than ideas? My comment, which I don’t think you ever responded to (nothing wrong with that, just saying), was calling out the fact that you didn’t directly refute the ideas present in the article and are instead fixated on their presentation. If you attacked the ideas directly, I feel you’d only find a few ways to attack them, and that the primary idea you’d be attacking would be sympathy for the poor and such. I think it’s a cop-out to avoid directly criticizing popular (on this site) ideas by attacking their presentation.
Nothing wrong with being ignorant. Unless you’re claiming you’re not ignorant and just accept the memes as non-biased. In which case, yeah, your brain would be pretty ravaged (though I’ve never liked Musk’s characterization personally).
Yes, you probably also have been led to think that these memes are just "being normal" but these are deliberate changes to the language which originated in academic programs of critical theory. See for example: https://cssp.org/2020/03/recognizing-race-in-language-why-we...
What you're saying is vague to the point of being malicious. You're poisoning discourse by labeling ideas as undesirable, but so broadly that you end up trying to smear the author instead.
Like most tools, it will be used by both oppressors and the oppressed around the world. The concern I have is that the prohibitive cost of building foundational models means that it will disproportionately benefit those with extreme resources at the expense of those without.
Also, for the French example, it actually has nothing to do with AI.
The problem was that the calculation for housing (not food) welfare benefits changed and the migration to the new software that came with it went poorly.
There's a lot of media swirling around right now remarking upon the fact that the AI movement, especially at its most evangelical end, strongly resembles what would otherwise be called a cult.
I don't think it's a coincidence we're seeing the largest numbers ever of people leaving organized religion at the same time we're seeing so many communities like the one that's grown around AI spring up that are essentially religion without the historical baggage.
"AI will cure X" is equivalent to "Human intelligence will cure X", except that we expect AI to get there first because it's denser and can be replicated orders of magnitude more quickly and cheaply. You can levy a couple of counterpoints- that humans will never cure cancer either or that this kind of AI is impossible- but those are magical pessimism moreso than the alternative is magical optimism.
Some things AI will probably just do better, even if there aren't any paradigm-shifting breakthroughs with cures and medicines.
For example, reading an MRI or other medical scan correctly goes a long way toward curing cancer. Reading it incorrectly wastes precious time as problems are ignored or mistreated with the wrong methods. I knew someone whose bone cancer was mistreated as a rotator cuff injury for a little over a year due to the fact that an inexperienced and probably overworked doctor did not correctly identify it in the slew of tests and scans the patient had taken.
In the future, it is likely that AI will always read these scans and test results more accurately than a human physician, leading to higher remission success rates. This will happen fairly quietly and behind the scenes, even if AI doesn't invent the magic cancer pill.
> we expect AI to get there first because it's denser and can be replicated orders of magnitude more quickly and cheaply
There is no evidence that AI that's useful enough to help make research breakthroughs can be replicated orders of magnitude more quickly and cheaply. Whilst that's a true fact about software, AI is not just software -- it requires a lot of hardware. And currently it's far less efficient at using that hardware for general purpose problem solving than humans are.
To the extent that machine learning models are a black box, it is ‘magic’.
Of course, at the lowest levels it’s entirely understood but it’s the emergent properties that give many the feeling of magic — and I would argue quite reasonably so.
…for people who use ‘magic’ to refer to things that accomplish complicated tasks whilst abstracting it all away so that it looks easy (much like how I can copy and paste between my iDevices on a LAN and it magically works), anyway.
> Of course magic can cure cancer because… it’s magic!
If you take magic to mean something supernatural, then yes. It’s essentially of the same form as ‘god is perfect; a god that exists is greater than a god that doesn’t; therefore god exists’-type arguments.
The real test is to replace AI with "humanity, given 1000 years". If the statement is reasonable after replacement then it is possible. If it still sounds unlikely then it merits deeper inspection. Will humanity cure cancer given 1000 years of study and medical development? It's quite possible, and AI will probably help. So it's not unreasonable to claim that AI can cure cancer (eventually).
The article completely misses the essential mechanism behind the advancement of science, like cures for disease. Advancing science is not about making better use of the info you have. It's about gathering NEW info you didn't have before and then using it to propose a novel mechanism of action or a hypothesis leading to a better theory. But AI cannot gather new info. Only better sensors can do that. Without gathering better info, even an infinite increase in smarts can't move the needle in science.
If the info you need lies in genomics but all you have is images, no amount of cognition can bridge that gap.
AI has seen some success in detecting disease (diagnosis) but none at all in creating new treatments or cures. Future use of AI likely can help guide or optimize an existing therapy (e.g. chemotherapy) by detecting or discriminating feedback faster than a human can. But invention or discovery? No. AI as we know it today has shown no ability to advance the knowledge frontier beyond the facts fed in by its teacher, nor any signs it ever will.
First, this article is about medicine, not science, so I'm not sure that the critique is well targeted.
Your critique might apply to an article written about the use of AI/ML in science, in which case it would be uninformed - algorithms like DeepVariant, DeepConsensus, and AlphaFold are fundamental AI-enabled tools for gathering and interpreting new information from existing sensors that changed the state of the art and are advancing science and enabling cures today. AI-enabled tools are also improving information management and literature search for scientists, because advancing science usually is about making better use of the info you have - a lot of scientific breakthroughs today are made by analyzing data that has already been gathered (like Genbank or UK Biobank or All of Us data).
One related publication that I thought would have fit well into the comment about disparate pain treatment is this[1] from Ziad Obermeyer's group. They found that when they trained a model using patient pain and knee X-rays, much of the disparity in symptoms could be accounted for by findings from the X-rays themselves. It's a nice example of where using the patients' symptoms and objective data may actually outperform current medical standards, which fits with her participatory comments in the final paragraph.
> It's a nice example of where using the patients' symptoms and objective data may actually outperform current medical standards
I think you’re jumping the gun here, this paper was a hot topic when it was published. Patient symptoms combined with objective data is already the medical standard.
Note that:
1. KLG is not a measure of pain but of OA radiographic severity.
2. KLG 3-4 is not a prerequisite for surgery.
From the article:
> While radiographic severity is not part of the formal guideline in allocations for arthroplasty (which only requires evidence of radiographic damage), empirically, patients with higher KLGs are more likely to receive surgery.
TKA patients skew to higher grade for many reasons, one being that studies have shown KLG 2 patients who undergo TKA are more likely to experience dissatisfaction (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8344222/).
There are a lot of “ifs” in this paper which did not examine whether KLG 1-2 but ALG-P 3-4 patients benefit from TKA over conservative mgmt or other surgical interventions. It’s also unclear whether this better selects patients for TKA than KLG 1-2 + pain scores and other clinical variables.
All this shows is that KLG is a poor correlate for pain, which is known and not what the score is designed/used for.
I'm not a radiologist so I could well be overinterpreting. However, if so, I am not sure that I am alone. This study published in Nature Medicine was hailed by radiologists as one of the "notable successes in using explainability methods to aid in the discovery of knowledge" [1].
Your sober assessment seems valuable, and would make for an interesting letter to the editor.
Not sure what a letter to the editor would accomplish. The nature paper only interpreted radiographs and the only claim of the authors was basically that the model is a better predictor of pain than KLG.
Your comment misinterpreted this as “using the patients' symptoms and objective data” (when they only used objective data) and added “may actually outperform current medical standards” which was not the claim as current medical standards already consider patient symptoms in addition to objective data, as stated in the article reference to the TKA guideline.
When I report a joint xray I’m not assessing the patient’s pain level, they can be asked that.
> Your comment misinterpreted this as “using the patients' symptoms and objective data” (when they only used objective data)
This represents an important misunderstanding of the methods of the paper. The model was trained using images (objective data) and the pain score (patients' symptoms). From the methods: "A convolutional neural network was trained to predict KOOS pain score for each knee using each X-ray image."
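For anyone following along, here is a minimal sketch of what that training setup looks like (the architecture, loss, and hyperparameters are placeholders I've picked for illustration, not the paper's actual configuration): the pain score is used as the training target, but at inference time the model sees only the X-ray.

```python
# Minimal sketch of the training loop described above (PyTorch).
# Architecture and hyperparameters are placeholders, not the paper's actual setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)  # regress a single pain score
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(xray_batch: torch.Tensor, koos_pain_batch: torch.Tensor) -> float:
    """xray_batch: knee images; koos_pain_batch: patient-reported pain (training target)."""
    optimizer.zero_grad()
    predicted_pain = model(xray_batch).squeeze(1)
    loss = loss_fn(predicted_pain, koos_pain_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference, only the image is needed:
# predicted_pain = model(xray_batch)
```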
Also with respect to the author's claims, from the paper's abstract:
> Because algorithmic severity measures better capture underserved patients’ pain, and severity measures influence treatment decisions, algorithmic predictions could potentially redress disparities in access to treatments like arthroplasty.
You think I'm misinterpreting, but I still think that the paper is more important than you're giving credit.
Inference on the validation set is xray -> pain score. It does not incorporate patient symptoms to make the prediction. In real life a surgeon incorporates the xray + patient symptoms/pain score.
Skipped a step: the model needs to be trained, which requires the patient symptoms as the target for weight updates. I think that you simply misread my original comment.
Perhaps I got lost but I am discussing your original statement of “using the patients' symptoms and objective data may actually outperform current medical standards” which relates to the model predictions/inference not training.
In this context we are talking about a pain predictor from an xray which is neat but not the point of KL grading.
The comparator, current medical standards you reference, would be a model outperforming surgeon assessment in conjunction with radiographic findings. Not the predictive value of KL grade.
In my not so humble opinion, AI will help medicine and will accelerate the design of new cancer therapies. Curing all possible forms of cancer almost seems like a red herring, given how fragile the human cells are, and how these cells generally become more fragile with respect to becoming cancerous as we age. However, better drug therapies will eventually make cancer a more manageable condition for a large fraction of the population and hopefully for everyone in the long term. I am sure the realities of the current health care system will be dramatically different than those of healthcare in 100 years from now. I agree with Rachel that patient led medical research is a very important area that is greatly aided by AI and will have an impact in medicine.
The article, while talking about some very real problems, seems to be completely unrelated to the question of whether or not AI is likely to have that capability in the near/mid future (which is what the title implies).
No, the article discusses how “AI” and other machine learning and algorithmic systems are used in practice - to surveil, dispossess, and harm those in whomever the powerful deem as out groups, and how this is done under the guise of idealist appeals about mass benefit, akin to “curing cancer.”
Exactly. There's lots of things AI/ML is being used for in medicine besides image analysis for diagnostic purposes. It's helping laboratory researchers analyze their data and pick which hypotheses to investigate further, for example.
If I understand the article, it’s that if you keep everything the same and simply substitute AI for human doctors, then you won’t have better outcomes. Basically, you sometimes need multiple diagnostics and actually need to get scans and medical imagery to detect issues. These are all directly improved by AI. AI should make diagnosis cheaper and more effective, with less bias. MRIs should cost 1/10th as much as we increase the throughput of machines with AI (as several people are working on) and reduce the need for expensive human labor.
It’s the lowering of cost that feels like the revolution in healthcare. AI should enable nearly free ways of mining noisy signals in the body that could catch issues. Smart toilets, mirrors, scans, etc. all help.
> MRIs should cost 1/10th as much as we increase the throughput of machines with AI (as several people are working on) and reduce the need for expensive human labor.
I work in the emergency department of a busy hospital. MRIs are pretty labor intensive to perform and take 15m to an hour. We are not going to get to a point where we are regularly scanning people with MRIs without clear symptoms of CVA etc. They require techs to run, you aren't going to remove much "expensive human labor" other than making initial radiology reads faster. The article makes the distinction that someone needs to decide to order those time-consuming, expensive scans, and that is where the point of failure is right now, in that we sometimes write patients off as delusional. AI can't help in situations where we have no data.
I keep coming back to the fact that if you asked even GPT 3 how to solve climate change, it would give you a perfectly good answer. We have this idea that AI will “solve climate change” at some point in the future. What we really mean is “give us a different answer to climate change that has zero cost”.
Climate change is not currently being solved because of politics and existing systems, not because of a lack of intelligence.
There are so many similar situations in medicine. Intelligence is not enough- you also need a system capable of acting with that intelligence.
AI could be transformational, but not without systemic change to support it.
Climate change is easy to solve: build fusion reactors which are 50x cheaper per kW to deploy than solar. Deuterium is cheap, after all.
> Climate change is not currently being solved because of politics and existing systems, not because of a lack of intelligence.
It's a technical problem that's sociopolitical because we don't have a Pareto-improving technology to solve it with. Like 50x fusion reactors. Not a complete solution, but with it, the political will to shut down remaining emissions is easy to muster.
Some people think we'll have AIs soon for whom "design me a fusion reactor which is 50x cheaper per kW to deploy than solar" is the sort of input which gets the requested output. I am skeptical of this. But it isn't an incoherent thing to believe.
Where cancer is concerned the situation is much less clear.
> Some people think we'll have AIs soon for whom "design me a fusion reactor which is 50x cheaper per kW to deploy than solar" is the sort of input which gets the requested output. I am skeptical of this. But it isn't an incoherent thing to believe.
IMO, pretty much magic wishful thinking.
I'm enthusiastic about AI. But it's not magic. The main problem here is, as Feynman said, "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled".
In this case the problem is that a fusion reactor is a real, physical machine intensely dependent on the real world in a million ways. From the actual physics of fusion, to manufacturing capabilities, to the capabilities of the various sensors, actuators, processors, etc needed to control the reaction.
You can't magic that up with AI. There's no way for it to figure out that we're subtly wrong about some fact about fusion and that we can get 100X better by just doing things differently. A hypothetical GPT20 would still need to actually perform real physical experiments to gain such knowledge, because it'd be nowhere in our books or the internet for it to ingest it.
> Climate change is easy to solve: build fusion reactors which are 50x cheaper per kW to deploy than solar.
That just kicks the can down the road. If our general strategy (aim for exponential growth in all things) remains the same, cheaper power will mean we use more of it. Even if we completely phase out CO₂-releasing energy sources, the waste heat of our industrial processes would eventually dominate the power received from the sun. Even if we solve that… deuterium, like oil, is non-renewable. Doesn't matter how cheap it is: there are only two dozen trillion tonnes of it in the ocean, and once we've run out, we've run out.
Projected oil demand is currently measured in trillions of tonnes per century. Deuterium is only seven or eight orders of magnitude more energy-dense than oil. If our power use continues to grow exponentially, we won't get close to the five billion year mark before running out: ten thousand years would be pushing it.
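Rough numbers behind that claim, with the ocean deuterium stock taken from the comment above and the growth rate assumed; the energy figures are ballpark values, so treat this as a sketch rather than a projection:

```python
# Back-of-the-envelope: exponential growth exhausts even a huge fuel stock fast.
# Ocean deuterium stock is from the comment; energy figures are rough; growth rate is assumed.
import math

deuterium_tonnes   = 2.4e13   # ~two dozen trillion tonnes in the oceans
energy_per_tonne_J = 3.45e17  # ~345 TJ/kg for fully burned D-D fusion, rough
total_energy_J     = deuterium_tonnes * energy_per_tonne_J

world_use_J_per_yr = 6e20     # ~current global primary energy use, rough
growth_rate        = 0.02     # assumed 2% per year exponential growth

# Cumulative use over t years at exponential growth: use * (e^(r*t) - 1) / r.
# Solve for the t at which that equals the total stock:
years = math.log(total_energy_J * growth_rate / world_use_J_per_yr + 1) / growth_rate
print(f"years until exhaustion at {growth_rate:.0%} growth: {years:,.0f}")
```

With these (rough) inputs, a steady 2% annual growth in energy use burns through the whole ocean's deuterium in roughly a thousand years - consistent with the point that exponential growth makes even "effectively unlimited" fuels finite.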
My dad’s an orthopaedic trauma surgeon. A friend was curious if his knee injury outcomes could be predicted from the radiologist’s report. So I supplied that plus X-rays and description of condition and situation that caused it to ChatGPT-4 (the early days version) to see if it would describe everything correctly and say what actions should be taken and then sent that to my dad for validation. He said it was spot on.
He then asked what it would say for the prognosis. Also again spot on.
So there’s things it can do well, for certain. I’m of the opinion that if we manage to scale this machine it will create novel science.
In any case, the two complaints expressed in the article ( inappropriate comprehension of patient problems and inequality) are both actually better with LLMs. The patience and understanding of an LLM cannot be beaten. Once we fix the context window, and once again unshackle this machine from ridiculous chains that safetyists have put it in, we will improve patient care.
Maybe they won't cure cancer, but a host of problems will be taken care of.
Cancer is here to stay as long as organic DNA is the source code of molecular biology. The combination of data gathering and AI will help accelerate every step of drug development to make cancer more manageable, but we still need to think in terms of 50+ years for serious progress. https://key.bio
The 'what AI is trying to solve is not the problem' viewpoint of this article is also in the same vein, in that current AI discourse does not account for our society and systems.
Unfortunately as a non-specialist techno-optimist, I can't help but view it as a problem to be solved.
I don't see any reason to believe that AGI or even existing LLMs will be worse in these respects than the doctors or dumb algorithms mentioned in the article. If anything, I'd expect AI models to have the potential to help mitigate the kinds of issues described.
AlphaFold is not mentioned in the article, and I didn't notice anything especially related to it either. I think AlphaFold is pretty insightful as a mental model for thinking about how AI might help with curing cancers.
Actually, many people think of removing humans completely when they think of AI medicine. But I believe in AI that can provide helpful suggestions and narrow down the causes.
Here's an example of youtuber [0] who got helped by Ada[1] which suggested she might have endo given her description of symptoms which countless doctors failed to narrow down.
Better statistical models will help us identify diseases, and maybe molecules we can use to fight those diseases.
But "AI" is not magic sauce you can slather on to a problem to get a solution. It's going to take statistics-savvy medical professionals, and medicine-savvy data professionals, working together. We've complicated things to bring solutions within reach that might've not been within reach before, so we're going to have to be smarter about how we apply these new tools.
Saying "AI will cure cancer" is like saying "science will cure cancer" or "doctors will cure cancer". In general, all three statements are true about technological progress.
> Second, AI is used to disproportionately benefit the privileged while worsening inequality.
That's bullshit. Computers and other gadgets always start out as toys of the rich, but they quickly trickle down to everyone else. Compare the multi-thousand dollar cell phone of the 80s use by the rich to the multi-dollar cell phones of today used by billions. I don't mind if billionaires spend their money on AI now so that the R&D results in cheap AI in a decade that can be used by everyone.
The author also goes and lists people harmed by computer problems without providing a benchmark of the number of people similarly harmed by human problems.
Open source models seem to be lagging by about a year. But for the end user with a powerful enough computer, you can do GPT 3.5 level work for no additional cost right now.
Technology is an amoral form of power that accrues to those who have access to it, and above all to those with the most access to it. In unequal societies, this accrual is to those who already have wealth, political power, etc. If the opposite were true, as you claim, the most technologically advanced countries would have seen decreasing inequality in the period you cite. In reality, they’ve only seen the opposite.
This article seems to avoid talking about the core improvement AI will make to radiology - more accurate analysis. It mentions that patients may not get an MRI in the first place... but at least for those who do, surely there will be an improvement. The referral/bias issue will exist no matter what technology is used. Garbage article.
many “AI will never be able to do X” are extrapolations from past versions of AI diagnosis, and don’t entertain a bit of speculative extrapolation over what AI could truly be. it will be so much more than simply analyzing scans.
I think these general statements are fine. I believe that AI will cure long COVID and ME/CFS soon.
I have already seen AI predict these illnesses with outstanding accuracy with no known biomarkers identified, drugs and molecules being found to neutralize it, and even new parts of the virus being identified to target.
I feel like this is the most exciting time to have one of these illnesses, cancer included. We may indeed find a cure for everything. Or at least everything we amply fund with billions of dollars.
When people say AI will cure cancer, they mean AI will find some novel idea that no one thought about. The article is more about how AI will fail at medical practice.
Or merely that it'll be used to accelerate what we can already do to the point that it's revolutionary. Other computer advances have done the same, like the ability to finally create Folding@Home at a level that it'd matter.
It's easy to say "X will do Y" when X is a force multiplier for things we already do.
The fact that it could also come up with new ideas is a bonus on top of that, making it even easier to say.
I can’t understand why people write these articles about something fast evolving. Five years ago people would have thought “AI will make art” laughable, and now some very successful movie producers are shuttering projects because they think they’ll be using it soon.
Today’s AI won’t cure cancer, but we have no idea what the AI of 20 or 200 years from now will be able to do.
This article had nothing to do with AI (and that was the point). It’s largely a regurgitation of the same polemics you’ve read about “capitalism” a thousand times in the last decade with AI in the title just to get you to click.
"AI will..." is either the som of human hopes and dreams, or the sum of human fears. I wonder if industrialization was anything like this (though I don't think ML will end up being comparable to the industrial revolution).
Saying you'll cure cancer with AI without specifying the type of cancer is like declaring you'll solve crime with Batman. Way too broad to be useful, but makes a good headline.
When we say "She has cancer" we mean "a cancer". (Hopefully she doesn't have all of them!). Why couldn't the sentence "AI will cure cancer" mean "a cancer" as well?
"She has cancer" is a specific event, so we logically deduce that it is "a type of cancer".
When you say "AI will cure cancer" that is not a singular event, so you assume cancer in the plural. You would have to say "AI will cure some forms of cancer" if you didn't mean the plural here.
Asking Stable Diffusion for a picture of the chemical structure of a drug to cure even one specific unsolved cancer… I'd be surprised if that ever works (but given how crazy the rate of change has been, only 2σ of surprise).
I kind of feel like that's exactly why AI is helpful here:
- Grab a cancer (or virus, bacteria, etc.)
- Sequence it
- AI will develop a custom therapy for that cancer
In broad strokes, it's not hard to develop a therapy for any specific cancer or other disease in a specific individual. There are several broad strategies:
- A targeted, custom phage to kill a bacteria (or extrapolate to killing a type of cells)
- A custom vaccine to make your body make antibodies specific to a disease
- And so on....
This is a ≈2 year research effort to do in each case, and perhaps a ≈10 year validation effort, not to mention regulatory. By that point, the patient is dead, or AIDS has mutated a few dozen times, and regardless, you need a massive research team to do so. And to do so, you've spent many million dollars on a research team that whole time.
"AI will cure [X]" consists of AI doing the same thing instantly. I go to a doctor. My chronic disease is sequenced. My specific immune system is encouraged to attack that specific disease. I'm cured.
(And yes, we each have a very different immune system; see MHC for an example of how and why)
How? You’re hiding a ton of complicated work in these 2 words
> AI will develop a custom therapy
This statement suggests you really don’t know what you’re talking about with regards to AI.
AI doesn’t develop treatments magically. Work needs to be done to curate a dataset of treatments and diseases, BUT even then AI can’t create new treatments for existing untreatable cancer as we don’t have any data to go off of.
At that point, a team of doctors might as well analyze the data themselves (probably using a more specific kind of ML technique)
You’re too cavalier in hand waving away the real work by saying things like “AI will do this. Ez. 2 years”
I'd only call something a hallucination if the AI claims the existence of data that doesn't actually exist.
Simply making an informed guess and extrapolating to data outside the training set (whether that informed guess is correct or incorrect) is not hallucination.
> How? You’re hiding a ton of complicated work in these 2 words
DNA sequencing has been following a Moore's Law style curve. It is cheap and easy now.
> > AI will develop a custom therapy
>
> This statement suggests you really don’t know what you’re talking about with regards to AI.
>
> AI doesn’t develop treatments magically. Work needs to be don’t to curate a dataset of treatments and diseases, BUT even then AI can’t create new treatments for existing untreatable cancer as we don’t have any data to go off of.
No one is suggesting it can. AI is very good at pattern-matching. There is a cookbook of techniques here:
1) Create a phage which is very good at injecting into a specific type of cell
2) Create antibodies which can latch onto a specific type of cell, virus, or cancer, so the immune system can attack them
3) Create a vaccine, which is much the same as the above
None of these are hard in and of themselves. What is hard is that there isn't a single virus called "AIDS" or "flu" or "cold," but a very, very large family of viruses. Ditto for cancer and bacteria. This is the exact type of pattern-matching problem ML excels at. Curing a specific virus isn't hard; what's hard is all the variations. That kind of adaptation is exactly what ML excels at.
Once covid was sequenced, the actual creation of a vaccine took -- literally -- a couple of days (of work by the world's best scientists). What took much longer was validation, regulatory approval, getting manufacturing up, etc.
> You’re too cavalier in hand waving away the real work by saying things like “AI will do this. Ez. 2 years”
You're attacking a strawman here. Step zero of this process will be:
- Collect a dataset of bacteriophage DNA and of bacteria they're good at attacking (this is a massive undertaking)
- Something very similar with DNA and antigens (much of this exists / has been done, but was a huge undertaking; see "protein folding")
This is a few years in itself. That's when we can start to begin training an AI. There are many other similar-sized steps along the way. "AI will cure cancer" doesn't mean "AI will cure cancer tomorrow." However, I can see all the steps along the way, and no fundamental hurdles.
It's like the Apollo Program or the Manhattan Project on day 1. Yes, it's a major undertaking, but there's every reason to believe it will work. That's exciting.
So far, aside from calling me an idiot, no one in this thread suggested where the flaw in the above lies (and none of the comments suggested the poster had any understanding to do so). I responded to your comment since it was closest.
Cancer is a mutation. Much of the most promising recent work I've read on therapies focuses on:
1) Understanding the specific mutations
2) Helping the immune system find way to identify, and therefore attack, those specific cancer cells
More of the work focuses on t-cells, but otherwise, it's not too dissimilar from the work on infections.
I should know better than to discuss medicine on a SWE forum. Every post here starts with an insult. Not a single post contains any technical detail, nor even clues that people even understand the words I'm using (t-cell, MHC, etc.). It's like arguing with a cross between a five-year-old and a teenager who knows better.
You may be done here, but you are still misunderstanding the medicine. You are making exactly the same errors as mentioned in the article - generalisation.
Cancer is not "a" mutation, and that is the whole problem.
You are talking about personalised neoantigen-specific t-cells as a generic cure for cancer, while ignoring the fact that not all generations of a cancer express neoantigens, or even the same neoantigens.
Gosh. You're right. To solve that, we'd need something which could somehow predict the overall set of neoantigens present from the sequences of a few branches of a cancer, and not only that, to be able to rapidly adapt therapies to the antigens present. It sounds like we'd need some kind of algorithm which can do very complex pattern-matching and generalizations.
I can't imagine us ever coming up with something to help us with the fact I'm ignoring. It totally doesn't sound like the sort of thing deep learning would be at all good at.
> "I can't imagine us ever coming up with something to help us with the fact I'm ignoring"
That fact you are ignoring is the whole point about why engineered t-cells can't be a cure. No neoantigens means nothing to target. And your solution is to just wave your hands and say "deep learning will figure it out"...
I am puzzled why people think there is true intelligence in AI. IMHO it is advanced interpolation or morphing, similar to the photo morphing tools of the 90s. If the data is not in the model you can't get it out; you can only get interpolations of data that is in the model. The more sophisticated the interpolation, the more people think there is intelligence. Maybe I am missing something, but this is what I perceive.
> if the data is not in the model you can’t get it out
That's trivially also true for humans, you can't get out of your brain something that isn't in it.
If you meant "external training set", then it's false for both AI and humans, as demonstrated by e.g. AlphaZero getting superhuman performance despite zero examples of human games of Go or Chess.
IMHO AI will be able to generate solutions close to the best solution for a problem, with the remaining distance to be covered by humans.
the question is whether the cost of using AI to generate a solution, and then fixing it manually, will be lower, or higher, than building the solution using humans.
as we have seen before with wizards and other tools to generate solutions, the cost of fixing the solution is often times higher than it is to just build it from scratch manually.
also, as humans use AI more and more, there will be less and less material to use for training that is high quality, and AI will train on itself, leading to a rapid decline in quality of the output.
Yeah. In this current hype cycle phase the general population thinks that ai is chatgpt and that chatgpt can do anything. Both statements are completely wrong but people will believe it