
I've always believed that the AI/LLM/ML hysteria is misapplied to software engineering... it just happens to be the field adjacent to the technology, not one that can apply it especially well.

Medicine and law, OTOH, suffer heavily from a fractal volume of data and a dearth of experts who can deal with the tedium of applying an expert eye to that much data. Imagine we start capturing ultrasounds and chest X-rays en masse, or giving legal advice to those who need help. LLMs/ML are more likely to get that right than they are to write computer code.



Somehow, LLMs always seem to be "more likely to get this right" in fields other than one's own (software, I suppose, this being HN). The "Andy Grove Fallacy", a term coined by Derek Lowe (whose articles are frequently posted here, and who references the term in a recent piece[1]), comes to mind...

[1] https://www.science.org/content/blog-post/end-disease


I figured the fallacy you were talking about was the one Michael Crichton describes about reading a newspaper article on a topic he knows about vs one he doesn't, but it turns out that's called the "Gell-Mann Amnesia effect." [1]

> You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. [...]

> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

1. https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect


(My spouse was an ultrasound tech for many years.)

The problem with an example like ultrasound is that it's not a passive modality - you don't just take a sweep and then analyze it later. The tech is taking shots and adjusting as they go to see things better in real time. There's all sorts of stuff potentially in the way, often bowel and bone, and you have to work around all of that to see what you need to see.

A lot of the job is actively analyzing what you're seeing while you're scanning and then going after better shots of the things you notice, and the experience and expertise needed to get the shots are the same skills required to analyze the images and know which shots to get. It's not just a matter of waving a wand around and then having the rad look at it later.


Techs take the scans, but you need a doctor to interpret them. That's where AI can come in.


This is one of the many places where computer people simplify other professions.

Legally yes, the rad is the one interpreting it, but it's a very active process for the technologist. The ultrasound tech is actively interpreting the scans as they do them, and then using the wand to chase down what they notice to get better shots of things. If they don't see something, the rad won't either, so you need that expertise there to identify things that don't look right. It's very real-time, and you can't do it post hoc.


Did anyone suggest robots do ultrasounds? Who is simplifying it? Having literally just done one: the tech came in, basically said nothing, took a bunch of pictures, and then the doc came in and interpreted the results.


People are suggesting that AI interprets the images, which is a fundamental misunderstanding of the process, because the tech is making choices while taking the pictures of what the images should be of. You can't wait until the pictures are taken and given to the rad before the interpretation can begin, it has to be happening during the whole process. The question then is what is the place of the AI in that process? What is it automating?


The rad literally gets static images. They don't get a whole debrief from the tech. So they're in the same position an AI would be.


At my spouse's primary clinic the techs did debrief the rads in person, and at other sites they include notes with the images, so the rads do get supplementary information from the techs. I suppose AI could make the rads superfluous, because between the tech and the AI there wouldn't be much left for them to do, but the majority of the effort would remain, and I still wouldn't trust it, given AI's predilection for just making things up when there's missing information.


When AI writes nonsensical code, it's a problem, but not a huge one. But when ChatGPT hallucinates while giving you legal/medical advice, there are tangible, severe consequences.

Unless there's going to be a huge reduction in hallucinations, I absolutely don't see LLMs replacing doctors or lawyers.


100% agree that ‘chat bots’ will not be a revolutionary technology, but other uses of the underlying technology will be. General robotics, pharmaceuticals, new matter… and eventually first-line medicine and law, sure, but I sure don’t want doctors to vibe-diagnose me, or lawmakers to vibe-legislate.


[Insert "let me laugh even harder" meme here]

That would be actual malpractice in either case.

LLMs have a history of fabricating laws and precedents when acting as a lawyer. Any advice from the LLM would likely be worse than just assuming something sensible, as that is more likely to reflect what the law is than what the LLM hallucinates it to be. Medicine is in many ways similar.

As for your suggestion to capture and analyze ultrasounds and X-rays en masse, that would be malpractice even if it were performed by an actual doctor instead of an AI. We don't know the base rate of many benign conditions, except that it is always higher than we expect. The additional images are highly likely to show conditions that could be either benign or dangerous, and additional procedures (such as biopsies) would be needed to determine which they are. This would create additional anxiety in patients from the possible diagnosis, and further pain and possible complications from the additional procedures.
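(Tangent: to put rough numbers on the incidental-findings problem, here's a back-of-the-envelope sketch. Every rate below is a made-up placeholder, since as noted the real base rates aren't known; the shape of the arithmetic is the point, not the figures.)

    # Hypothetical screening arithmetic -- all numbers are placeholders, not clinical data.
    screened = 100_000          # people imaged en masse
    prevalence = 0.005          # assumed rate of genuinely dangerous findings
    incidental_rate = 0.15      # assumed rate of ambiguous masses needing follow-up

    dangerous = screened * prevalence        # truly dangerous cases found
    incidental = screened * incidental_rate  # ambiguous findings triggering work-ups

    print(f"Dangerous cases: {dangerous:,.0f}")
    print(f"Ambiguous findings needing follow-up: {incidental:,.0f}")
    print(f"Follow-ups per dangerous case: {incidental / dangerous:.0f}")

Even with assumptions that flatter the screening, the ambiguous findings that demand biopsies and follow-up swamp the genuinely dangerous cases by an order of magnitude or more.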

While you could argue for taking these images and not acting on them, you would either tell the patients the results and leave them worried about what the discovered masses are (so they will likely have the procedures anyway) or not tell them (which has ethical implications). Good luck getting that past the institutional review board.


I don’t know what “Fractal volume of data” means exactly, but I think you’re underestimating how much more complicated biology is than software.


Well, that's not how it's applied in the article at all.



