
Depends on what you mean by 'see'.

For example, let's say I'm looking at a chest x-ray. There is a pneumonia at the left lung base and I am clever enough to notice it. 'Aha', I think, congratulating myself on making the diagnosis and figuring out why the patient is short of breath.

But, in this example, I stop looking closely at the X-ray after noticing the pneumonia, so I miss a pneumothorax at the right lung apex.

I have made a mistake radiologists call 'satisfaction of search'.

My 'search' for the patient's problem was 'satisfied' by finding the pneumonia, and because I am human and therefore fundamentally flawed, I stopped looking for a second clinically relevant diagnosis.

An AI module that detects a pneumothorax is not prone to this type of error. So it sees something I did not. But it doesn't see something that I can't see. I just didn't look.



This is definitely a thing.

https://www.npr.org/sections/health-shots/2013/02/11/1714096...

I'm skeptical of the claim that AI isn't prone to this sort of error, though. AI loves the easy answer.


'AI' is an overloaded term here. An LLM loves the easy answer, but that's not what underlies an image classification model.
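A minimal sketch of what I mean, assuming a tiny multi-label CNN with made-up labels (nothing here is from a real product): each finding gets its own sigmoid output, so the model scores every finding on every image. There is no "stop after the first hit" built into the inference.

    # Hypothetical multi-label chest x-ray classifier (illustrative only).
    import torch
    import torch.nn as nn

    FINDINGS = ["pneumonia", "pneumothorax", "effusion"]  # made-up label set

    class TinyCXRClassifier(nn.Module):
        def __init__(self, n_labels=len(FINDINGS)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(8, n_labels)  # one logit per finding

        def forward(self, x):
            # Independent sigmoid per finding: scoring "pneumonia" high
            # does not suppress or skip the "pneumothorax" score.
            return torch.sigmoid(self.head(self.features(x)))

    model = TinyCXRClassifier().eval()
    xray = torch.randn(1, 1, 224, 224)  # stand-in for a preprocessed chest x-ray
    with torch.no_grad():
        probs = model(xray)[0]
    for name, p in zip(FINDINGS, probs):
        print(f"{name}: {p.item():.2f}")  # every finding is reported, not just the first one found

Whether a given commercial module is actually built this way is a separate question, but this is the basic reason a classifier doesn't suffer satisfaction of search the way a human reader does.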


> I have made a mistake radiologists call 'satisfaction of search'.

Ah, now I have a name for it.

When I've chased a bug and fixed a problem that would cause the observed behavior, but haven't yet proven the behavior is actually corrected, I'm always careful to specify that "I fixed a problem, but I don't know if I fixed the problem". Seems similar: I found and fixed a bug that could explain the issue, but that doesn't mean there isn't another one that would, independently, cause the same observed problem.
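A toy sketch of that situation (entirely made-up example, not from any real codebase): two independent bugs that each explain the same symptom, so fixing the one you found doesn't prove the behavior is fixed.

    # Observed problem: total() returns the wrong sum.

    def parse_amount(s):
        # Bug 1 -- the one I found and fixed: this used to be int(float(s)),
        # which silently dropped the cents.
        return round(float(s), 2)

    def total(amounts):
        # Bug 2 -- still present: the loop skips the first element, so the
        # sum is still wrong even after bug 1 is fixed.
        result = 0.0
        for a in amounts[1:]:  # off-by-one; should iterate over all of amounts
            result += parse_amount(a)
        return result

    print(total(["1.50", "2.25", "3.10"]))  # expected 6.85, still prints 5.35

Either bug alone would explain a wrong total, so "I fixed a problem" and "I fixed the problem" stay different claims until the output is re-verified.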


It's also called inattentional blindness.

https://en.wikipedia.org/wiki/Inattentional_blindness



