Hacker News

> I think it still remains to be seen what a 1T+ parameter transformer trained specifically for radiology will do

Does image processing for something like this actually scale with parameter count?

It makes sense that language modeling keeps scaling, since its output space is huge. Even image-generation models scale, because the problem space is so large.

But here there are only so many pixels per image, and they are far more uniform. You likely can't get anywhere near 1T images to train on, so wouldn't the model just overfit and effectively memorize every image it has ever seen?
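The overfitting worry is easy to illustrate with a toy curve-fitting sketch (all numbers here are illustrative, nothing radiology-scale): give a model as many parameters as training points and it can drive training error to zero by memorization while being badly wrong between the samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 10 noisy samples of a simple underlying function.
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, 10)

# "Over-parameterized" model: a degree-9 polynomial has 10 coefficients,
# enough to pass through all 10 training points, i.e. memorize them.
coeffs = np.polyfit(x_train, y_train, deg=9)

train_err = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

# Held-out points between the samples expose the memorization.
x_test = np.linspace(0.05, 0.95, 9)
test_err = np.max(np.abs(np.polyval(coeffs, x_test) - np.sin(2 * np.pi * x_test)))

print(f"max train error: {train_err:.1e}")  # essentially zero
print(f"max test error:  {test_err:.2f}")   # typically far larger
```

The fitted polynomial interpolates the noise exactly, which is the 1D analogue of a giant model memorizing its training scans instead of learning anything transferable.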




You're right, but I meant it more as a complete radiologist: one that reads the scans and can put them into medical context.


Unfortunately, VLMs don’t seem to work here. Even the best ones (e.g., Gemini 2.5 Pro) hallucinate to a laughable degree. I’ve seen them invent large-scale features in images, mistakes absolutely no human would ever make.

What does work is narrow AI trained to do very specific tasks (such as medical image segmentation). But I don’t think that type of AI will benefit much from massive scaling.
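For a sense of how narrow such a task-specific model can be, here's a toy sketch (synthetic data, everything illustrative, not resembling any real medical pipeline): a per-pixel logistic regression that learns to segment bright blobs from intensity alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "scans": 16x16 images where a bright 4x4 blob stands in for
# the structure to segment. This is a sketch of a narrow, task-specific
# model, not a real medical pipeline.
def make_scan():
    img = rng.normal(0.2, 0.05, (16, 16))
    r, c = rng.integers(4, 12, size=2)
    mask = np.zeros((16, 16), dtype=bool)
    mask[r - 2:r + 2, c - 2:c + 2] = True
    img[mask] += 0.6
    return img, mask

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Per-pixel logistic regression on a single feature (intensity):
# about as narrow as a segmentation model can get.
w, b, lr = 0.0, 0.0, 1.0
for _ in range(2000):
    img, mask = make_scan()
    x, y = img.ravel(), mask.ravel().astype(float)
    p = sigmoid(w * x + b)
    w -= lr * np.mean((p - y) * x)  # gradient of the logistic loss
    b -= lr * np.mean(p - y)

# Evaluate on a fresh scan with intersection-over-union.
img, mask = make_scan()
pred = sigmoid(w * img + b) > 0.5
iou = (pred & mask).sum() / (pred | mask).sum()
print(f"IoU on a held-out scan: {iou:.2f}")
```

Two trainable parameters are enough here because the task is tiny and fixed, which is exactly why this kind of model has little to gain from massive scaling.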


If the meat machine can learn it, then the silicon machine can learn it.



