
I think that’s not really how super resolution works; it’s closer to how diffusion models hallucinate details inspired by their training set.

I roughly think of it like this: a lot of high-res moons get lossily compressed into the model weights until they’re basically distilled into some abstract sense of “moon image”-ness, “indexed” by blurry image patches. Running the network then “unzips” some of that detail over matching low-resolution patches.

(Using quotes to denote terms I am heavily abusing in this loose analogy)

Edit: Importantly, though, I don’t think this is much different from what you’re describing when it comes to potentially misleading customers about the optics on their device, because it is still inserting detail that was never captured by the camera sensor.
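
Here's a rough sketch of the kind of thing I mean, as a toy PyTorch upsampler rather than anything Samsung actually ships; the class name, layer sizes, and scale factor are all made up for illustration:

    # Toy learned super-resolution model. The point is that the upsampling
    # filters are fixed at training time, so any "new" high-frequency detail
    # is recalled from the training set via the weights, not measured by the
    # sensor. Illustration only, not Samsung's pipeline.
    import torch
    import torch.nn as nn

    class TinySR(nn.Module):
        def __init__(self, scale=4):
            super().__init__()
            # Feature extractor: maps blurry patches to a latent "what kind
            # of patch is this" code -- the "index" in the analogy above.
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            )
            # Pixel-shuffle upsampler: the learned filters here hold the
            # "compressed" detail that gets "unzipped" onto matching
            # low-res patches.
            self.upsample = nn.Sequential(
                nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),
            )

        def forward(self, low_res):
            return self.upsample(self.features(low_res))

    model = TinySR()
    blurry_moon = torch.rand(1, 3, 64, 64)   # stand-in for the sensor crop
    enhanced = model(blurry_moon)            # 1 x 3 x 256 x 256

An untrained model like this just smears pixels around; moon-specific craters only show up once the weights have been fit to many sharp moon images, which is the whole point.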




Super Resolution is only part of the process. The article says they then apply "Scene Optimizer’s deep-learning-based AI detail enhancement engine".

If the model and its weights contain detail not in the photo being taken, then it's tantamount to having high res images of the moon stored on camera and composited into the image. And if they don't, then it's not the moon being displayed.

Not that it's necessarily bad, but it could be if it fools someone into thinking they're buying superior electro-optics. Apparently it was enough of a concern to warrant the line "Samsung continues to improve Scene Optimizer to reduce any potential confusion that may occur between the act of taking a picture of the real moon and an image of the moon".
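
To make the "tantamount to" concrete, the naive stored-image version would look something like this (filenames, the bounding box, and the blend weight are all invented for illustration):

    # Naive "composite a stored hi-res moon into the frame" version of the
    # same effect. Nothing here is Samsung's code; it's just the equivalence
    # being argued above.
    import numpy as np
    from PIL import Image

    captured = np.array(Image.open("phone_shot.jpg").convert("RGB"),
                        dtype=np.float32)                # hypothetical sensor image
    reference = Image.open("stored_moon.png").convert("RGB")  # hypothetical stored hi-res moon

    # Pretend the moon has already been located in the frame (coords made up).
    x, y, size = 1200, 400, 256
    patch = captured[y:y + size, x:x + size]

    # Resize the stored reference to the detected moon and blend it in.
    ref = np.array(reference.resize((size, size)), dtype=np.float32)
    alpha = 0.7  # how much stored detail overrides what the sensor saw
    captured[y:y + size, x:x + size] = (1 - alpha) * patch + alpha * ref

    Image.fromarray(captured.astype(np.uint8)).save("enhanced.jpg")

A network that has memorized the same detail in its weights ends up doing essentially this substitution, just without a literal moon.png sitting on disk.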


I think it's actually worse than compositing in a high-resolution moon image. With AI enhancement, the details will look believable but may be completely inaccurate.


> If the model and its weights contain detail not in the photo being taken, then it's tantamount to having high res images of the moon stored on camera and composited into the image.

This is what is happening. I agree it’s tricking people into thinking it’s all optics and that’s kinda bad.


The "it finds an image in its database" understanding of diffusion models doesn't work for SD because it can interpolate between prompts, but since there is only one moon, and it doesn't rotate, and their system doesn't work when the moon is partially obscured, there is really no need to describe it as anything more complex than that.



