Where does one draw the line between "improving" an image and materially changing an image?
If this was not a moon-enhancer, but just a sharpener – which we have used on images for decades – would there be any fuss? Probably not. If this was an AI model trained to do better sharpening than the algorithms we've used historically, would there be any fuss? Maybe, but probably less than with this.
Short of literally superimposing fixed images of the moon on top of people's photos, which this is not doing, isn't this just a natural progression of the same sorts of image enhancements we've been doing previously?
Perhaps the issue is in the ML optimisation function? An image sharpener is optimising for a supposedly generic change, whereas this model is optimising for a particular subject that carries semantic meaning for humans. Similarly, an enhancer that improves contrast and sharpening on human faces might be fine, but a model that uses your other photos of your friends and family to improve their faces in new photos may not be, because it has the potential to change meaning if it gets it wrong?
> Short of literally superimposing fixed images of the moon on top of people's photos, which this is not doing,
It is 95% doing exactly that: taking your blurry moon, putting what it knows about the real moon on top of it, and doing an incredibly precise Photoshop job to merge them together. It's using real-world photos in a hardcoded database to substitute information directly into your photos, just in a very sophisticated way.
It's completely gross and I would want no part of this on my camera.
This is already a question in astronomy. At least in astrophotography (think Instagram accounts or computer backgrounds), there are virtually no images that don't involve capturing many individual frames, programmatically combining the best x%, and enhancing the result (increasing saturation, sharpness, etc.), and sometimes color is artificially added in.
Seems like what's missed in this is that a lot of people don't understand that "regular" pictures of the moon from astrophotographers/NASA are actually big composites of multiple images that have gone through pretty intense processing - next to none of it is done with single-frame data, and the fact that phones can mimic this at any level is pretty neat imo.
The difference is that these processes, i.e. stacking multiple images, are meant to extract as much signal as possible from a noisy set, and if they bring in extra signal, that is declared in the sources.
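To make that concrete, here's a minimal sketch of the kind of "keep the best x% and stack" processing being described, assuming a list of already-aligned grayscale frames as NumPy arrays. The frame format and the Laplacian-variance sharpness metric are illustrative assumptions, not any particular tool's pipeline:

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    # Variance of a discrete Laplacian is a common proxy for frame sharpness.
    lap = (
        -4 * frame
        + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
        + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
    )
    return float(lap.var())

def stack_best(frames: list[np.ndarray], keep_fraction: float = 0.2) -> np.ndarray:
    # Score every frame, keep the sharpest x%, then average them to suppress
    # noise. All of the resulting detail comes from the captured data itself.
    scored = sorted(frames, key=sharpness, reverse=True)
    keep = max(1, int(len(scored) * keep_fraction))
    return np.mean(scored[:keep], axis=0)
```

The point of the sketch is that stacking only pools signal that was actually recorded across the frames; it doesn't import detail from outside the capture.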
It’s worth pointing out that there can be a distinction between the type of information in signals and the type of information in filters. I think people feel disturbed because they think the filters here are overfitted, therefore are just parroting out memorized patterns of the moon. But if it were a filter that generalized to everything, people would just say “wow the camera sensor is amazing”.
The irony is that the backlash here will just guarantee that they put more effort into making this generalize better, so it's received as less gimmicky. Whether that's just putting an enhancement neural network in optical form at the CCD level or something else is anyone's guess. People say they don't want detail that's not there, but to some extent the amount of detail is always being infused with assumptions and heuristics at every level of the imaging pipeline.
Sharpening algorithms might be statically defined rather than a bunch of weights, but they take a blurred image and create new data in it through approximations and heuristics defined in the algorithm.
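For example, a classic unsharp mask "creates" apparent detail purely from the captured pixels plus a fixed heuristic. A minimal sketch, assuming a 2-D float NumPy array and SciPy's Gaussian blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img: np.ndarray, sigma: float = 2.0, amount: float = 1.0) -> np.ndarray:
    # Sharpen by boosting the difference between the image and a blurred
    # copy of itself; no external data or learned prior is involved.
    blurred = gaussian_filter(img, sigma=sigma)
    return img + amount * (img - blurred)
```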
I agree that it feels like there should be a difference, but I can't pin down what that actually is.
> create new data in it through approximations and heuristics defined in the algorithm
I guess the difference is where the algorithm gets its input data from: just the sensor data, or does it also draw from a neural network that memorized a bunch of images / a data store of images?
You can ask what the difference is between this and Huawei replacing the moon with a hi-res image it has in storage whenever their phone detects one – that's an "algorithm" too.
Are you using just the data provided to draw conclusions? Or do you include extra data from elsewhere to get your conclusions?
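To put that distinction in code, here's a hypothetical contrast between the two cases; the function names and the model interface are illustrative assumptions, not Samsung's actual pipeline. One path works only on the pixels that were captured, the other also consults a learned prior:

```python
import numpy as np

def enhance_sensor_only(raw: np.ndarray) -> np.ndarray:
    # Fixed heuristic applied only to the data provided: stretch contrast
    # between the 1st and 99th percentile of the captured pixels.
    lo, hi = np.percentile(raw, [1, 99])
    return np.clip((raw - lo) / (hi - lo + 1e-8), 0.0, 1.0)

def enhance_with_learned_prior(raw: np.ndarray, model) -> np.ndarray:
    # `model` stands in for a network trained on many reference photos;
    # its output mixes the sensor data with detail it memorized in training.
    return model(raw)
```

The first function can only redistribute information that was already in the frame; the second can introduce detail that never hit the sensor.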
It raises the question of why we don't apply the same scrutiny to astronomers and cosmologists, who use the same technique on a much larger scale. It's not like anyone "took a picture of a Black Hole," and yet there were hundreds of newspaper headlines suggesting that's exactly what happened. Virtually every "photo" of "space" is created through an "imaging" process, which in recent years frequently involves fairly intense processing steps using ML algorithms that are subject to errors and biases that could substantially alter the end result. But the scientific community, and especially the media, largely takes these "images" at face value, despite them having little basis in any "actual" snapshot of reality. I don't really see how what Samsung is doing is any different.
lol it looks like we posted similar things at basically the same time - I'd also add that it's noteworthy that this isn't what Huawei was doing, where they just overlaid a stock photo of the moon. What Samsung is doing is enhancing the actual data your phone's sensor is getting, which makes it much more like astrophotography processing than just some gimmick/hack.