...for this particular satellite imagery. It's a rather limited overview of color correction in general.
> On top of this, what we see with our eyes is very different from the raw data captured by a scientific instrument or digital camera.
Yes, this is why photographers who avoid postprocessing because they're "purists" who like "unenhanced, natural images" sometimes deliver unnaturally bland images. The visual system phenomena at play are high dynamic range, simultaneous contrast, chromatic adaptation, and perceptual uniformity. Correcting for these things effectively often requires a separate treatment of each (but software lets you do that quickly). The article shows a rather crude approach; there are techniques available that give better results in the same amount of time.
See for example Dan Margulis's well-regarded "Modern Photoshop Color Workflow".
Yup, I'm also a huge fan of "Photoshop LAB Color: The Canyon Conundrum and Other Adventures in the Most Powerful Colorspace" by the same author. For photos, the LAB colorspace provides a great way to get images closer to how we actually perceive them.
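To make that concrete, here's a minimal sketch (assuming scikit-image and a float RGB image with values in [0, 1]; the gain value is just an example) of the kind of edit LAB makes easy, namely stretching lightness without touching the color channels:

    import numpy as np
    from skimage import color

    def boost_contrast_in_lab(photo, gain=1.2):
        # L (lightness, 0..100) is separate from a/b (color), so we can
        # stretch contrast around middle gray without shifting any hues.
        lab = color.rgb2lab(photo)
        L = lab[..., 0]
        lab[..., 0] = np.clip((L - 50.0) * gain + 50.0, 0.0, 100.0)
        return np.clip(color.lab2rgb(lab), 0.0, 1.0)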
If you want a really good example, the northern lights are a good one. The majority of the color is something you cannot see with your eye because it's too dim for the cones to pick up. A camera doesn't have this limitation.
I don't think the aurora is a good example for the reason you cited. The idea that we only see the aurora in shades of gray is false. I've sometimes been surprised by some extra color that the camera picks up, but at least at upper latitudes the aurora's colors are bright enough to be easily distinguishable by the eye.
The tricky part with photographing auroras is getting the white balance right, as the unusual light source often confuses automatic color correction.
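One manual fix in post is to white balance off something in the frame you know is neutral, e.g. snow in the foreground. A toy sketch in Python/NumPy (assuming a float RGB array in [0, 1]; `patch` is hypothetical, whatever neutral region you picked; real raw converters are more sophisticated):

    import numpy as np

    def white_balance_from_patch(img, patch):
        # `patch` is a small region that should be neutral (e.g. snow).
        # Scale each channel so that region's average comes out gray.
        means = patch.reshape(-1, 3).mean(axis=0)
        scale = means.mean() / means
        return np.clip(img * scale, 0.0, 1.0)

    # e.g. balanced = white_balance_from_patch(img, img[2000:2100, 500:600])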
> Yes, this is why photographers who avoid postprocessing because they're "purists" who like "unenhanced, natural images" sometimes deliver unnaturally bland images. The visual system phenomena at play are high dynamic range, simultaneous contrast, chromatic adaptation, and perceptual uniformity.
I know next to nothing about photo processing, but I would assume that viewing a photo print in some arbitrary environment is not the same thing as being immersed in the scenery and lighting of the photographed scene. Perception is influenced by that very fact, so reproducing the photographer's own impression of the scene purely by means of physically correct measurements, reproduced in the form of a picture, is impossible; ergo, alterations are necessary.
I don't even understand why this needs to be said.
Anybody who has ever tried to take a photo must have seen how different the result is from the direct impression. You see your dark-skinned friend perfectly well with your own eyes, but in the picture he's reduced to a floating smile; is it really cheating to adjust the values?
It's not even "unnaturally bland images". It's just unnatural images, because nobody sees like a camera lens.
Alright, this is theoretical (as in, I've never done it), but you can use adjustment layers to perform what the article describes non-destructively (i.e. leaving the original image untouched). Then, you can stick an image of the entire color space where the background image was, and you'll basically get:
result = adjustments(color_space_image)
The result is a color LUT that can translate every color in the original image to the new color. Photoshop can load LUTs directly (I'm just not sure how to create them), and you can incorporate them into your own programs by looking up each pixel in the original color space and reading off the adjusted one.
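Since I'm rambling anyway, here's roughly what that looks like in code (Python/NumPy; `adjust` is a made-up stand-in for the adjustment-layer stack, since you obviously can't call Photoshop from here):

    import numpy as np

    def adjust(rgb):
        # Stand-in for the adjustment layers: lift midtones, warm the reds.
        x = rgb.astype(np.float32) / 255.0
        x = x ** 0.9
        x[..., 0] *= 1.05
        return np.clip(x * 255.0, 0.0, 255.0).astype(np.uint8)

    # The "image of the entire color space": one entry per 8-bit RGB triple.
    # (Real .cube LUTs use a coarser grid, e.g. 33x33x33, plus interpolation.)
    identity = np.moveaxis(np.indices((256, 256, 256), dtype=np.uint8), 0, -1)

    lut = adjust(identity)   # result = adjustments(color_space_image)

    def apply_lut(image, lut):
        # Translate every pixel by looking it up in the adjusted color space.
        return lut[image[..., 0], image[..., 1], image[..., 2]]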
This has been a content-free comment by your friendly rambler.
That would depend on the atmosphere causing a predictable and uniform distortion regardless of sun angle, weather patterns, humidity levels, etc.
I've been playing with LUTs a bit in the video realm, and find myself having to recolor everything afterwards, because the LUT only gets you 20% of the way there.
For moviemaking, the analogous process ("color grading") is starting to be done live at the time the shots are recorded, on a digital screen connected (in some cases wirelessly) to the camera. The color grading parameters are stored as metadata attached to the digital video files and can then be applied and/or tweaked later in an editing suite.
I was expecting quite a lot more; this is simple, too simple actually. I would have liked to see more on topics such as histograms (how to read them and work with them), lift-gamma-gain (or GOG), references to high-end grading systems, vectorscopes, the RGB parade, etc.
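For anyone who hasn't run into lift-gamma-gain, here's one common formulation, sketched in Python (treat it as illustrative; exact conventions vary between grading systems):

    import numpy as np

    def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
        # x: values in [0, 1]. Lift raises the shadows, gain scales the
        # highlights, gamma bends the midtones.
        y = gain * (x + lift * (1.0 - x))
        return np.clip(y, 0.0, 1.0) ** (1.0 / gamma)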