Smartphone cameras for the past ~8 years have been using multiple frames to improve noise performance, resolution and dynamic range. Sometimes as many as 128 frames are stacked, over many seconds, especially in low light. Gyro and optical flow data are also used to help align frames, and noise models decide how to blend them (and avoid ghosting when things are changing in the frame).
How exactly does one fit 128 frames of lossless data from a 12 megapixel sensor into an 18 megabyte file?
Or is ProRAW not as RAW or as lossless as the name would imply?
ProRAW is definitely not RAW; the amount of white balance correction you can do is far more limited. I think it's closer to capturing the different HDR layers to allow the dynamic range to be tweaked more in post.
ProRAW is just DNG with some additional metadata, and DNG can store actual raw sensor data, but Apple stores the semi-processed image after debayering and multi-exposure stacking. The benefit is that it's 12-bit linear data so you get a decent amount to work with, and their automatic tonemapping is stored as metadata so it can be non-destructively tweaked or disabled in post.
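If you want to poke at what's actually inside one of these files, here's a rough sketch using rawpy (a LibRaw wrapper). The file name is hypothetical, and this assumes a LibRaw build recent enough to open ProRAW DNGs; details of how the demosaiced linear data is exposed may vary.

```python
import rawpy

# Hypothetical file name; any ProRAW DNG would do.
with rawpy.imread("IMG_0001.dng") as raw:
    data = raw.raw_image_visible             # the stored sample values
    print("dtype:", data.dtype)              # typically a uint16 container
    print("white level:", raw.white_level)   # ~4095 would indicate 12-bit linear data
    print("shape:", data.shape)              # ProRAW is already demosaiced, so expect
                                             # per-pixel RGB rather than a Bayer mosaic

    # Ignore Apple's embedded tone mapping and render the linear data neutrally:
    rgb = raw.postprocess(no_auto_bright=True, gamma=(1, 1), output_bps=16)
    print("linear render:", rgb.shape, rgb.dtype)
```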
Ex-semi-professional photographer here... Yes, I think Apple is being very misleading, if not fraudulent, in saying you can shoot in "RAW" format.
"RAW files contain uncompressed and unprocessed image data, allowing photographers to capture practically every detail they see in their viewfinder. The RAW file format stores the largest amount of detail out of any raster file type, which photographers can then edit, compress, and convert into other formats. Learn more about the benefits, drawbacks, and best ways to work with a RAW image."
If Apple is giving you JPEG-XL they are compressing the file, so it is not RAW. JPEG-XL is lossless compression.
As an ex-professional photographer, you should be very aware that RAW does not mean uncompressed or lossless.
Many professional cameras offer lossy and lossless compressed RAW, and have for over a decade now, and most RAW files are processed to some degree before they're even written.
RAW hasn’t meant literally raw in a very long time.
Even the regular lossless RAW setting can sometimes be "not quite RAW" when the camera dynamically lowers the bitrate it captures at. My camera does anywhere between 12 bits and 14 bits RAW depending on the shooting settings. Many newer cameras have baked-in NR that cannot be disabled (e.g. Canon R5), even on the lossless RAW setting.
You can shoot real RAW with the iPhone hardware, just not with the system camera app. If you use something like Halide or Photon Camera, they have proper RAW options that give you a traditional noisy RAW.
“Noisy RAW”? What? It gives you a RAW file. Period. But this proves my point that Apple does not capture a RAW file; it is pretty highly processed. I want to do my editing, and I want to compress it myself if I want.
Apple does offer the ability to capture a lossless RAW file with raw sensor data, just not with their app. It’s exposed through their API for 3rd party apps to use. It’s harder to get a good result in low light or high contrast scenarios from a single exposure of such a small sensor, but many people prefer it to the processed ProRAW files.
The ProRAW format that the default app offers has a different use case: it combines the automatic advanced processing of multiple exposures into a file format that offers more flexibility for post processing than a standard photo does. With the new lossy compression option it gets more attractive for some users who are willing to trade some quality for space. I don’t see the problem in offering that option next to the lossless compression.
Lossless compression is something I’m not going to argue about, as there is literally nothing lost by doing that and many professional cameras do the same thing, though with different algorithms. Well, unless your argument is that it’s not yet compatible with many photo editors, because that’s a fair point.
> If Apple is giving you JPEG-XL they are compressing the file, so it is not RAW. JPEG-XL is lossless compression.
Are you unaware that most DNG files are compressed using Lossless JPEG? Including ones that are storing raw sensor data rather than debayered data like ProRaw?
Lossless compression is still compressed. RAW photos are by definition not compressed. I do not care if you take out redundant bits, you are still taking them out.
"Lossless compression uses an algorithm to shrink the image without losing any IMPORTANT data." and the algorithm decides what is important.
>I do not care if you take out redundant bits, you are still taking them out.
Why does it matter? Why does that make Apple deceptive? Lossless compression is as good as no compression, the only difference is the decompression time.
You could store the uncompressed RAW file on a filesystem with built in (lossless) compression, or you could store the losslessly compressed RAW file on a filesystem with no compression. Once you have opened the image in your editor or photo viewer, the bits in that application's memory that you are editing or viewing are identical.
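A tiny illustration of that point in Python, using zlib on a synthetic stand-in for raw sensor bytes (the data here is made up; any lossless codec behaves the same way):

```python
import zlib
import numpy as np

# Synthetic stand-in for one channel of raw sensor data: a smooth 12-bit
# gradient, which is highly compressible. Real raw data is noisier and
# compresses less, but the roundtrip guarantee is the same.
sensor = np.linspace(0, 4095, 12_000_000).astype(np.uint16).tobytes()

packed = zlib.compress(sensor, level=6)   # lossless compression
restored = zlib.decompress(packed)        # and back again

assert restored == sensor                 # bit-for-bit identical
print(f"{len(sensor)} -> {len(packed)} bytes")
```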
Moreover RAW does not mean 'uncompressed'. It doesn't even mean losslessly compressed. Often RAWs use lossy compression. Both Canon and Nikon RAWs can.
I am confused. I assume you use a camera that does not compress raw files. Which vendor's camera is that? Nikon compresses raws, Canon compresses raws, Sigma compresses raws, Sony compresses and processes (see "star eater") raws, most cameras process known defective pixels and subtract a dark frame from long exposures before saving the raw...
Is it some kind of industrial camera you are talking about maybe?
You just said yourself that raw files cannot be compressed... now you say that Nikon has your permission to compress raw files? What is the point you are trying to make?
(Also, try reading the thread you linked and reconcile it somehow with your previous claim that lossless compression of raw files is unacceptable)
There is no such definition. Most cameras’ raw files nowadays are losslessly compressed. The compression being lossless means that the bits that were taken out can be reconstructed identically.
It seems you might be confused as to what lossless compression means?
No, lossless compression uses an algorithm to shrink a file without any loss. It can be restored to the same bitstream.
What you quoted is a confusing phrase: "visually lossless" which is in fact a lossy compression. It's like People's Republic which is neither a republic nor is it the People's, it's just a dictatorship.
Finally, JPEG-XL supports both lossy and lossless compression.
If it's lossless then there's zero loss and it can be returned to the 100% original file. However, if the algorithm decides what's important, then it's by definition not lossless. Wouldn't be surprised to get something like that from Apple, they love to twist words into something they're not for marketing reasons. Like when they say that the non-pro iPhone 15 has a 2x telephoto while it does not have a telephoto lens at all, it's just a digital zoom which all other phones have.
Maybe it’s just one frame per file? Also, you could probably achieve some pretty good compression with multiple frames because the frames are probably very correlated, right?
So if it's one Raw sensor frame, it will be noisy, and you won't get any of the computational photography benefits. It will end up looking like a photo from a 10 year old phone.
You will get a little compression storing multiple frames, but not much, because the photon shot noise is independent for each frame - and in lossless images, the noise (which does not compress according to information theory) often has more entropy than the ideal image (which compresses well).
Or you could store one post-processed frame, after you have combined 128 frames and done the computational photography stuff. But at that point, it isn't exactly raw data anymore.
> noise (which does not compress according to information theory)
That's true in theory/isolation, but in practice we know the noise has much lower maximum amplitude than the original image. I.e. if you take the difference between two frames, almost all of it will require fewer bits per pixel than the signal. (And that allows compression in practice.)
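Here's a quick synthetic sketch of both points, with made-up noise numbers rather than anything measured from a real sensor: the clean scene compresses far better than a noisy frame, and the frame-to-frame difference only spans a narrow band around zero.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scene": a smooth 12-bit gradient, plus independent noise
# for each captured frame (illustrative numbers only).
scene = np.round(np.linspace(0, 4000, 1024 * 1024)).astype(np.int32)
frame_a = np.clip(scene + rng.normal(0, 20, scene.shape), 0, 4095).astype(np.int32)
frame_b = np.clip(scene + rng.normal(0, 20, scene.shape), 0, 4095).astype(np.int32)

def packed_size(arr):
    return len(zlib.compress(arr.astype(np.uint16).tobytes(), 6))

print("clean scene:", packed_size(scene))        # compresses very well
print("one noisy frame:", packed_size(frame_a))  # the noise resists compression

# The inter-frame difference contains only noise, so its amplitude (and the
# bits needed to store it) is much smaller than the signal's full range.
diff = frame_a - frame_b
print("difference range:", diff.min(), "to", diff.max())  # narrow band around zero
print("signal range:", scene.min(), "to", scene.max())    # 0 to 4000
```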
"So if it's one Raw sensor frame, it will be noisy,"
That is not why noise is created in digital photographs. And noise or grain in a photograph is not always a negative.
"Noise in photography is the arbitrary alteration of brightness and color in an image. The onset of this random variation generates what is called “noise” or “grain”, which is basically formed by irregular pixels misrepresenting the luminance and tonality of the photograph. These pixels are visible to the eye due to their large size."
A single RAW image does not have to have noise. Noise can be created by incorrect exposure and low light.
>Or you could store one post-processed frame, after you have combined 128 frames and done the computational photography stuff. But at that point, it isn't exactly raw data anymore.
Yes, and Apple should not be calling any images it captures from an iPhone "RAW".
> A signal RAW image does not have to have noise. Noise can be created by incorrect exposure and low light.
You always have noise. It's more noticeable in low light, but it never goes away. It's due to physics - there's real world noise in photons hitting the sensors and read noise in the A/D converters behind them. That's long before the image is captured into any format - RAW doesn't help you here.
There is always noise. In fact, in absolute terms, there is more of it as the amount of light increases. It just increases more slowly than the signal, so the signal-to-noise ratio increases.
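A quick synthetic illustration of the physics, using Poisson photon counts (illustrative numbers only, not any real sensor): absolute shot noise grows with the light level while SNR still improves, and averaging N frames buys roughly a sqrt(N) improvement in SNR, which is exactly why phones stack so many short exposures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Photon arrival is Poisson: for a mean of `photons` per pixel, the
# standard deviation (shot noise) is about sqrt(photons).
for photons in (10, 100, 10_000):
    counts = rng.poisson(photons, size=1_000_000)
    noise = counts.std()
    print(f"mean {photons:>6}: noise {noise:7.1f}, SNR {photons / noise:6.1f}")

# Averaging N independent frames improves SNR by roughly sqrt(N).
n_frames = 16
single = rng.poisson(100, size=1_000_000)
stack = rng.poisson(100, size=(n_frames, 1_000_000)).mean(axis=0)
print("single frame SNR:", 100 / single.std())
print(f"{n_frames}-frame stack SNR:", 100 / stack.std())
```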
> Are there any cameras out there that output real raw data from the senors?
I suspect not, because the data output rate from a high end sensor can easily be 32 Gbps, and I don't think any phones have fast enough flash storage to write data at 32 Gbps.
You have to process the data in mostly-realtime or lose the data. Storing it isn't an option.
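Back-of-the-envelope arithmetic, with assumed sensor numbers rather than any particular phone's spec sheet:

```python
# Rough readout-rate estimate; the sensor figures below are assumptions,
# not a specific phone's specification.
megapixels = 48e6        # pixels per frame
bits_per_sample = 14     # raw ADC depth
frames_per_second = 30   # sustained readout for stacking / zero shutter lag

bits_per_second = megapixels * bits_per_sample * frames_per_second
print(f"{bits_per_second / 1e9:.1f} Gbps")   # ~20 Gbps before any overhead
```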
Full-sized cameras do use a buffer to deal with burst activity: take a series of shots, process, save to slower storage. Also, modern cameras are starting to switch to CFexpress/SD Express (which are NVMe-based).
> Version 8.0 was announced on 19 May 2020, with support for two PCIe lanes with an additional row of contacts and PCIe 4.0 transfer rates, for a maximum bandwidth of 3,938 MB/s
Full-sized DSLR/mirrorless cameras do that. The sensor is HUGE, and the optics are very fine and huge as well. Due to how the universe works, you can't get similar results with a tiny sensor and tiny optics.
Based on the article [1], ProRAW is halfway between plain RAW output from the sensor and a baked-in JPEG. It uses the computed multiple-frame result, but stores more data, allowing you to recover details from shadows and do the other things you typically use RAW for. Debayering is performed (nobody cares about that anyway) and multiple frames are combined, but a higher bit depth is preserved (12-bit instead of 8-bit), and it also contains a separate "tone map" which describes HDR gains (for each pixel?). Disclaimer: zero experience with ProRAW, just glanced at that article, but I used RAW on a DSLR in the past.
Kinda sidebar here, apologies, but a question for any DSLR/Mirrorless owners out there:
I've always been frustrated by the reach of phone cameras and finally got a mirrorless and a 100-400 lens. It is an APS-C, so the equivalent max focal length is 640mm, which has been great for wildlife or some long views out on a hike. The sensor is 32.5MP, which knocks the socks off my old iPhone 12 Pro; however, this new 16 (or maybe it started with the 15?) is 48MP.
Now I know it's unlikely we'll see a phone camera with that reach, but what about landscape photography? Would a 16 Pro on a tripod be able to compete with real camera gear? I assume you can focus stack or bracket with ProRAW? This hobby is expensive and I'm just trying to decide if I invest in a lens (my only other one is the kit lens) for landscape.
Final output would be for a 4K TV (the Samsung Frame is nice for that) or possibly print.
No. Tiny sensor, can't use grad NDs, wide angle camera even worse, relies on intense computational methods for dynamic range that will destroy the specific light and time character of landscapes.
Making prints, especially large prints, requires high technical quality images that phones can't provide.
Lenses are definitely a factor in “quality” of photos, so some situations are impossible/hard to recreate with mobile phones.
What mobile sensors will never achieve is fast shots in the dark. It’s physically not possible. While “night modes” are impressive, even with long exposures you will not get the same quality as an actual DSLR with a tripod.
Megapixels aren’t important unless you crop, zoom digitally or print.
With real camera gear that costs as much as the phone? Depends; in many situations it will probably beat a real camera, because of all the computational stuff. Especially if the output is 8Mpix.
Once shooting conditions get tricky (moving trees/water, high dynamic range with glare and shadows, night sky etc), it of course will be possible to select camera gear that will beat the phone.
They can, but you have to do it yourself instead of leaving it to the phone. For a specific photo it's OK, but I doubt anyone wants to sit down each evening to process the photos they took during the day.
It's not.
a) compression can be lossless.
b) RAW is not about storing literal photon ADC measurements. It always has "some" processing, as those always go through an ISP. We can obviously discuss which processing is the cutoff point and it will differ for different applications, but typically this would include things like clipping, sharpening, or denoising. And even some pro DSLRs would remove row noise or artifacts in supposedly "RAW" files!
If you can change the exposure or WB, that is the minimum practical/useful definition of a RAW.
>If you can change the exposure or WB, that is the minimum practical/useful definition of a RAW.
No. No it is not at all. Are you a photographer? I am not talking about processing before the photo is saved, I am talking about the compression of the saved file.
Are you trying to tell me that these are the same?
RAW
"A camera raw image file contains unprocessed or minimally processed data from the image sensor of either a digital camera, a motion picture film scanner, or other image scanner. Raw files are so named because they are not yet processed, and contain large amounts of potentially redundant data"
JPEG-XL
Lossless compression uses an algorithm to shrink the image without losing any IMPORTANT data.
Lossless compression is not about importance of data. Lossless is lossless, if the result of a roundtrip is not EXACTLY IDENTICAL then it is by definition not lossless but lossy.
Maybe you're confusing with "visually lossless" compression, which is a rather confusing euphemism for "lossy at sufficiently high quality".
JPEG XL can do both lossless and lossy. Lossless JPEG XL, like any other lossless image format, stores sample values exactly without losing anything. That is why it is called "lossless" — there is no loss whatsoever.
Yes, I have been an (amateur) photographer for the last 27 years, from film to DSLRs, mirrorless, and mobile. And I worked on camera ISPs: both hardware modules, saving RAW files on mobile for Google Pixel, as well as software processing of RAWs.
"Lossless Compressed means that a Raw file is compressed like an ZIP archive file without any loss of data. Once a losslessly compressed image is processed by post-processing software, the data is first decompressed, and you work with the data as if there had never been any compression at all. Lossless compression is the ideal choice, because all the data is fully preserved and yet the image takes up much less space.
Uncompressed – an uncompressed Raw file contains all the data, but without any sort of compression algorithm applied to it. Unless you do not have the Lossless Compressed option, you should always avoid selecting the Uncompressed option, as it results in huge image sizes."
Why make the distinction if there is no difference?
Apple is COMPRESSING the image. Period. RAW photos can be compressed, but if they are, then they are "RAW Compressed" files, not "RAW" files. Apple is not saying you are shooting RAW Compressed, it says you are shooting ProRAW photos, which is slick marketing because everyone thinks they are shooting RAW photos, but ProRAW is not RAW. The iPhone 12 gave you a choice to shoot RAW or ProRAW, but my iPhone 13 Pro Max only allows the ProRAW option. I have no option to avoid Apple processing my photos anymore.
It is semantics but words matter. If something is off with the compression algorithm or the processing how would you know?
More, if the difference did not matter, why does Sony go out of the way to explain the difference?
If a computer compresses and expands the image using an algorithm you are not getting back the same image. Period. I do not care if you perceive it to be the same, it is not the same.
> Why make the distinction if there is no difference?
There is a difference, which is that the compressed lossless version is smaller and requires some amount of processing time to actually be compressed or uncompressed. But there is zero difference in the raw camera data. After decompression, it is identical.
> If a computer compresses and expands the image using an algorithm you are not getting back the same image. Period. I do not care if you perceive it to be the same, it is not the same.
It is the same. You can check each and every bit one by one, and they will all be identical.
No, but it’s also a painting instead of a digital file, so different considerations apply (maybe the copy wouldn’t be strictly identical, maybe the value is affected by “knowing that Van Gogh is the one who applied the paint to the canvas” or by the fact that only one such copy exist), and this is therefore a false analogy.
If you copy the number written on a piece of paper to another piece of paper, is it the same number? Yes, it is, and a digital photograph is defined by the numbers that make it up. Once you have two identical copies of a file, what difference does it make which one you read the numbers from?
Or are you arguing that when the camera writes those numbers to the raw file, it’s already a different image than was read from the sensor? After all, they were in volatile memory before a copy was written to the SD card.
Regardless of how people are feeling about Apple's products, any news is better than no news, so I'm sure they're not too hurt by the media having mixed reviews. They operate their own way regardless of the media. I'm sure that if demand for the products slowed down they might stop producing yearly incremental changes, but enough people are out there upgrading from older models (I have an 11 Pro and this year's 16 looks very tempting to me, for example). Perspective.