I'm not very familiar with telecom or radio protocols, but I think the extra information in an LTE packet (the parity bits plus redundant data bits) might be critical to making that sort of error correction work. The raw PCM coming out of an optical drive doesn't provide that kind of metadata, so at best I can count how many values appear at each position and guess based on that. If the values after 20 rips look like this:
CD1 position 0xFFFF
A0: XXXXXXXXXXXXXXX
A1: XXX
A2: XX
CD2 position 0xFFFF
A0: XX
A1: XXX
A2: XXXXXXXXXXXXXXX
then I'm not sure how any sort of combining could usefully recover the original value.
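(For reference, the counting itself is easy enough; here's a rough Python sketch of what I mean, with all names made up, assuming each rip is an equal-length byte string. It takes the plurality value at each position, which only helps when the correct value actually wins the count; it can't resolve the split shown above.)

```python
from collections import Counter

def majority_vote(rips):
    """Combine several rips of the same track by picking, at each byte
    position, the value that appears most often across the rips.

    `rips` is a list of equal-length byte strings, one per rip.
    Ties are broken arbitrarily by Counter.most_common()."""
    combined = bytearray()
    for position in range(len(rips[0])):
        counts = Counter(rip[position] for rip in rips)
        value, _count = counts.most_common(1)[0]
        combined.append(value)
    return bytes(combined)

# e.g. with 20 rips of the same disc: majority_vote([rip1, rip2, ...])
```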
You have to look at the effect of the noise on the statistics of the bits, since those are what's physically changing, and not the PCM output after all the error correction layers.
> You have to look at the effect of the noise on the statistics of the bits,
Yes. Soft combining (in systems designed for it) isn't even done on bits; it's done on the actual analog values of the signal in question (well, a digitised version of the analog value, sometimes termed a "log likelihood ratio") -- before any quantisation down to hard 0/1 bits happens.
This isn't to say that you can't apply the techniques here (where stuff obviously gets quantised, unless you can find a way to get the raw signal from the photodiodes), but you need access to bits with the least amount of error correction involved, or a way to infer the statistics of the original raw bits from what the decoding machinery outputs.
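To make that concrete, here's a toy sketch of soft combining (assuming BPSK over additive white Gaussian noise; nothing here is CD- or LTE-specific, and all names are mine): each reception yields an LLR per bit, LLRs from independent receptions simply add, and the hard decision happens only on the sum.

```python
import numpy as np

def llr_bpsk(received, noise_var):
    """Per-bit log likelihood ratios for BPSK (+1 = bit 0, -1 = bit 1)
    over AWGN: LLR = 2*y/sigma^2.  Positive LLR favours bit 0."""
    return 2.0 * received / noise_var

def soft_combine(llr_sets):
    """Chase-style soft combining: LLRs from independent receptions of
    the same bits add; hard-decide only after combining."""
    combined = np.sum(llr_sets, axis=0)
    return (combined < 0).astype(int)  # sign of the sum gives the bit

# Demo: a single noisy reception gets some bits wrong; summing the soft
# values from four receptions fixes most of them.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 1000)
tx = 1.0 - 2.0 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1
noise_var = 2.0
rx = [tx + rng.normal(0.0, np.sqrt(noise_var), bits.size) for _ in range(4)]
llrs = np.array([llr_bpsk(y, noise_var) for y in rx])

print("errors, single reception:", np.sum((llrs[0] < 0).astype(int) != bits))
print("errors, four combined:   ", np.sum(soft_combine(llrs) != bits))
```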
That assumes the noise is uncorrelated, though (which the noise on a camera sensor generally would be). If the noise is correlated, then averaging doesn't help.
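A quick numpy illustration of that (made-up numbers): per-frame noise shrinks roughly as the square root of the frame count under averaging, while a pattern that repeats identically in every frame survives untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((64, 64), 100.0)                 # the "true" image
pattern = rng.normal(0.0, 5.0, scene.shape)      # correlated: same every frame

frames = [scene + pattern + rng.normal(0.0, 5.0, scene.shape)  # fresh per frame
          for _ in range(25)]
avg = np.mean(frames, axis=0)

print(np.std(frames[0] - scene))  # ~7.1: both noise sources at full strength
print(np.std(avg - scene))        # ~5.1: random part reduced ~5x, pattern intact
```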
There is a lot of noise that averaging doesn't remove, i.e. the base noise of the sensors. For those you usually (in amateur astrophotography) take four sets of pictures (usually about 20 each): with the lens removed and a cap on the mount; with the lens mounted but its lens cap on; with the lens mounted on the telescope but the telescope cap on; and lastly with the telescope aimed at the target but a thick cloth over the opening.
All of those compensate for the various ways light may enter the camera (lens mount, lens front mount, telescope housing, telescope lenses and mirrors) and let you subtract them out after averaging your result.
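Very roughly, and glossing over how real calibration pipelines weight and combine the different sets, the averaging-and-subtracting step looks like this (numpy sketch; frame names are placeholders):

```python
import numpy as np

def master(frames):
    """Average a stack of calibration frames (a list of 2-D arrays)
    into a single low-noise "master" frame."""
    return np.mean(frames, axis=0)

# One master per set described above (~20 frames each), e.g.:
#   master_dark = master(cloth_frames)    # aimed at target, cloth over opening
#   stacked     = master(light_frames)    # the actual exposures of the target
#   calibrated  = stacked - master_dark   # subtract after averaging
```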
I really don't think that's the same thing. Taking multiple shots of the same subject is just taking a shot with a really long shutter time, except in slices, a little bit at a time.
What OP was talking about was something akin to using the noise in a photo of one subject to reduce noise in another photo of a completely different subject but with the same camera.
In astrophotography this is quite common. You take a series of normal images. You then cover the lens and take the exact same exposure. After some processing, the noise pattern from the dark frames is subtracted from the normal images. It works quite well.
FWIW, some cameras can use that technique to help de-noise long exposure shots. The control image is a long exposure with the shutter closed, and that image (which should be completely black) is subtracted from the long exposure shots.
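The subtraction itself is about one line (numpy sketch, assuming both frames are same-size integer arrays):

```python
import numpy as np

def subtract_dark(light, dark):
    """Subtract the shutter-closed control exposure from the light
    exposure, clipping at zero so noise can't push pixels negative."""
    return np.clip(light.astype(np.int32) - dark.astype(np.int32), 0, None)
```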
It's only the same as a really long shutter time if the subject is stationary, and in the case of Earth-based astrophotography it never is, because atmospheric turbulence is always moving the image. If you take multiple separate shots you can combine only the good ones, a technique known as "lucky imaging".
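Something like this, for instance (a toy "lucky imaging" selector in numpy; the sharpness metric is just one simple choice among many):

```python
import numpy as np

def sharpness(frame):
    """Crude sharpness score: total gradient energy.  Atmospheric blur
    smears edges, so blurrier frames score lower."""
    gy, gx = np.gradient(frame.astype(float))
    return np.sum(gx**2 + gy**2)

def lucky_stack(frames, keep_fraction=0.2):
    """Average only the sharpest fraction of the frames."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]
    return np.mean([frames[i] for i in best], axis=0)
```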
No, you actually can; it's called "soft combining": https://en.wikipedia.org/wiki/Hybrid_automatic_repeat_reques... and it's a crucially important feature in high-performance air interfaces (like LTE).