> Does WebP provide a savings against jpeg for these photos?
A deceptively hard-to-answer question!
Why is it hard?
Because both JPEG and WebP (which is really just a VP8 intra frame) can represent images with a variable number of bits.
But it gets even trickier! How do you measure "savings"? Certainly you can produce two images at the same (or similar) bitrates with both WebP and JPEG, but how do you say one is better than the other? How do you lower WebP's bitrate until its "quality" measure is the same as JPEG's?
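For the bitrate-matching half, here's a minimal sketch of one way to do it, assuming Pillow is installed; the file name and the 100 kB target are hypothetical:

```python
# Binary-search an encoder's quality setting until the output is close to a
# target size, so WebP and JPEG can be compared at (roughly) equal bitrates.
import io
from PIL import Image

def encode_at_size(img, fmt, target_bytes, tol=0.02):
    """Return (quality, size, bytes) whose encoded size is near target_bytes.

    Assumes size grows monotonically with the quality setting, which is
    roughly true for both the JPEG and WebP encoders in Pillow.
    """
    lo, hi = 1, 95
    best = None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format=fmt, quality=q)
        size = buf.tell()
        best = (q, size, buf.getvalue())
        if abs(size - target_bytes) / target_bytes <= tol:
            break
        if size < target_bytes:
            lo = q + 1
        else:
            hi = q - 1
    return best

src = Image.open("photo.png").convert("RGB")  # hypothetical clean original
q_jpg, size_jpg, _ = encode_at_size(src, "JPEG", 100_000)
q_webp, size_webp, _ = encode_at_size(src, "WEBP", 100_000)
print(f"JPEG q={q_jpg}: {size_jpg} B vs WebP q={q_webp}: {size_webp} B")
```

That only gets you two files of equal size, though; deciding which one looks better is the hard part.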
Quality metrics such as PSNR, SSIM, and VMAF all exist to try to put a number on "quality". However, they all have their own flaws that let an image compression format get worse subjective quality while improving its objective score. (For example, codecs that optimize for PSNR tend to be blurrier than codecs that target SSIM; grass ends up looking like big blobs of green.)
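As a sketch of what these metrics look like in practice, assuming a recent scikit-image plus NumPy and Pillow; the file names are made up:

```python
# Score a decoded lossy image against its source with PSNR and SSIM.
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.asarray(Image.open("photo.png").convert("RGB"))          # original
deg = np.asarray(Image.open("photo_decoded.png").convert("RGB"))  # after lossy round-trip

# PSNR is just 10*log10(255^2 / MSE): pure per-pixel error, blind to
# structure, which is why chasing it tends to reward blur.
psnr = peak_signal_noise_ratio(ref, deg, data_range=255)

# SSIM compares local luminance, contrast, and structure instead.
ssim = structural_similarity(ref, deg, channel_axis=-1, data_range=255)

print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```

Two encodes can easily trade places depending on which of these scores you trust, which is exactly the problem.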
In fact, because x264 was often being beaten by other codecs on those metrics, its developers went out of their way to add "cheat" modes to the encoder! You can tell x264 to target PSNR or SSIM (`--tune psnr` / `--tune ssim`) :D. Neither is the default.
Just some fun thoughts. Subjectively, I'd say WebP and the newer AVIF or HEIF do a better job than JPEG (to my eyes). However, I could see why others might disagree.
> Subjectively, I'd say WebP and the newer AVIF or HEIF do a better job than JPEG (to my eyes). However, I could see why others might disagree.
They can do great with a clean original. But with an original that was already carefully mastered in JPEG format, with quality set to the bare minimum to get a good output... well, lossy compression is lossy, and when there isn't any margin for more loss, the results won't be good. (It's also true that since different lossy algorithms are, well, different, the interaction of two algorithms can be truly awful to look at).
Unfortunately, there are very few image editing pipelines which let one master in anything other than JPEG, PNG, or GIF.
I have a feeling it won’t be better, but for lossy-1 to lossy-2 conversion, has anybody tried using some ML algorithm to ‘recover’ the lost information from the lossy-1 image and then using the lossy-2 algorithm? It would give that algorithm an input that looks more like what it’s designed for.
Adding JPEG XL to the discussion. For still images it's about the same as AVIF: for high-compression samples AVIF has the upper hand, for high-quality samples JPEG XL feels better. Also, JPEG XL is much faster and simpler, and it allows progressive encoding.