
Interesting. Where have you seen that adoption will be swifter with JPEG XL instead of, say, AV1/AVIF?

(Speaking as someone who's seen several open, licensing-unencumbered image/video/audio formats fail to get traction with a majority of browsers).



This presentation covers why AVIF isn't a great replacement for JPEG: mostly that it's slow, complicated, and lacks a progressive mode.

https://www.slideshare.net/cloudinarymarketing/imagecon-2019...


Why would the lack of a progressive mode matter? How is a progressive mode better than a "loading..." spinner?


I'd far rather see several appropriately sized article images at only progression step 1 or 2 out of 5 or 6 than several empty boxes all showing spinners while the images load in.

Plus I'd love the ability to say "I'm on a low bandwidth, $$$ per megabyte network, stop loading any image after the first iteration until I indicate I want to see the full one" because you almost never need full-progression-loaded images. They just make things pretty. Having rudimentary images is often good enough to get the experience of the full article.

(whether that's news, a tutorial, or an educational resource: load the text and first-pass images, and as someone who understands the text I'm reading, I can decide whether or not I need the rest of that image data after everything's already typeset and presentable, instead of having the DOM constantly reflow because it's loading in more and more images)


Progressive mode is better than a loading spinner in the same way that PWAs are better than a loading spinner: by getting mostly usable content to the user in as little time as possible, you decrease perceived wait, decrease time to interactive, and increase perceived loading speed (even though time to full load might be the same or slightly longer).
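
For what it's worth, producing a multi-scan image is a one-flag affair in most encoders. A minimal sketch with Pillow (the progressive flag is real; the file names are placeholders):

    from PIL import Image

    im = Image.open("photo.png").convert("RGB")  # JPEG has no alpha channel

    # progressive=True writes multiple scans: the first scan alone already
    # decodes to a coarse but recognizable version of the whole image.
    im.save("photo_progressive.jpg", "JPEG", quality=85, progressive=True)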


Progressive photos always irritate me. The photo comes up on my screen all blurry and out of focus, and I'm disappointed that the moment I thought I had captured didn't turn out. Then, 3 seconds later, the picture changes and gets slightly better. Then I'm hopeful, but disappointed again. Then I think, "maybe it's not loaded yet", so I wait and hope. Then 2 seconds later it changes again. Is it done? Is it loaded now? Will it get better? Is my computer just dog slow? How long should I wait before I assume it's as good as it's going to get?

I know it's a small thing and doesn't really matter, but I don't like progressive photos.

Edit: This is just one context. There are plenty of other contexts where progressive is very useful.


On the other hand, it also doesn't constantly change the DOM, moving you across the page because images above what you're reading are getting swapped in and now you're looking at the paragraph above the one you were reading.

and again.

oh, and again.

and-


Fixing jumping pages doesn't require progressive image formats. All major browsers will soon derive a default aspect ratio for images from their width and height attributes, so the layout space is reserved before the image loads: https://twitter.com/jensimmons/status/1220114427690856453


The progressive mode in JPEG and JPEG XL is quite different; because the quality is so much better, your perception of it changes. Where progressive JPEGs are practically useless before they finish loading, JPEG XL provides decent quality.


I'm curious to see it. My irritation is really just when I actually care about the picture. When I'm specifically browsing my library, for instance.

When it's the background for some webpage, or some other photo I'm not focused on, the progressive quality is probably a good thing.


In the common case that you don't actually care about the picture, you decrease actual wait, not just perceived. Progressive mode lets you ignore it before it's fully loaded.


True progressive mode actually gives you something usable before it's fully loaded, to the point where "not fully loading" can be entirely acceptable. If all images load their first iteration first, you have a stable DOM that won't need to reflow in any way as each image goes through the rest of its progressive layers. And having a nice setting that says "only load up to X on slow networks" and "require a click to load past the first iteration" would be a godsend, even on gigabit connections. If I could bring the size and load time for modern webpages back down to "maybe a few tens of kB" and "immediate" rather than "2.9 MB and a second or two", I'd happily turn those options on.
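
As a sketch of the "stop after the first iteration" idea, assuming a progressive JPEG whose prefix includes at least the header and first scan: Pillow can be told to decode whatever bytes it has instead of failing, so a client that only fetched a prefix can still render a coarse preview (the byte budget here is made up):

    import io
    from PIL import Image, ImageFile

    ImageFile.LOAD_TRUNCATED_IMAGES = True  # tolerate a cut-off byte stream

    with open("photo_progressive.jpg", "rb") as f:
        prefix = f.read(20_000)  # pretend the network stopped after ~20 kB

    # Decodes only the scans present in the prefix; for a progressive JPEG
    # that is a blurry but complete picture rather than a half-missing one.
    im = Image.open(io.BytesIO(prefix))
    im.load()
    im.save("preview.png")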


> Progressive mode lets you ignore it before it's fully loaded.

Huh? It's obviously easier to ignore the spinner than to ignore a low-res image. You've seen the spinner before.


Progressive mode doesn't need JavaScript.


Did you confuse terminology? Web progressive is sort of an antonym to graphics progressive.


How so? Progressive JPEGs load in multiple passes. The first pass trades fidelity for speed, successive passes add quality over time. Seems pretty much in line with what PWA is all about.


> By getting mostly usable content in as little time as possible to the user you decrease perceived wait

See, this is the issue. Progressive images aren't "mostly usable content"; they're vague, ugly blobs.


It entirely depends on what progressive steps you define / where you stop loading.


Imagine this: you set your browser to download only the first N bytes of each image, showing you a decent preview. If you want more detail on a particular image, you tap it, and the browser requests the next N bytes (or the rest of the file, if you prefer).

And to enable this, the site only needed to create one high-resolution image.

Seems like a victory for loading speeds, for low-bandwidth, and for content creation.

I think FLIF looks incredible.
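
A hypothetical client for that scheme needs nothing exotic, just HTTP range requests; the URL and byte budget below are made up, and the server has to support Range headers:

    import requests

    url = "https://example.com/photo_progressive.jpg"
    budget = 16_384  # first N bytes: enough for a coarse preview of a progressive image

    r = requests.get(url, headers={"Range": f"bytes=0-{budget - 1}"})
    preview_bytes = r.content  # decode as in the truncated-prefix sketch above

    # On tap, fetch the remainder and append it to obtain the complete file.
    rest = requests.get(url, headers={"Range": f"bytes={budget}-"})
    full_bytes = preview_bytes + rest.content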


Agreed, but is it likely? Does any browser implement the even simpler feature "Show a placeholder for images. Tap the placeholder to load the actual image"?


Systems that render at low res can download a prefix of the file, show an image, and stop there. Many modern image formats support this for precisely this reason. If you then want higher quality for some particular image, you can download more, without having to start over with a separate image file.


The talk is by the FLIF author. One of the big marketing points for FLIF is its progressive mode. Of course every other codec will be criticized for not having one.


The theory goes that it’s better to show the user something resembling the final image than showing them a generic loading spinner.


>Interesting. Where have you seen that adoption will be swifter with JPEG XL instead of, say, AV1/AVIF?

Well, it's not finalized yet (though finalization is imminent), so the rate of adoption is pure guesswork at this stage. However, the things I deem necessary for a new image codec to become the next 'de facto' standard are:

royalty free

major improvements over the current de facto standards

Both AVIF and JPEG XL tick these boxes; however, JPEG XL has another strong feature: it offers a lossless upgrade path for existing JPEGs, with significantly improved compression as a bonus.


So you're suggesting that the advantage JPEG XL has is that it will compress existing JPEGs better than FLIF or AVIF?


Yes, it losslessly recompresses existing JPEGs into the JPEG XL format while also making the files ~20% smaller, the key point being lossless. Thus it is the 'perfect' upgrade path from JPEG, the current lossy standard, as you get better compression and no loss in quality when shifting to JPEG XL.

This being a 'killer feature' of course relies on JPEG XL being very competitive with AVIF in terms of lossy/lossless compression overall.


I'm assuming this is bidirectional? You can go back from XL to JPEG losslessly as well? If that's the case, I'm having trouble imagining a scenario where you're not correct; it'd be an utterly painless upgrade path.


Looks like yes [1]

>A lightweight lossless conversion process back to JPEG ensures compatibility with existing JPEG-only clients such as older generation phones and browsers. Thus it is easy to migrate to JPEG XL, because servers can store a single JPEG XL file to serve both JPEG and JPEG XL clients.

[1] https://jpeg.org/jpegxl/index.html


>You can go back from XL to JPEG losslessly as well

I don't think so, but I don't quite see the point unless you're thinking of using it as a way to archive JPEGs; in that case there are programs built specifically for that, like PackJPG, Lepton, etc.


JPEG XL integrates https://github.com/google/brunsli , and when used in that mode, you can go back to the original JPEG file in a bit-exact way.

There is also a mode that only preserves the JPEG image data.
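
One way to check the bit-exact claim yourself is a round trip through the libjxl reference tools; cjxl and djxl are real tools, but treat the exact invocation and defaults as assumptions to verify against your installed version:

    import hashlib
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)      # lossless JPEG transcode
    subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)  # reconstruct the JPEG

    print(sha256("photo.jpg") == sha256("roundtrip.jpg"))  # True if reconstruction is bit-exact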


How is it possible to have one-way only lossless compression?


Decompressing and recompressing a zip gives a lossless copy of the actual data, but there's no way to reconstruct the same exact zip you started with.

The same thing can be done with image data. For something like JPEG you can keep the coefficients but store them in a more compact form.

For what it's worth, JPEG XL claims that it's 'reversible', but I'm not sure if that means you get your original JPEG back byte for byte, or an equivalent JPEG back.
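
The zip analogy is easy to make concrete; in this sketch the payload survives the round trip byte for byte, while the archive bytes themselves depend on encoder settings and timestamps, so matching the original container is not guaranteed:

    import io
    import zipfile

    original = io.BytesIO()
    with zipfile.ZipFile(original, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("data.txt", b"the payload")

    # Round trip: extract the content, then build a fresh archive from it.
    with zipfile.ZipFile(io.BytesIO(original.getvalue())) as z:
        payload = z.read("data.txt")

    rezipped = io.BytesIO()
    with zipfile.ZipFile(rezipped, "w", zipfile.ZIP_DEFLATED, compresslevel=1) as z:
        z.writestr("data.txt", payload)

    print(payload == b"the payload")                   # True: the data itself is lossless
    print(original.getvalue() == rezipped.getvalue())  # container bytes may well differ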


> Decompressing and recompressing a zip gives a lossless copy of the actual data, but there's no way to reconstruct the same exact zip you started with.

I don't think getting the same JPEG back is the goal here, but getting a JPEG that decodes to the same image data.


If you can get all the pixels back exactly, getting the original JPEG back would only require a very small amount of extra data.

But it might be more hassle than it's worth to create the code and algorithms to do so.


Hi, we indeed support byte for byte reconstruction of the original JPEG codestream.


Interesting!

Btw, if memory serves right, Google Photos deliberately gives you back the same pixels but not the same JPEG codestream bits under some circumstances. That's to make it harder to exploit vulnerabilities with carefully crafted files.


Is it possible to get a JPEG back without any loss when recompressing a JPEG to another JPEG (after discarding the original, i.e. JPEG -> bitmap -> JPEG)?


If you can go both directions, you can store the more efficient JPEG XL format but still have perfectly transparent support for clients that don't support JPEG XL.

If you can't produce the exact same original JPEG, then you can still have some issues during the global upgrade process -- e.g. your webserver's database of image hashes for deduplication has to be reconstructed (a sketch of this follows below).

A relatively minor problem, to be sure, but AFAICT if JPEG XL does support this (which apparently it does), the upgrade process is really as pain-free as I could imagine. I can't really think of anything more you could ask for out of a new format: better compression and backwards+forwards compatibility.
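
To illustrate the dedup point: a byte-level index only survives the migration because reconstruction is bit-exact; if it weren't, you'd have to re-key on decoded pixels instead. A hypothetical sketch (the function names are made up; Pillow's API is real):

    import hashlib
    from PIL import Image

    def file_key(path: str) -> str:
        """Key on container bytes: changes if even one byte of the file differs."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def pixel_key(path: str) -> str:
        """Key on decoded pixels: stable across any lossless re-containering."""
        with Image.open(path) as im:
            return hashlib.sha256(im.convert("RGB").tobytes()).hexdigest()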


Sounds like it will just take an existing JPEG and reduce the size without re-encoding it, so even though the original JPEG is lossy, no additional loss is introduced, whereas a format not based on JPEG would require a re-encode pass that loses additional information.



