A connected grid needs to keep the entire network operating at the same frequency. Production and consumption both affect the frequency, so they have to be perfectly balanced. This is a hard problem when it has to be done at the scale of entire countries or continents.
Traditionally this has been managed in a basically analog way, by having huge fossil- or nuclear-powered steam turbines that spin at the desired frequency. Apparently a spinning mass is so good at stabilizing grid frequency that massive flywheels are being added to grids to take over that role from the fossil turbines.
Our old power grids make solar and wind production merely follow the existing grid frequency rather than dictate it, so at least in our grids pure solar couldn't work. I don't know if there's a solid technical reason for that, or if it's just how things have been set up.
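To put rough numbers on it (a minimal sketch using the textbook swing equation; the inertia constant and the size of the imbalance are assumptions I picked for illustration):

```python
# Rough sketch: how fast grid frequency drifts when generation and load
# don't match, using the aggregate swing equation. Numbers are illustrative.
f0 = 50.0          # nominal frequency, Hz
H = 5.0            # aggregate inertia constant, seconds (big steam turbines)
imbalance = -0.05  # sudden loss of 5% of generation, in per-unit terms

# Swing equation: df/dt = f0 * delta_P / (2 * H)
rocof = f0 * imbalance / (2 * H)
print(f"Frequency falls at {rocof:.2f} Hz/s")  # -0.25 Hz/s

# Halve the inertia (fewer spinning masses on the grid) and the same
# imbalance drags the frequency down twice as fast, leaving controllers
# and operators less time to react.
print(f"With H = {H / 2}: {f0 * imbalance / (2 * (H / 2)):.2f} Hz/s")
```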
This proposal is much simpler — it encodes slices of the image with filters and zlib block boundaries that make them naturally independent and obviously correct, and just adds a chunk saying "yeah, it's safe to use that".
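To illustrate the DEFLATE half of that (a rough Python sketch of the idea, not the actual chunk format or filter rules from the proposal): flushing the compressor at each slice boundary is what makes the slices independently decodable, and the new chunk would just record that this was done.

```python
import zlib

# Sketch: make each slice independently decompressible by doing a full flush
# (byte-aligned, dictionary reset) at every slice boundary. A real PNG encoder
# would additionally have to start each slice with a filter (None/Sub) that
# doesn't reference the previous row.
slices = [b"slice 0 " * 100, b"slice 1 " * 100, b"slice 2 " * 100]

comp = zlib.compressobj(9, zlib.DEFLATED, -15)  # raw DEFLATE, no zlib header
out, segments = b"", []
for s in slices:
    seg = comp.compress(s) + comp.flush(zlib.Z_FULL_FLUSH)
    segments.append((len(out), len(seg)))  # offsets the side-chunk could carry
    out += seg
out += comp.flush()

# A decoder that trusts the offsets can start anywhere with a fresh inflater:
off, length = segments[2]
assert zlib.decompressobj(-15).decompress(out[off:off + length]) == slices[2]
```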
The restart marker proposal just adds more configurability on top of that scheme, which IMHO isn't needed and makes implementations more fiddly. And I'm not sure why they're talking about recovering from broken gzip, when a spec-conforming encoder shouldn't generate broken gzip in the first place.
Restart markers aren't necessarily much more configurable than this proposal. Segmentation type 1 is basically identical to this proposal, except that the fixed number of scanlines in each segment is computed slightly differently. Segmentation type 0 just adds an offset array on top of that, but I can't tell if it's just to enable seeking, or if it's to allow multiple IDAT chunks within a single segment. If the latter is the case, I guess that would make it somewhat more complicated, though the author has suggested limiting the proposal to one type or the other.
Meanwhile, the error-recovery part isn't about a broken DEFLATE stream, it's about avoiding specially-crafted files that would produce one image if decoded using the restart markers, but another image if decoded without them. To prevent the usual issues with different tools yielding different results, this is made an error. But the PNG spec suggests that errors in ancillary chunks shouldn't ever be fatal:
> However, it is recommended that unexpected field values be treated as fatal errors only in critical chunks. An unexpected value in an ancillary chunk can be handled by ignoring the whole chunk as though it were an unknown chunk type. [0]
Therefore, the extension tells decoders to restart with sequential decoding in that case, instead of bailing out entirely.
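In decoder terms the intended behavior is roughly this (a control-flow sketch only; the helper callables are hypothetical stand-ins, not anything from the draft):

```python
# Sketch of the recommended decoder behavior. The three callables are
# hypothetical stand-ins, passed in just to show the control flow.
def decode_png(data, parse_hint_chunk, decode_parallel, decode_sequential):
    hint = parse_hint_chunk(data)  # the proposed ancillary chunk, if present
    if hint is not None:
        try:
            # Assumed to verify that each slice boundary lines up with a real
            # DEFLATE flush point, and to raise ValueError if it doesn't
            # (i.e. a crafted or mangled file).
            return decode_parallel(data, hint)
        except ValueError:
            # The chunk is ancillary, so don't make this fatal; just drop
            # the hint and decode the old-fashioned way.
            pass
    return decode_sequential(data)
```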
Animated AVIF is widely supported, and can represent GIFs losslessly.
BTW, Chrome vetoed the idea of supporting muted video files in `<img>` like Safari does, so we've got this silly animated AVIF that is a video format that has been turned into a still image format that has been turned back into a video format, which takes regular AV1 video data but packages it in a slightly different way, enough to break video tooling.
You'd use lossless blocks for the really simple pixel art that GIF was made for. For GIFs made from video clips, you can apply regular video compression and decimate their size.
Advancements in compression algorithms also came with advancements in decompression speed. New algorithms like tANS both compress well and have very fast implementations.
And generally smaller files decompress faster, because there's just less data to process.
But how does the ecological benefit of space savings compare with the extra power consumption from compressing and decompressing?
And will the space savings lead people to take more pictures, resulting in more power consumption from compressing and decompressing the photos?
Is this just greenwashing by Apple?
But I have now decided to take my photos off of Apple's servers, as well as to take way, way fewer photographs, if any. The climate of my near future is way more important than a photograph of my cat.
You have an invalid assumption that extra power is spent on better compression or decompression. It generally takes less energy to decompress a better-compressed file, because the decompression work is mostly proportional to the file size. Energy for compression varies greatly depending on codec and settings, but JPEG XL is among the fastest (cheapest) ones to compress.
Secondly, you have an invalid assumption that the amounts of energy spent on compression have any real-world significance. Phones can take thousands of photos when working off a tiny battery, and most of it is spent on the screen. My rough calculation is that taking over a million photos takes less energy than there is in a gallon of gas.
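For what it's worth, here's the back-of-envelope version (round numbers I'm assuming, not measurements):

```python
# Back-of-envelope check with assumed round numbers, not measurements.
battery_wh = 15.0                # a typical phone battery holds roughly 10-20 Wh
photos_per_charge = 2000         # "thousands of photos" on a charge, screen included
gasoline_wh_per_gallon = 33_700  # ~33.7 kWh of energy in a US gallon of gasoline

wh_per_photo = battery_wh / photos_per_charge             # ~0.0075 Wh per photo
photos_per_gallon = gasoline_wh_per_gallon / wh_per_photo
print(f"{photos_per_gallon:,.0f} photos per gallon of gas")  # ~4.5 million
```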
Apart from that, compression cost is generally completely ignored, because files are created only once, but viewed (decompressed) many, many times over.
Smaller files save storage space. Storage has costs in energy and hardware.
Smaller files are quicker to transfer, and transfer itself can be more energy intensive than compression. It's still small in absolute numbers.
Serving images from CDNs is not the only use case for browser JXL support. Browsers can perform client side processing of images before upload. They even host fully featured image editors like https://photopea.com. And sometimes you do want to see an original image from your camera in your browser, like say on the web version of Google Photos.
The EU charging infrastructure is pretty good now.
Tesla has no advantage here, except maybe good in-car navigation integration. Most Tesla Superchargers are still 400V v2 units, which are slower than 300kW/800V Ionity and Fastned chargers.
New features are often launched with partner companies, and then all the PMs have a real-life example to go point at to convince their conservative stakeholders to adopt the new thingy. Looking at your power usage is not an Apple thing; it's a power company thing to implement.
The way Apple product management works is that everyone is always demoing stuff up the chain and ultimately to Tim. They’re reverentially referred to as “Tim demos” internally. If Tim likes it, you get tons of resources.
The downside may be that in the pathological case you’re implementing stuff just for Tim. Another downside is you can’t really demo e.g. high-quality backwards-compatible SDKs to an exec.
The upside is you end up with a pretty coherent and un-fickle set of products. Unlike Google, Apple doesn’t have a roster of five (?) competing chat apps that it constantly changes.