you folks should really try the lenovo yoga series, especially this year's yoga slim aura edition (which has the intel lunar lake chips). The ipex-llm extensions are fairly stable and work very well (https://github.com/intel/intel-extension-for-pytorch)
and the build quality of the laptop is exactly like the macbook air - i have both.
The discussion is mostly around battery life. How's the battery life on your Yoga, how much power does it lose on standby, and what operating system do you use?
windows.
all-day battery life. holds up pretty well to the macbook air 15 at similar workloads. the macbook does get 1-1.5 hours more battery life.
The lunar lake is an insane chip.
Those writing the new Rust decoder are largely people who worked on the standard and on the original C++ implementation, + contributions from the author of jxl-oxide (who is not at Google).
Sheesh. Google isn't trying to kill jxl, they just think it's a bad fit for their product.
There is a huge difference between deciding not to do something because the benefit-vs-complexity trade-off doesn't make sense, and actively trying to kill something.
FWIW i agree with google, avif is a much better format for the web. Pathology imaging is a bit of a different use case, where jpeg-xl is a better fit than avif would be.
Other than Jon at Cloudinary, everyone involved with JXL development, from creation of the standard to the libjxl library, works at Google Research in Zurich. The Chrome team in California has zero authority over them. They've also made a lot of stuff that's in Chrome, like Lossless WebP, Brotli, WOFF, the Highway SIMD library (actually created for libjxl and later spun off).
It's more likely related to security; image formats are a huge attack surface for browsers, and they are hard to remove once added.
JPEG XL was written in C++ in a completely different part of Google, without any of the memory-safe Wuffs-style code, and the Chrome team has probably had its share of trouble with half-baked compression formats (webp)
I'd argue the thread up through the comment you are replying to is fact-free gossiping. If you're wondering whether it was an invitation to repeat the fact-free gossip: the comment doesn't read that way to me. It reads as exasperated, so exasperated they're willing to speak publicly and establish facts.
My $0.02, since the gap here on perception of the situation fascinates me:
JPEG XL as a technical project was a real nightmare, I am not surprised at all to find Mozilla is waiting for a real decoder.
If you get _any_ FAANG engineer involved in this mess a beer || truth serum, they'll have 0 idea why this has so much mindshare, modulo it sounds like something familiar (JPEG) and people invented nonsense like "Chrome want[s] to kill it" while it has the attention of an absurd number of engineers to get it into shipping shape.
(surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)
> JPEG XL as a technical project was a real nightmare
Why?
> (surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)
The fuck are you talking about? The jxl-rs library Firefox is waiting on is developed mostly by the exact same people who made libjxl, which you say sucks so much.
In any case, JXL obviously has mindshare due to the features it has as a format, not the merits of the reference decoder.
> they'll have 0 idea why this has so much mindshare
Considering the amount of storage all of these companies are likely allocating to storing jpegs + the bandwidth of it all - maybe the instant file size wins?
Disk and bandwidth costs of jpegs are almost certainly negligible in the era of streaming video. The biggest selling point is probably client-side latency from downloading the file.
We barely even have movement to webp & avif; if this were a critical issue i would expect a lot more movement on that front, since those formats already exist. From what i understand, avif gives better compression (except for lossless) and has better decoding speed than jxl anyway.
If you look at CDNs, WebP and AVIF are very popular.
> From what i understand, avif gives better compression (except for lossless) and has better decoding speed than jxl anyway.
AVIF is better at low to medium quality, and JXL is better at medium to high quality. JXL decoding speed is pretty much constant regardless of how you vary the quality parameter, but AVIF gets faster and faster to decode as you reduce the quality; it's only faster to decode than JXL for low quality images. And about half of all JPEG images on the web are high quality.
The Chrome team really dislikes the concept of high quality images on the web for some reason though; that's why they only push formats that are optimized for low quality. WebP beats JPEG at low quality, but is literally incapable of very high quality[1] and is worse than JPEG at high quality. AVIF is really good at low quality but fails to be much of an improvement at high quality. For high resolution in combination with high quality, AVIF even manages to be worse than JPEG.
[1] Except for the lossless mode which was developed by Jyrki at Google Zurich in response to Mozilla's demand that any new web image format should have good lossless support.
> AVIF is better at low to medium quality, and JXL is better at medium to high quality.
BTW, this is no longer true. With the introduction of tune IQ (Image Quality) to libaom and SVT-AV1, AVIF can be competitive with (and oftentimes beat) JXL at the medium to high quality range (up to SSIMULACRA2 85). AVIF is also better than JPEG independently of the quality parameter.
JXL is still better for lossless and very-high quality lossy though (SSIMULACRA2 >90).
> The Chrome team really dislikes the concept of high quality images on the web for some reason though; that's why they only push formats that are optimized for low quality.
It would be more accurate to say bits per pixel (BPP) rather than quality. And that is despite the Chrome team themselves showing that 80%+ of images served online are in the medium-BPP range or above, where JPEG XL excels.
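(For reference, BPP is just compressed size normalized by pixel count; a quick sketch of the metric, with made-up numbers:)

    // bits per pixel: compressed file size normalized by pixel count
    function bitsPerPixel(fileSizeBytes: number, width: number, height: number): number {
      return (fileSizeBytes * 8) / (width * height);
    }
    // e.g. a 150 KB 1920x1080 image: bitsPerPixel(150_000, 1920, 1080) ~= 0.58 BPP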
Isn't medium quality the thing to optimize for? If you are doing high quality you've already made the tradeoff that you care about quality more than latency, so the perceived benefit of a mild latency improvement is going to be lower.
What I have so far needs a lot of work and is flaky. Every day it is getting tighter and better.
Microsoft pulled the lifecycle management code out of Puppeteer and put it into Playwright with Google's copyright still at the top of several files. They both use CDP. I'm using the Chrome extension analogue for every CDP message and listener. I need a couple days to remove all the code from the Page, Frame, FrameManager, and Browser classes and methodically make a state machine with it to track lifecycle and race conditions. It is a huge task and I don't want to share it without accomplishing that.
For example, there is a system in Playwright that listens for all navigation requests in a Page's / Tab's Frames. Embedded frames can navigate to URLs, such as advertising resources, while the parent Frame is still loading; all of that needs to be tracked.
There are a lot of companies talking about building solutions using CDP without Playwright, and I'm curious how well they are going to handle the lifecycle management. Maybe if they don't intercept requests and responses it is very straightforward.
One idea I have is to just evaluate '1+1' in the frame's content script in a loop with a backoff strategy: if it returns 2, continue with code execution; if it times out, fail. That is instead of tracking hundreds of navigations across 30 different embedded frames in a page like CNN. I'm still tinkering. Stagehand calls Locator.evaluate(), which is what I'm building because I haven't implemented it yet.
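A minimal sketch of that probe as I picture it, assuming an MV3 extension with the "scripting" permission (names and retry numbers are illustrative, not the actual implementation):

    // Liveness probe: evaluate 1+1 in a frame with exponential backoff.
    // Assumes an MV3 extension with the "scripting" permission.
    async function frameIsAlive(tabId: number, frameId: number, retries = 5): Promise<boolean> {
      let delayMs = 100;
      for (let attempt = 0; attempt < retries; attempt++) {
        try {
          const [injection] = await chrome.scripting.executeScript({
            target: { tabId, frameIds: [frameId] },
            func: () => 1 + 1,
          });
          if (injection?.result === 2) return true; // frame settled, safe to continue
        } catch {
          // frame is navigating, detached, or crashed; back off and retry
        }
        await new Promise((resolve) => setTimeout(resolve, delayMs));
        delayMs *= 2; // exponential backoff
      }
      return false; // give up: treat the frame as gone
    }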
Yes, the key is we don't intercept requests and responses; that saves 60% of the headache of lifecycle management.
We do exactly what you described with a 1+1 check in a loop for every target; it pops any crashed sessions from the pool, and we don't keep any state beyond that about what tabs are alive. We really try to derive everything fresh from the browser on every call, with minimal in-memory state.
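Concretely, that per-call pruning might look something like this, reusing a probe like the one sketched above (illustrative only, not our actual code):

    // Before each call, re-probe every target and drop the ones that no
    // longer respond, so no stale lifecycle state is kept in memory.
    async function pruneDeadSessions(pool: Map<number, number>): Promise<void> {
      for (const [tabId, frameId] of pool) {
        if (!(await frameIsAlive(tabId, frameId, 1))) {
          pool.delete(tabId); // crashed or navigated away; re-derive fresh next call
        }
      }
    }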
very cool! software never gets done... so would have loved to see it (and contributed to it).
But totally respect that. Would love to see it when ur done!
for context, i contribute to an opensource mobile browser (built on top of chromium), where i'm actually building out the extension system. Chrome doesn't have extensions on mobile! would have loved to see if this works on android... with android's lifecycle on top of your own!
> so would have loved to see it (and contributed to it)
Hopefully, I can get it to a point where developers look at it and want to contribute. I'm pretty disciplined about writing clean, organized code even if it is hacking up a PoC; on the other hand, all the thousand tests that run when pressing a button in the UI were created by AI and overall are a mess.
With the code, the biggest problem is lifecycle management (tracking all the Windows, Pages, Frames, and embedded Frames); however, it is only 4 files and can be solved with a thought-out state chart. There are event listeners attached to Frames that aren't being removed under certain conditions. If I run the tests, they will work. If I start clicking links, switching tabs, etc., the extension will fail, requiring a reload.
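To give a flavor of what I mean by a state chart, something along these lines (states and transitions are illustrative, not the real code):

    // Minimal sketch: a per-frame state machine that records every listener
    // it attaches so nothing can leak when the frame detaches.
    type FrameState = "attached" | "navigating" | "loaded" | "detached";

    class FrameTracker {
      private state: FrameState = "attached";
      private cleanups: Array<() => void> = [];

      on(emitter: EventTarget, type: string, handler: EventListener): void {
        emitter.addEventListener(type, handler);
        // record the removal so the listener is always cleaned up later
        this.cleanups.push(() => emitter.removeEventListener(type, handler));
      }

      transition(next: FrameState): void {
        if (this.state === "detached") return; // terminal state
        this.state = next;
        if (next === "detached") {
          this.cleanups.forEach((dispose) => dispose());
          this.cleanups = [];
        }
      }
    }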
> would have loved to see if this works on android
It is dependent on chrome.* APIs like chrome.tabs.*. If I had to summarize Puppeteer / Playwright, they do two things: track Page / Frame lifecycle and evaluate JavaScript expressions using "Runtime.evaluate", mostly the latter because it gives access to all the DOM APIs, i.e. () => { return window.document.location }.
I don't know if Android Chrome has similar functionality or if you are able to build it. Nonetheless, if you have a way to evaluate code inside the content script world, either MAIN or ISOLATED, you might only need a limited set of features to manage and track Pages, Frames, Tabs, etc. If your interest is browser automation you might not need a lot of devtools features, or you can add them later.
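For context, the extension-world analogue of CDP's Runtime.evaluate is the chrome.debugger API; a rough sketch, assuming the "debugger" permission (error handling omitted):

    // Evaluate an expression in a tab via the chrome.debugger CDP bridge.
    async function evaluateInTab(tabId: number, expression: string): Promise<unknown> {
      const target = { tabId };
      await chrome.debugger.attach(target, "1.3"); // CDP protocol version
      try {
        const reply = (await chrome.debugger.sendCommand(target, "Runtime.evaluate", {
          expression,
          returnByValue: true,
        })) as { result?: { value?: unknown } };
        return reply.result?.value;
      } finally {
        await chrome.debugger.detach(target);
      }
    }
    // e.g. await evaluateInTab(tabId, "window.document.location.href")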
this is very cool! i contribute to an opensource mobile browser (github.com/wootzapp/wootz-browser). would love to have it work in Westworld if it makes sense for you folks.
i build an opensource mobile browser - we create ai agents (that run in the background) on the mobile browser, and build an extension framework on top so you can create these agents by publishing an extension.
we hook into the android workmanager framework and do some quirky things with tab handling to make this work. it's harder to do this on mobile than on desktop.
bunch of people are trying to do interesting things like an automatic labubu purchase agent (on popmart) :D
lots of purchase related agents
there is a proxy to check this - investor quality.
Every high quality investor - including YC - forces an options pool. The post-money SAFE created by YC accounts for an options pool ("The Post-Money Valuation Cap is post the Options and option pool existing prior to the Equity Financing").
high quality investors will, in most cases, decline a fundraise if there is no options pool, since it signals that the founders are not serious about the most valuable asset of any startup.
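Simplified illustration of why the pool comes out of the founders' side under a post-money SAFE (made-up numbers; real cap tables have more moving parts):

    // Under a post-money SAFE, investor ownership is fixed at
    // investment / post-money cap; a pool created before the priced
    // round dilutes the founders, not the SAFE holder.
    const investment = 500_000;
    const postMoneyCap = 10_000_000;
    const safeOwnership = investment / postMoneyCap;          // 5%
    const optionPool = 0.10;                                  // 10% pool, pre-financing
    const founderOwnership = 1 - safeOwnership - optionPool;  // 85%
    console.log({ safeOwnership, optionPool, founderOwnership });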
The Asus ROG Flow Z13 with 128GB unified memory and the AMD Ryzen AI Max APU would be my first non-M4 laptop pick. Surprised how under-reported this device is.
Hope lenovo ships the AMD Max in a P1-type laptop. I have an almost 5-year-old thinkpad P1 gen2 with a Core i9, 64GB RAM, 2.5TB disk, T2000 discrete GPU, and a 4K oled touch display running Linux. Something similar that runs LLMs faster would be nice. The GPU is limited to only 4GB. Also, something that does not run out of battery power in less than 2 hours.
recently posted my opensource enterprise browser on producthunt - https://www.producthunt.com/products/wootzapp-ai-enforced-en...
did decently (but not in the top 10). I got a lot of the same linkedin comments with "we even gave you some reviews for free to show we are serious". Said no to them, and that turned into retribution.
started getting negative comments (https://postimg.cc/n9tDDB0S). had to stay up all night replying to the negative comments with a link to my github showing the source :(
for some reason they all deleted themselves (or got removed). not sure.