No need to speculate about this algorithm in particular. Every programmer has had access to LLMs for a few years now (that's a huge sample size!). Some LLMs are even branded as 'Research'. Plus, they never get tired the way humans do. We should be swimming in algorithms like this.
Shipping Vulkan in production on Linux is a challenge. Chrome dealt with it for a while. Recently, with Zed ported to Vulkan, we saw firsthand the variety of half-broken platforms and user configurations.
I'd recommend that Blender not close the door on OpenGL and instead keep it as a compatibility fallback.
Another option might be to write a D3D render backend for Blender and run it on top of DXVK on Linux. That is probably the most robust and best-tested 3D rendering path on Linux thanks to Proton, on the assumption that DXVK contains tons of workarounds for Vulkan driver problems encountered in the wild by Proton.
Oh, the irony. The article calls out Silicon Valley's exaggerated attention to the success of individuals, like Zuck, as a problem. And yet the article itself frames this AI-ad direction as something invented by evil Zuck.
This isn't really about Zuck. Ultra-convincing ads will happen soon regardless of what Zuck does.
Firefox's rendering is based on WebRender, which runs on OpenGL. WebRender's internals are similar to gpui's, but with significantly more machinery to cover the landscape of CSS.
Super excited to see this being revived! We need more exploration in the Linux space: this world of hundreds of distros that barely differ beyond their desktop environments is too boring.
I can't think of any obvious reason I would miss Rust's 2018 edition, let alone the 2024 edition, when implementing linear algebra. People seemed happy enough with Fortran before I was old enough to go to school, and I don't sense this is an application where I'd want async, for example. A lot of the other edition changes are nice when writing something new but not helpful for an existing codebase. So, sure, it's the 2015 edition, but that's fine?
There's actually an advantage to using older editions: it lowers the MSRV (minimum supported Rust version). This is especially nice for libraries, while binary projects can usually just use the latest edition.
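For instance (a made-up manifest, purely to illustrate the edition/MSRV connection):

    # Hypothetical library manifest; name and versions are invented.
    [package]
    name = "linalg-lib"    # made-up crate name
    version = "0.1.0"
    edition = "2015"       # old edition: no modern compiler features required
    rust-version = "1.31"  # declared MSRV; Cargo 1.56+ checks it for users,
                           # older Cargo just warns about the unused key

A binary crate at the top of the dependency tree has no downstream users to worry about, which is why it can freely bump to the latest edition.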
In pinhole cameras, straight lines look straight. That's what a regular projection matrix gives you with rasterization.
With non-pinhole cameras, straight lines look curved, and you can't rasterize that directly. 3D Gaussian splatting runs into this, which is addressed by methods like ray tracing.
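A minimal sketch of why (Rust; the function names and the choice of an equidistant fisheye model are mine, not from the thread): a pinhole projection is a linear map plus a perspective divide, so 3D lines land on 2D lines, while a fisheye mapping bends them.

    // Compare a pinhole projection with an equidistant fisheye model.
    fn pinhole(p: [f32; 3], f: f32) -> [f32; 2] {
        // What a projection matrix + homogeneous divide computes:
        // x' = f*x/z, y' = f*y/z. Maps 3D lines to 2D lines.
        [f * p[0] / p[2], f * p[1] / p[2]]
    }

    fn fisheye_equidistant(p: [f32; 3], f: f32) -> [f32; 2] {
        // Image radius grows with the angle from the optical axis:
        // r = f * theta. No single linear matrix reproduces this,
        // hence no direct rasterization.
        let r_xy = (p[0] * p[0] + p[1] * p[1]).sqrt();
        if r_xy == 0.0 {
            return [0.0, 0.0];
        }
        let theta = r_xy.atan2(p[2]);
        [f * theta * p[0] / r_xy, f * theta * p[1] / r_xy]
    }

    fn main() {
        // Points on a straight 3D line: the pinhole images are collinear
        // (constant y), while the fisheye images drift off that line.
        for i in 0..5 {
            let p = [i as f32, 1.0, 4.0];
            println!(
                "pinhole: {:?}  fisheye: {:?}",
                pinhole(p, 1.0),
                fisheye_equidistant(p, 1.0)
            );
        }
    }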
It's very useful to train on non-pinhole cameras, because in the real world they can capture a wider field of view.
Now I'm wondering: in the era of software 2.0, where everything is figured out by AI, what are the chances AI would discover this algorithm at all?