> - easy storage management, the "rampant layer violation" of zfs we really need;
Except in zfs you have to think about whether you really want that device in that pool or that vdev. I use btrfs, slow and kinda unsafe, specifically because you just specify raid1c2/raid1c3/raid1c4 and it more or less survives c-1 dead disks (until you run out of disk space and everything goes up in flames).
> - integration of such storage to the software stack, from the OS to single packages, it's a nonsense having to "spread" archives in a taxonomy to deploy them or downloading archives to be unpacked as well for updates when we have send-able filesystems (zfs send of snapshots) and binary diff (from a snapshot "tagged version" of a fs-package to another, sent over internet).
We (kinda, for some very generous definitions of "have") have that in composefs? But even with that, I suspect you still want some semblance of sanity in your individual layers.
> Hardware Support: Nvidia users face particular challenges, and many specialized hardware configurations simply don't work reliably.
> The XLibre project continues this legacy with active community development.
Guess what? Nvidia users also "face particular challenges", and many normal hardware configurations simply don't work reliably under XLibre either. I guess Linus was right, don't use Nvidia.
The page describes it as if it were Wayland's fault that Nvidia customers suffered so much grief. It was Nvidia who insisted on basing Wayland support on EGLStreams instead of on GBM like everybody else. Even the open source nouveau driver used GBM on Nvidia graphics. But even those problems are a few years in the past at this point, after Nvidia decided to offer GBM support in their proprietary drivers. And consistent effort has been put into solving outstanding issues like screen glitches, which were eventually resolved with explicit sync.
These complaints remind me of how Linux was criticized more than a decade ago for not being compatible with proprietary applications, while FOSS applications worked just fine on proprietary platforms. While technically correct, it misses the whole point, seemingly rather intentionally.
Compression Attached Memory Module (CAMM) tries to be a medium-term solution for that, by reducing the latency and signal-integrity penalties of your average RAM socket. But at this point I can see CAMM-delivered memory being relegated to a sort of slower "CXL.mem" device.
As stated previously, the sockets reduce signal integrity, which doesn't necessarily make them "bad," but is why Framework wasn't able to use socketed RAM to maximize the potential of this CPU.
Basically, they need to use LPDDR5X memory, which isn't available in socketed form, because of signal integrity reasons.
Which means you won't see an improvement if you solder your RAM directly: mostly, I think, because your home soldering job will suffer signal integrity issues, but also because your RAM isn't LPCAMM and isn't spread across a 256-bit bus.
I believe the reason is that, at the frequencies these CPUs use to talk to RAM, the reflection coefficient[1] starts playing a big role. This means any change in impedance along the wire causes reflections of the signal.
This is also why you can't just use a dumb female-to-female HDMI coupler and expect video to work. All such devices are active: they read the stream on the input and regenerate it on the output.
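To make the reflection-coefficient point concrete, here's a minimal sketch. The impedance values are purely illustrative, not taken from any real DDR5 routing spec:

```python
# Reflection coefficient at an impedance discontinuity:
# Gamma = (Z_load - Z_0) / (Z_load + Z_0)
def reflection_coefficient(z_load: float, z0: float) -> float:
    return (z_load - z0) / (z_load + z0)

# A matched 40-ohm trace: no reflection.
print(reflection_coefficient(40.0, 40.0))  # 0.0

# A socket contact bumping impedance to 55 ohms on a 40-ohm line:
# roughly 16% of the incident wave is reflected back toward the driver,
# which at multi-GHz signalling rates eats your timing margin.
print(round(reflection_coefficient(55.0, 40.0), 3))  # 0.158
```

Every connector, via, and solder joint is such a discontinuity, which is why fewer of them (soldered LPDDR5X) wins at these speeds.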
No, that's not true. Prompt processing just needs the attention tensors in VRAM; the MLP weights aren't needed for the heavy calculations that a GPU speeds up. (After attention, you only need to pass the activations from GPU to system RAM, which is only about 40KB, so you're not very limited there.)
That's pretty small.
Even Deepseek R1 0528 685b only has like ~16GB of attention weights. Kimi K2 with 1T parameters has 6168951472 attention params, which means ~12GB.
It's pretty easy to do prompt processing for massive models like Deepseek R1, Kimi K2, or Qwen 3 235b with only a single Nvidia 3090 GPU. Just pass `--n-cpu-moe 99` to llama.cpp, or something similar.
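Quick sanity check on the Kimi K2 number above, assuming 16-bit weights:

```python
# Attention parameter count quoted above, times 2 bytes per parameter
# (bf16/fp16). Quantized weights would be smaller still.
kimi_k2_attn_params = 6_168_951_472
bytes_per_param = 2

gib = kimi_k2_attn_params * bytes_per_param / 2**30
print(f"{gib:.1f} GiB")  # ~11.5 GiB, i.e. the "~12GB" above
```

That fits comfortably in a 24GB 3090 with room left for KV cache and activations.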
> I suppose it's possible that an OS could shim the dialog boxes for file selection, open, save, etc... and then transparently provide access to only those files
Isn't this the idea behind Flatpak portals? Make your average app sandbox-compatible, except that your average bubblewrap/Flatpak sandbox sucks because it turns out the average app is shit and you often need `filesystem=host` or `filesystem=home` for it to barely work.
That kind of thing (with careful UX design) is how you escape the sandbox cycle though; if you can grant access to resources implicitly as a result of a user action, you can avoid granting applications excessive permissions from the start.
(Now, you might also want your "app store" interface to prevent/discourage installation of apps with broad permissions by default as well. There's currently little incentive for a developer not to give themselves the keys to the kingdom.)
I now know that I never know whether "a UUID" is stored or represented as a GUIDv1 or as a UUIDv4/UUIDv7.
I know it's supposed to be "just 128 bits", but I had a bunch of issues running an old Java servlets + old Java persistence + old MS SQL stack that, every now and then, when "converting" between `java.util.UUID` and MS SQL's Transact-SQL `uniqueidentifier`, decided it would be "smart" to flip the endianness of said UUID/GUID to "help me". It got to the point where the endpoints had to manually "fix" the endianness and insert/select/update/delete for both the "original" and the "fixed" versions of the identifiers to get the expected results back.
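That flip is easy to demonstrate, since Python's stdlib exposes both layouts: `bytes` is the RFC 4122 big-endian wire order, `bytes_le` is the Microsoft GUID layout with the first three fields byte-swapped (the UUID value here is just illustrative):

```python
import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")

# RFC 4122 wire order: all fields big-endian.
print(u.bytes.hex())     # 00112233445566778899aabbccddeeff

# Microsoft GUID order: time_low, time_mid, time_hi_version are
# little-endian, the last 8 bytes are unchanged. This is exactly the
# kind of flip a "helpful" driver can apply behind your back.
print(u.bytes_le.hex())  # 33221100554477668899aabbccddeeff
</antml_do_not_close>```

Same 128 bits, two different byte sequences on disk, and a round trip through a stack that disagrees about which one to use corrupts every identifier.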
(My educated guess is that it's somewhat similar to the problems that happen when your persistence stack is "too smart" and tries to "fix timezones" of the timestamps you're storing in a database for you, but does so wrong some of the time.)
They are generated with different algorithms; if you find the distinction semantically useful to operations, carry it into the type.
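A minimal sketch of carrying the distinction into the type, in Python (the names here are made up for illustration): a `NewType` plus a version check at the boundary, since the version is encoded in the UUID's bits anyway:

```python
import uuid
from typing import NewType

# Distinct alias so a v1 GUID can't silently flow where a v4 is expected.
UuidV4 = NewType("UuidV4", uuid.UUID)

def as_v4(u: uuid.UUID) -> UuidV4:
    """Validate the version bits before blessing the value."""
    if u.version != 4:
        raise ValueError(f"expected a v4 UUID, got v{u.version}")
    return UuidV4(u)

print(as_v4(uuid.uuid4()))  # fine: random v4
# as_v4(uuid.uuid1())       # raises ValueError: it's a time-based v1
```

In a statically typed language you'd use a wrapper type instead of an alias, but the idea is the same: make the generator algorithm part of the type, not tribal knowledge.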
AFAIK it's the same even on POSIX: when you create a venv, the `./bin/python` entries are symlinks, and `pyvenv.cfg` has a hardcoded absolute path to the Python interpreter the venv module was using at creation time. It really doesn't isolate the interpreter.
(This is also one of the reasons why, if you're invoking venv manually, you absolutely need to invoke it from the correct Python, as a module (`python3.13 -m venv`), to make sure you're actually picking the "correct Python" for the venv.)
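A quick stdlib-only demonstration that the venv merely records where the creating interpreter lives, rather than bundling one:

```python
import pathlib
import tempfile
import venv

# Create a throwaway venv and inspect its pyvenv.cfg: the "home" key
# is the hardcoded directory of whichever interpreter ran venv.create().
with tempfile.TemporaryDirectory() as d:
    venv.create(d, with_pip=False)
    cfg = pathlib.Path(d, "pyvenv.cfg").read_text()
    print(cfg)  # contains e.g. "home = /usr/bin" plus the version

# Delete or upgrade the interpreter at that path and the venv breaks,
# because there's nothing else behind the ./bin/python symlinks.
```

Run it with two different interpreters and you'll get two different `home` lines, which is exactly why `python3.13 -m venv` matters.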
I meant that the symlink behavior is default except on Windows (though it may just be on POSIX platforms and I think of it as “except Windows” because I don't personally encounter Python on non-Windows non-POSIX platforms.)
Do you pull the entire toolchain inside the makefile if it doesn't exist on the host? Or are you just using it as a glorified alias over the (rather arcane) `podman container run`/`docker run` invocations?
Also, how do you solve the problem of actually bootstrapping across different versions of docker/podman/make? Do you have some sort of `./makew`, like the `./mvnw` wrapper Maven uses to bootstrap itself?
You certainly can have Make pull in dependencies... But I would usually just have Make look for the dependencies and ask the user to install them.
You would want to mark the dependencies anyway: when you update your compiler, IMHO that invalidates the objects compiled with it, so the compiler is a dependency of the object.
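One way to express that in Make (a sketch; `CC` and the pattern rule are placeholders): list the compiler binary itself as a prerequisite, so a compiler upgrade (new mtime on the binary) dirties the objects:

```make
CC      := gcc
CC_PATH := $(shell command -v $(CC))

# Objects depend on the compiler binary as well as their source, so
# upgrading the compiler triggers a rebuild. $< is the first
# prerequisite, i.e. the .c file, so the recipe is unchanged.
%.o: %.c $(CC_PATH)
	$(CC) -c $< -o $@
```

The same trick works for any tool whose output format you care about (linkers, code generators), at the cost of a full rebuild on every toolchain update, which is exactly the point.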
That said, I don't distribute much software. What works for personal software and small team software may not be effective for widely distributed software that needs to build in many environments.