no-panic uses link-time shenanigans to prevent panics in the compiled binary, but this isn't 100% reliable (as the README itself explains; just pointing this out).
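For anyone who hasn't seen it, the whole interface is one attribute; a minimal sketch from memory (function name made up, and per the README it only works with optimizations enabled):

    use no_panic::no_panic;

    // If the optimizer can't prove the panic path dead, the *linker* fails
    // with an undefined-symbol error -- that's the link-time trick, and also
    // why the result depends on optimization settings.
    #[no_panic]
    fn halves(a: u32, b: u32) -> u32 {
        a / 2 + b / 2 // constant divisor, sum can't overflow: provably panic-free
    }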
The user you replied to likely means something different: the priority of an event often depends on the exact contents of the event, not just on the hardware event source. For example, say you receive a "read request completed" interrupt from a storage device. The kernel now needs to pass the data on to the process that originally requested it. To know how urgent the original request (and thus the handling of the interrupt) is, the kernel needs to check which sector was read and associate it with a process. Merely knowing that the interrupt came from a specific storage device is not sufficient.
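A toy sketch of that bookkeeping (all names made up, nothing like real kernel code):

    use std::collections::HashMap;

    type ProcessId = u32;

    // Hypothetical per-device state: who is waiting on which sector,
    // and how urgent each waiting process is.
    struct DiskState {
        pending_reads: HashMap<u64, ProcessId>,
        priority_of: HashMap<ProcessId, u8>,
    }

    impl DiskState {
        // "Read request completed": the urgency only becomes known after
        // inspecting the payload (the sector), not from the mere fact that
        // this particular device raised an interrupt.
        fn on_read_completed(&mut self, sector: u64) -> u8 {
            match self.pending_reads.remove(&sector) {
                Some(pid) => self.priority_of.get(&pid).copied().unwrap_or(0),
                None => 0, // nobody was waiting; lowest priority
            }
        }
    }

A fixed per-device hardware priority couldn't capture this, because the result of the lookup differs from one interrupt to the next.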
By the way, NMIs still exist on x86 to this day, but AFAIK they're only used for serious machine-level issues and watchdog timeouts.
Generally, any given piece of software can be implemented in hardware.
Specifically, we could attach small custom coprocessors to everything for the Linux kernel's benefit, and Linux could rely on them for any sort of multitasking.
In practice, software allows us to customize these things and upgrade them and change them without tightly coupling us to a specific kernel and hardware design.
Exactly the point. We can compile any piece of software that we want into hardware, but after that it is easier to change in software. Given the variety of unexpected ways in which hardware gets used, in practice we wound up moving some of what we expected to do in hardware back into software.
This doesn't mean that moving logic into hardware can't be a win. It often is. But we should also expect that what has tended to wind up in software will continue to do so in the future. And that includes complex decisions about the priority of interrupts.
We already have specialised hardware for register mapping (which could be done in software, by the compiler, but generally isn't) and for resolving instruction dependency graphs (which, again, could be done by a compiler). Mapping interrupts to a hardware priority level feels like the same sort of task to me.
They're probably referring to AMD Zen's speculative lifting of stack slots into physical registers (a workaround for x86's scarce architectural registers; phased out with Zen 3, though), and more generally to OoO cores with far more physical than architectural registers.
We do register allocation in compilers, yes, but that has surprisingly little bearing on the actual microarchitectural register allocation. The priority when allocating registers these days is, iirc, avoiding false dependencies, not anything else.
When it comes to interchangeable-lens cameras (DSLRs/DSLMs), as you increase the number of pixels you very quickly reach a point where you're limited by the optical performance of the lens rather than the sensor. Lots of systems offer a choice between a 24MP camera and a high-pixel-count sibling (e.g. Nikon Z6/Z7), and you'll find that the high-pixel-count model requires very good lenses to actually achieve a meaningful improvement over 24MP. For these cameras, common wisdom says to stay with 24MP apart from certain niche use cases.
In other words, I wouldn't expect an improvement in capturing actual 48MP pictures in phone cameras, apart perhaps from techniques like pixel binning down to a smaller output size.
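(In case "pixel binning" is unfamiliar: it merges neighbouring sensor pixels into one output pixel, trading resolution for noise. A toy sketch that ignores Bayer patterns and everything else a real ISP does:)

    // Toy 2x2 binning: average four sensor values into one output pixel,
    // quartering the pixel count (e.g. 48MP -> 12MP) to cut per-pixel noise.
    fn bin_2x2(img: &[Vec<u16>]) -> Vec<Vec<u16>> {
        let (h, w) = (img.len() / 2, img[0].len() / 2);
        let mut out = vec![vec![0u16; w]; h];
        for y in 0..h {
            for x in 0..w {
                let sum: u32 = [(0, 0), (0, 1), (1, 0), (1, 1)]
                    .iter()
                    .map(|&(dy, dx)| u32::from(img[2 * y + dy][2 * x + dx]))
                    .sum();
                out[y][x] = (sum / 4) as u16;
            }
        }
        out
    }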
Disclaimer: I haven't followed camera tech very closely recently, and I'm not an expert. Take my opinion with a grain of salt.
Isabelle/HOL has Quickcheck, which is precisely what you think it is, although AFAIK it only works if the free variables in the statement you'd like to prove have sufficiently simple types (think integers or lists). The more powerful alternative to Quickcheck is Nitpick, a purpose-built counterexample finder.
The workflow you'd like already exists one-to-one in Isabelle: you can simply type "nitpick" halfway through a proof and it will try to generate a counterexample in that exact context.
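Not the same tool, but for a feel of the counterexample-finding idea in executable-code land, the Rust quickcheck crate does the analogous thing for runtime properties (my own example, not anything from Isabelle):

    use quickcheck::quickcheck;

    quickcheck! {
        // Deliberately false "lemma": reversing a vector leaves it unchanged.
        // Running `cargo test` fails and reports a shrunk counterexample
        // (something minimal like [0, 1]) -- the same spirit as Nitpick.
        fn prop_rev_is_identity(xs: Vec<u32>) -> bool {
            let rev: Vec<u32> = xs.iter().rev().cloned().collect();
            rev == xs
        }
    }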
Of all the BSDs, I'm only a little familiar with OpenBSD (through its OpenSSH fame), so I'm a little surprised to hear that a (probably) somewhat related project would use XML in a system call. In other words, if an OpenSSH release introduced XML as a wire format, I'd assume it to be an April Fools' joke. [1] But I guess OpenBSD and NetBSD are less related than I thought.
FreeBSD, NetBSD, and OpenBSD are all relatively early forks of the same 386BSD project from the early 90s. Each project has different goals, and their code has diverged, though they frequently pull in changes from one another's codebases.
FreeBSD is the most general-purpose of the three and the most popular. It is also the only one of the big three that doesn't use a global kernel lock, allowing for modern symmetric multiprocessing similar to Linux.
NetBSD is aimed at being extremely portable. It's sort of the "can it run Doom?" of the OS world. Just take a look at their list of ports: https://wiki.netbsd.org/ports/
OpenBSD is aimed at being secure. How fully it realizes that goal is somewhat controversial, but regardless, security is the stated highest priority of the development team.
There's also DragonflyBSD, which was forked by Matt Dillon from FreeBSD following some personal and technical disagreements. It's since diverged pretty heavily from the rest of the BSD family. Given its very low market share in this category of already niche operating systems, it seems more like a pet project of Matt Dillon's, though I'm sure it has serious users.
When choosing a bowl for your potato salad, you need to keep some important properties of potato salad bowls in mind. Different bowls have different designs that make some better suited to storing potato salad than others. For example, a higher quality bowl might have been manufactured to a higher standard than a subpar bowl. You might also want to choose different bowls depending on the amount of potato salad you want to prepare. There are bigger bowls and smaller bowls, differing in size and thus in the amount of potato salad they can accommodate. A smaller bowl is great when you're aiming to cook for just a few people, while a larger bowl is ideal for bigger groups. Brand name bowls like the classic SALADBOWL(tm) are preferred by some potato salad fans, while others appreciate the great price offered by newer, less established brands. Keep in mind that while a cheaper bowl is more affordable, it can also break more easily, so it's not a bad idea to compare the warranty period offered by different manufacturers.
Disclaimer: Not a native speaker, I hope I got the annoying structure of those "what to look for when buying X" blog articles right...
Based on its name, I think it's used in Arch Linux packages (at least makepkg says it's generating an mtree file at some point during the build process, IIRC). However, it appears mtree (the tool) isn't packaged, so perhaps Arch only uses the mtree specification format?
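For reference, an mtree spec is just a plain-text manifest of paths plus keyword=value pairs. From memory (so treat the details as approximate), it looks roughly like:

    #mtree
    /set type=file uid=0 gid=0 mode=644
    ./usr/bin/hello time=1700000000.0 mode=755 size=16384 sha256digest=...
    ./usr/share/doc/hello type=dir

IIRC makepkg has bsdtar (libarchive) emit this format directly, which would explain why the standalone mtree tool isn't needed.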
Yeah, I think coming up with definitions of "understanding" or "reasoning" that GPT-4 and friends supposedly don't fulfill is moving the goalposts.
To continue your line of thinking: when we add salt and pepper to a dish we've cooked, are we really doing it because we have developed a thorough understanding of the human olfactory and gustatory systems [1], or because it tasted good previously when applied to similar recipes?
And when it comes to understanding basic math, perhaps the patterns are still a tad too complicated for LLMs. Maybe there are too many surprising rules that appear out of nowhere and break the learnt patterns. But children struggle with those rules as well when they first come across them. Think about division by zero: when coming across the rule for the first time, you might wonder why it exists instead of, e.g., the result being defined as infinity. The answer to that question (not just "because that's the way things work!") is not obvious at all; to be honest, I wouldn't be confident defending it rigorously.
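(The sketch I half-remember, for the curious; treat it with suitable suspicion:)

    a / 0 = b \iff b \cdot 0 = a \quad \text{(definition of division)}
    \text{but } b \cdot 0 = 0 \text{ for every } b, \text{ so:}
    \quad a \neq 0 \implies \text{no } b \text{ works at all}
    \quad a = 0 \implies \text{every } b \text{ works, so no unique answer}
    \text{and } 1/0 = \infty \text{ clashes with limits:}
    \quad \lim_{x \to 0^+} 1/x = +\infty \neq -\infty = \lim_{x \to 0^-} 1/x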
[1] I have to admit I had to look gustatory system up: it's the biological term for the system behind the sense of taste.
> ...admit I had to look gustatory system up: it's the biological term for the system behind the sense of taste.
Ah, that must be why I consume pies with gusto. Yup, Wiktionary confirms: "Borrowed from Italian gusto, from Latin gustus (“tasting”). Doublet of cost."
The antipattern described as Deref Polymorphism is not the same as using Deref with the newtype pattern in order to use the wrapped type transparently. In the latter case, the Target type of the Deref impl is always going to be perfectly clear. In the case of the antipattern described on the linked page, it is not clear at all what the Target type should be.
In short, a Deref impl for some type T signals that T represents some level of indirection, and following/dereferencing that indirection can always be done in an unsurprising and trivial manner.
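A toy example of the legitimate use (names made up): the wrapper really is just one transparent level of indirection, so the Target is unambiguous.

    use std::ops::Deref;

    // Newtype upholding an invariant, but otherwise "just a Vec".
    struct Sorted(Vec<i32>);

    impl Deref for Sorted {
        type Target = Vec<i32>; // the one obvious thing to deref to

        fn deref(&self) -> &Vec<i32> {
            &self.0
        }
    }

    fn main() {
        let s = Sorted(vec![1, 2, 3]);
        // Read-only Vec/slice methods come through transparently:
        println!("{} elements, first = {:?}", s.len(), s.first());
    }

Contrast with the antipattern, where Deref is abused to fake inheritance between two unrelated types and the "indirection" is an invention.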
Link: https://gtfs.org/documentation/realtime/reference/