Isn't this (covenants that basically have dead people in their graves reaching out to smite living people) exactly what the dead hand rule was created to prevent? This was a major part of the "defeudalization" that took place between the 17th-19th centuries in most of western Europe, as before then the nobility would entail their estates so as to keep them whole in the senior male line. It does allow for limited postmortem control, but practically not more than one human lifespan thereafter.
Is that true? The things people use native desktop applications for nowadays tend to be exactly those which aren't just neat content displays. Spreadsheets, terminals, text-editors, CAD software, compilers, video games, photo-editing software. The only things I can think of that I use as just text/image displays are the file-explorer and image/media-viewer apps, of which there are really only a handful on any given OS.
You could argue that spreadsheets and terminals are just text with extra features! I'm only half joking, though: web apps usually are more than just text and images too.
What are you talking about? That's not the issue for most people. For most people the issue is that if you don't have a job for long enough, the government will send people to throw you out on the streets to suffer and die.
I have tried various forms of non-work (including unemployment while unqualified for government aid), and by far the most mentally devastating thing I've done was take an extended sabbatical where I really did nothing but sit on my ass, play video games, watch Netflix, and scroll social media for 8 months. It took me years to get my brain sorted again.
How do you insert a branch "every 10ms" without some sort of hardware-provided interrupt?
If your code is running in a hot loop, you would have to insert that branch into the hot loop (even well-predicted branches add a few cycles, and can do things like break up decode groups), or have the hot loop bail out every once in a while to go execute the branch and code. That would mean tiling your inner hot loop, which probably adds significant overhead of its own.
Also, you say "cached memory address", but I can almost guarantee that unless you're doing that load a lot more frequently than once every 10 milliseconds, the inner loop is going to knock that address out of L1 and probably L2 by the time you get back around to it.
You put the check outside the innermost loop. Put it one or two loop levels up instead, and reason about whether it runs frequently enough to be responsive but infrequently enough to be cheap.
Also, don’t you have to hit a pthread cancellation point for pthread_cancel to take effect?
Those are way more expensive than a branch, but if you want the exact behavior, you could do “if (done) { break; } else { pthread_??? }”
> Also, don’t you have to hit a pthread cancellation point for pthread_cancel to take effect?
No, the whole point here is async cancellation - you don't test for it and you don't enter a cancellation point.
Excerpt from pthread_setcancelstate(3):
> Asynchronous cancelability
>
> Setting the cancelability type to PTHREAD_CANCEL_ASYNCHRONOUS is rarely useful. Since the thread could be canceled at any time, it cannot safely reserve resources (e.g., allocating memory with malloc(3)), acquire mutexes, semaphores, or locks, and so on. Reserving resources is unsafe because the application has no way of knowing what the state of these resources is when the thread is canceled; that is, did cancelation occur before the resources were reserved, while they were reserved, or after they were released? Furthermore, some internal data structures (e.g., the linked list of free blocks managed by the malloc(3) family of functions) may be left in an inconsistent state if cancelation occurs in the middle of the function call. Consequently, clean-up handlers cease to be useful.
>
> Functions that can be safely asynchronously canceled are called async-cancel-safe functions. POSIX.1-2001 and POSIX.1-2008 require only that pthread_cancel(3), pthread_setcancelstate(), and pthread_setcanceltype() be async-cancel-safe. In general, other library functions can't be safely called from an asynchronously cancelable thread.
>
> One of the few circumstances in which asynchronous cancelability is useful is for cancelation of a thread that is in a pure compute-bound loop.
What makes you say "the one area"? There are plenty of areas that have enough development friction / inertia such that the same principle applies. Even generally, I think the reason why people caution against reinventing the wheel isn't because it prevents innovation, but because it wastes time / incurs additional risk.
I agree with you. When I read that, my first thought was also "the one area"? Personally I think it's the complete opposite, really strongly so. For at least 10 years now, about once a week I think "I miss old desktop operating systems". Any of them: 7, Vista, XP; Snow Leopard, Leopard, Tiger. I even stopped using Ubuntu when it went from GNOME 2 to GNOME 3, and since the other options at the time were pretty bad, I ended up getting back into Macs for my home desktop. I still use all three daily, but hate all of them.
> It sounds like the kernel’s allocations may only use one tag
What about the blogpost suggested this?
" ... always-on memory safety protection for our key attack surfaces including the kernel ..."
" ... always-on memory-safety protection covering key attack surfaces — including the kernel and over 70 userland processes — built on the Enhanced Memory Tagging Extension (EMTE) and supported by secure typed allocators and tag confidentiality protections ... "
Suggests to me that the kernel allocator uses a similar tagging policy as the userspace allocators do.
The other 15/16 attempts would crash though, and a bug that unstable is not practically usable in production, both because it would be obvious to the user / send diagnostics upstream and because when you stack a few of those 15/16s together it's actually going to take quite a while to get lucky.
Typically 14/15, since one tag is normally reserved for metadata, freed data, etc. The Linux kernel reserves multiple tags for internal kernel usage, since tagging was introduced upstream as more of a hardware-accelerated debugging feature, even though it's very useful for hardening.
It's more complicated than that, so I just use 15/16 to gesture at the general idea. E.g. some strategies for ensuring adjacent tags don't collide include splitting the tag range in half and tagging from one half or the other based on the parity of an object within its slab allocation region. But even 1/7 is still solid.
Detection is 14/15ths of the battle. Forcing attackers to produce a brand new exploit chain every few weeks massively increases attack cost which could make it uneconomical except for national security targets.
Are these CEOs not "actual criminals"? Frankly, a CEO who knowingly allows his company to put poison (melamine) in the baby formula they produce -- killing several babies and hospitalizing *51,900* others -- is far more of a "criminal" than a simple mugger. Muggers can only hurt so many people, while major corporations have the capacity to cause harm on a society-wide scale.
But sometimes it IS better to think a few steps ahead, rather than building a new system from scratch every time things scale up. It's not always easy to upgrade things incrementally: just look at IPv4 vs IPv6.
> But sometimes it IS better to think a few steps ahead, rather than building a new system from scratch every time things scale up.
The problem is knowing when to do it and when not to do it.
If you're even the slightest bit unsure, err on the side of not thinking a few steps ahead because it is highly unlikely that you can see what complexities and hurdles lie in the future.
In short, it's easier to unfuck an under engineered system than an over engineered one.
The best way to think a few steps ahead is to make as much of your solution disposable as possible. I optimize for ease of replacement over performance or scalability. This means that my operating assumption is that everything I’m doing is a mistake, so it’s best to work from a position of being able to throw it out and start over. The result is that I spend a lot of time thinking about where the seams are and making them as simple as possible to cut.
I agree with thinking a few steps ahead. It is particularly useful in case of complex problems or foundational systems.
Also, maybe simplicity is sometimes achieved AFTER complexity anyway. I think the article means a solution that works now: target good enough rather than perfect. And the C2 wiki (1) has the subtitle "(if you're not sure what to do yet)". In a related C2 wiki entry (2), Ward Cunningham says: do the easiest thing that could possibly work, and then pound it into the simplest thing that could possibly work.
IME a lot of complexity is due to integration (in addition to things like scalability, availability, ease of operations, etc.) If I can keep interfaces and data exchange formats simple (independent, minimal, etc.) then I can refactor individual systems separately.
Yes sometimes. But how can you know beforehand? It’s clear in hindsight, for sure.
The most fundamental issue I have witnessed with these things is that people have a very hard time taking a balanced view.
For this specific problem, should we invest in a more robust solution which takes longer to build or should we just build a scrappy version and then scale later?
There is no right or wrong. It depends heavily on the context.
But, some people, especially developers I am afraid, only have one answer for every situation.
IPv6 is arguably a good example of what happens when you don't do the simplest thing possible. What we really needed was a bigger IP address space. What we got was a whole bunch of other crap. If we had literally expanded IPv4 by a couple of octets at the end (with compatible routing), would we be there now?
In a place with even less IPv6 adoption, probably. It's not like there weren't similar proposals discussed, and there's no need to rehash the exact same discussion again.
The problem quickly becomes "how do you route it?", and that's where we end up with something like today's IPv6. Route aggregation and PI addresses are impractical with IPv4 + extra bits.
The main changes from v4 to v6, besides the extra bits, are mostly that some unnecessary complexity was dropped, which in the end is a net positive for adoption.
It can be hard enough to fix things when some surprise happens. Unwinding complicated “future proof” things on top of that is even worse. The simpler something is, the less you hopefully have to throw away when you inevitably have to.
IPv4 vs IPv6 seems like a great example for why to keep it simple. Even given decades to learn from the success of IPv4 and almost a decade of design and refinement, IPv6 has flopped hard, not so much because of limitations of IPv4, but because IPv6 isn't backwards compatible and created excessive hardware requirements that basically require an entirely parallel IPv6 routing infrastructure to be maintained in addition to the IPv4 infrastructure, which isn't going away soon. It solved too far ahead for problems we aren't having.
As is, IPv4's simplicity got us incredibly far, and it turns out NAT and CIDR have been quite effective at alleviating address exhaustion. With some address reallocation and future protocol extensions, it's looking entirely possible that a successor was never needed.
Cf. https://en.m.wikipedia.org/wiki/Rule_against_perpetuities