OSI's session layer did little more than what TCP/UDP port numbers already do; in the OSI model you would open a connection to a machine, then use that connection to open a session to a particular application.
X.400 was a nice idea, but the ideal of having a single global directory predates security. I can understand why it never happened.
On X.509, the spec spends two chapters on attribute certificates, which I've never seen used in the wild. It's a shame; identity certificates do a terrible job at authentication.
Looking at a chart of the S&P 500 over decades, it's hard to see how anyone could make money betting that corporations are only interested in short-term results.
The S&P 500 is a list. 80+% of the companies on that list when it started are no longer on it.
Betting on business in general over long time periods tends to be a winning proposition. Betting on an individual company tends to be less of a winner in general.
Businesses do tend to fail eventually. Their business model becomes obsolete, the market for their product fades away, they strangle themselves with bureaucracy, they zig instead of zag. That isn't short-term-itis.
But you cannot grow a small company into a large one by concentrating on short term profits. The S&P 500 is composed of 500 large companies.
Imagine if you had been selling MSFT short every quarter since it IPO'd in the '80s.
It was canceled essentially overnight by Compaq higher-ups; the teams at Microsoft and Compaq learned about it when they came into the office. It was still present in the last release candidate before RTM, because Alpha was essentially the only 64-bit platform used to fix the 32-bit assumptions that had prevented a 64-bit address space in earlier NT releases.
I think this can change the semantics, though; with the preceding check you can miss the shared variable being decremented by another thread. In some cases, such as when the shared value is monotonic, this is fine, but not in the general case.
With relaxed ordering I'm not sure that's right: the ldumax would have no imposed ordering relation with the (atomic) decrement on another thread, and so could very well have operated on the old value obtained by the non-atomic load.
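For reference, a minimal C++ sketch of the pattern being debated (the names are mine; fetch_max is standard only as of C++26, and on ARMv8.1+ it can map to ldumax):

    #include <atomic>

    // Sketch of the racy early-exit check under discussion.
    void max_with_check(std::atomic<long>& x, long new_val) {
        // If some thread already stored a value >= new_val, skip the RMW entirely.
        if (x.load(std::memory_order_relaxed) >= new_val)
            return;
        x.fetch_max(new_val, std::memory_order_relaxed);  // e.g. ldumax on ARMv8.1
    }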
All operations on a single memory location are always totally ordered in a cache-coherent system, no matter how relaxed the memory model is.
Also, am I understanding correctly that n is the number of threads in your example? Don't you find it suspicious that the number of operations goes up as the thread count goes up?
edit: ok, you are saying that under heavy contention the check avoids having to do the store at all. This is racy, and whether it is correct or not would be very application-specific.
edit2: I thought about this a bit, and I'm not sure I can come up with a scenario where the race matters...
edit3: ... as long as all threads are only doing atomic_max operations on the memory location, which an implementation can't assume.
> as long as all threads are only doing atomic_max operations on the memory location, which an implementation can't assume.
What assumes that?
If your early read gives you a higher number, quitting out immediately is the same as doing the max that same nanosecond. You avoid setting a variable to the same value it already is. Doing or not doing that write shouldn't affect other atomics users, should it?
In general, I should be able to add or remove as many atomic(x=x) operations as I want without changing the result, right?
And if your early read is lower then you just do the max and having an extra read is harmless.
The only case I can think of that goes wrong is the read (and elided max) happening too early in relation to accesses to other variables, but we're assuming relaxed memory order here so that's explicitly acceptable.
Yes, you are probably right: a load that finds a larger value is equivalent to a max. Since the max wouldn't store any value in this case, it also wouldn't introduce any synchronization edge.
A load that finds a smaller value is trickier to analyze, but I think you are free to just ignore it and proceed with the atomic max. An underlying LL/SC loop implementing a max operation might spuriously fail anyway.
edit: here is another argument in favour: if your only atomic RMW is a CAS, to implement X.atomic_max(new) you would:
1: expected <- X
2: if new < expected: done
3: else if X.cas(expected, new): done
else goto 2 # expected implicitly refreshed
So a CAS loop would naturally implement the same optimization (unless it starts with a random expected), and thus the race is benign.
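For pre-C++26 toolchains (where std::atomic has no fetch_max), a minimal sketch of that loop with relaxed ordering, returning the previous value like the other fetch_* operations (fetch_max_relaxed is my name):

    #include <atomic>

    template <typename T>
    T fetch_max_relaxed(std::atomic<T>& x, T val) {
        T expected = x.load(std::memory_order_relaxed);     // 1: early read
        while (val > expected) {                            // 2: done if not greater
            if (x.compare_exchange_weak(expected, val,      // 3: try to publish val
                                        std::memory_order_relaxed))
                break;                                      // success: expected holds the old value
            // failure: expected was implicitly refreshed, loop back to 2
        }
        return expected;
    }

Note that a call which loses the race to a larger value exits at step 2 without ever dirtying the cache line, which is exactly the contested optimization falling out for free.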
Does it, though? Assuming no torn reads/writes at those sizes, and given that the location should be strictly increasing, are there situations where you could read a higher-than-stored value, which would cause skipping a necessary update?
AFAIK, on all of x86, ARM, and RISC-V, an atomic load of a word-sized datum is just a regular load.
It doesn't need to be strictly increasing; some other thread could be performing arbitrary other operations. Still, even in that case, as Dylan16807 pointed out, it likely doesn't matter.
If you are implementing a library function atomic<T>::fetch_max, you cannot assume that every other thread is also performing a fetch_max on that object. There might be little reason for it, but other operations are allowed, so the sequence of modifications might not be strictly increasing (but then again, it doesn't matter for this specific optimization).
If you actually read ITU T-REC X.200, which specifies the OSI model, you'll find that it doesn't match the modern internet at all. E.g., we don't have an OSI-style transport protocol at all (connections themselves aren't addressable independent of the SSAPs), TCP and UDP are actually layer 5, the presentation layer is protocol-specific, and pretty much the entire stack falls to bits if the network layer isn't packet switched.
There's a separate term for the bits of the OSI model that are actually relevant; it's called the IETF model.
I second your recommendation for trying fountain pens. I suffer from some form of arthritis, and fountain pens let me write again.
There are a variety of cheap ones available; I'm fond of the Platinum Preppy. They're cheap as chips, write nicely, and have a fine version that actually lives up to its name. The Lamy Safari is also popular, but I found it too chunky to be comfortable.
Very nice! I know a lot of people like the Platinum Preppy. I typically use a LAMY Al-Star, but I never post the cap on the end of the pen while writing—the pen has much better balance when the cap is on the table. I also really like my Pilot Metropolitan.
Many barcode scanners these days can scan QR codes. I have a NetumScan NSL5 that I got for €30 or so that can handle QR, DataMatrix, and even Aztec codes.
What this misses is that, in a properly-functioning organization, a team that reliably delivers gets a bigger pie to divvy up. If your organization isn't like this, then perhaps you should consider finding a new one.
Set the AllowedIPs WireGuard setting (and/or the route, if you can set that separately) to a prefix one bit larger than your home network (e.g., if your home network is 192.168.1.0/24, use 192.168.0.0/23). Then block WireGuard packets from the internal network on your router. The tunnel will always be up; it just won't be used when you're at home, because there's a more specific route. A sketch follows.
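For concreteness, a minimal sketch of the client-side config, assuming the home LAN really is 192.168.1.0/24, the server listens on WireGuard's default port 51820, and the keys/endpoint shown are placeholders:

    # client wg0.conf (keys and endpoint are placeholders)
    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    # /23 = the home /24 plus its neighbour, so the directly connected
    # /24 is always the more specific route while you're at home
    AllowedIPs = 192.168.0.0/23

and, assuming the WireGuard endpoint terminates on an iptables-based router whose LAN interface is br0, something like:

    iptables -A INPUT -i br0 -p udp --dport 51820 -j DROP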