The US is adopting isolationist policies based on a nationalist ideology. The government is run by anti-intellectuals. US economic policy is driven by xitter rants and flip-flops every week. The fickle, vindictive ruler personally attacks businesses that don't make him look good. It's clear that in the US the path to success is now loyalty. The president runs a memecoin.
It is not going to happen; this is just day-dreaming. Yes, I saw the news, but you can't compare a few dozen people wanting to leave the US for ideological reasons to the millions who stay in the US because they can fare better, make more money, or start new companies overnight because they have a great idea.
The US is not adopting isolationist policies. It's adopting more nationalistic policies, which is no different from how China has been running its economy (and politics in general) for decades. Specifically, the four-year Trump administration is pursuing heavily nationalistic policies. There's no evidence the Democrats will keep much of Trump's policy direction; certainly the Biden and Trump administrations could hardly be more different.
Let me know where you see the US military pulling back from its global footprint. How many of its hundreds of global bases has the US begun closing? It's expanding military spending as usual, not shrinking it. The US isn't shuttering its military bases in Europe or Asia.
The US is currently trying to expedite an end to the Ukraine v Russia war, so it can pivot all of its resources to the last target standing in the Middle East: Iran. That's anything but isolationist.
Also, the US pursuing Greenland and the Panama Canal is the opposite of isolationist. It's expansionist-nationalist. It's China-like behavior (Taiwan, Hong Kong, the South China Sea, Tibet).
I really like the WebGPU API. That's the API where the major players, including Apple and Microsoft, are forced to collaborate. It has real-world implementations on all major platforms.
With the wgpu and Google Dawn implementations, the API isn't actually tied to the Web and can be used in native applications.
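For instance, here's a minimal headless sketch assuming the Rust `wgpu` and `pollster` crates (no window or surface; exact signatures shift between wgpu releases, so treat this as illustrative rather than copy-paste):

```rust
// Minimal headless WebGPU-on-native sketch using the Rust `wgpu` crate
// (plus `pollster` to block on the async calls). No browser involved.
fn main() {
    // Picks a native backend under the hood: Vulkan, Metal or DX12.
    let instance = wgpu::Instance::default();

    let adapter = pollster::block_on(
        instance.request_adapter(&wgpu::RequestAdapterOptions::default()),
    )
    .expect("no suitable GPU adapter found");

    // Note: the trace-path argument exists in older wgpu releases; newer ones dropped it.
    let (device, queue) = pollster::block_on(
        adapter.request_device(&wgpu::DeviceDescriptor::default(), None),
    )
    .expect("failed to create device");

    println!("WebGPU running natively on {:?}", adapter.get_info().backend);
    // From here on it's the same API surface as in the browser: queue.submit(), etc.
    let _ = (device, queue);
}
```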
The only reason I like WebGL and WebGPU is that they are the only 3D APIs where major players take managed language runtimes into consideration, because they can't do otherwise.
Had WebAssembly already been there, without being forced to go through JavaScript for Web APIs, they would most likely be C APIs with everyone and their dog writing bindings instead.
Now, it is still pretty much a Chrome-only API, and only available on macOS, Android, and Windows.
Safari and Firefox have it in preview, and who knows when it will ever be stable at a scale that doesn't require "Works best on Chrome" banners.
Support on GNU/Linux, even from Chrome, is pretty much not there, at least not for anything you'd use in production.
And then there's the whole drama that, after 15 years, there are still no usable in-browser developer tools for 3D debugging. One is forced to either guess which rendering calls come from the browser and which from the application, resort to GPU printf debugging, or maintain a native build that can be plugged into RenderDoc or similar.
People pick the best option, while a worse option can creep from being awful to a close second, and then suddenly become the best option.
There's a critical point at which there's enough EV infrastructure to overcome objections, available cars become cheap enough, and then there's hardly any reason to pick gas cars that are slower, laggier, noisier, smelly, more expensive to run and can't be refuelled at home.
Sort of. While electric cars are great, the type of person who buys a $3,000 car won't be able to afford the cheapest electric car until about 10-15 years after that tipping point, even after you account for gas savings. So even if new cars switch suddenly, it will still be a decade before the used market catches up. The average car in the US is 12 years old.
Even the type of person who buys a 3-year-old car cannot (will not?) afford the payments on a new car, even accounting for the gas savings. They will buy what they can get. But they also influence the market: they tend to be sensible (often a new car is not sensible), so they will be willing to pay extra for the EV, and this in turn puts pressure on new cars, since trade-in value is very important to most people who buy a new car (which is sensible, though it's the banks forcing this on the buyers).
Maybe? I can see what you’re saying, but the real world can move as slow as sludge at times. These aren’t smartphones that are relatively easily produced, shipped, and purchased by users.
Second order effects like load on an aging power grid could easily cause speed bumps.
I hope you're right, but I don't know that I could bet on it.
HDR when it works properly is nice, but nearly all HDR LCD monitors are so bad, they're basically a scam.
The high-end LCD monitors (with full-array local dimming) barely make any difference, while you'll get a lot of downsides from bad HDR software implementations that struggle to get the correct brightness/gamma and saturation.
IMHO HDR is only worth viewing on OLED screens, and requires a dimly lit environment. Otherwise either the hardware is not capable enough, or the content is mastered for wrong brightness levels, and the software trying to fix that makes it look even worse.
Most "HDR" monitors are junk that can't display HDR. The HDR formats/signals are designed for brightness levels and viewing conditions that nobody uses.
The end result is complete chaos. Every piece of the pipeline does something wrong, and then the software tries to compensate by emitting doubly wrong data, without even having reliable information about what it needs to compensate for.
What we really need is a set of standards that everybody follows. The reason normal displays work so well is that everyone settled on sRGB, and as long as a display gets close to that, say 95% sRGB, everyone except maybe a few graphic designers will have an equivalent experience.
But HDR is a minefield of different display qualities, color spaces, and standards. It's no wonder that nobody gets it right and everyone ends up confused.
HDR on a display that has peak brightness of 2000 nits will look completely different than a display with 800 nits, and they both get to claim they are HDR.
We should have a standard equivalent to color spaces. Set, say, 2000 nits as 100% of HDR. Then a 2000-nit display gets to claim it's 100% HDR, an 800-nit display gets to claim 40% HDR, etc. A 2500-nit display could even use 125% HDR in its marketing (roughly as sketched below).
It's still not perfect - some displays (OLED) can only show peak brightness over a portion of the screen. But it would be an improvement.
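To make the idea concrete, a trivial sketch of that labeling scheme (the 2000-nit reference point is just the number proposed above, not an existing standard):

```rust
// Express a display's peak brightness as a percentage of a fixed 2000-nit
// reference, as proposed above. Not an existing standard, just the idea.
fn hdr_percent(peak_nits: f64) -> f64 {
    peak_nits / 2000.0 * 100.0
}

fn main() {
    for nits in [800.0, 1000.0, 2000.0, 2500.0] {
        println!("{nits} nits -> {:.0}% HDR", hdr_percent(nits));
    }
}
```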
The DisplayHDR standard is supposed to be that, but they've ruined its reputation by allowing DisplayHDR 400 to exist when DisplayHDR 1000 should have been the minimum.
Besides, HDR quality is more complex than just max nits, because it depends on viewing conditions and black levels (and everyone cheats with their contrast metrics).
OLEDs can peak at 600 nits and look awesome — in a pitch black room. LCD monitors could boost to 2000 nits and display white on grey.
We have sRGB kinda working for color primaries and gamma, but it's not the real sRGB at 80 nits. It ended up being relative instead of absolute.
A lot of the mess is caused by the need to adapt content mastered at 2000 nits for a pitch-black cinema down to 800-1000 nits in daylight. That needs very careful processing to preserve highlights and saturation, but software can't rely on the display doing it properly, and doing it in software sends a false signal and risks the display correcting it twice.
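To illustrate what that "careful processing" amounts to, here's a toy luminance-only roll-off (my own simplification, not any standard's tone-mapping curve): keep everything below a knee untouched, and compress the 2000-nit highlight range into whatever headroom the display has.

```rust
// Toy tone mapping sketch: map content mastered for `mastering_peak` nits onto a
// display that tops out at `display_peak` nits, preserving some highlight detail
// instead of clipping everything above the display's peak to flat white.
// Luminance only; real pipelines also have to worry about saturation.
fn tone_map(nits_in: f64, mastering_peak: f64, display_peak: f64) -> f64 {
    // Below the knee, keep absolute brightness as mastered.
    let knee = display_peak * 0.75;
    if nits_in <= knee {
        return nits_in;
    }
    // Compress (knee..mastering_peak) into (knee..display_peak) with an
    // asymptotic curve, so 2000-nit highlights still read brighter than
    // 1000-nit ones instead of both clipping at the display's peak.
    let t = (nits_in - knee) / (mastering_peak - knee); // 0..1 over the highlights
    let rolloff = t / (1.0 + t); // tops out at 0.5, hence the 2.0 below
    knee + rolloff * 2.0 * (display_peak - knee)
}

fn main() {
    for nits in [100.0, 600.0, 1000.0, 2000.0] {
        println!(
            "{nits:>6} nits mastered -> {:.0} nits on an 800-nit display",
            tone_map(nits, 2000.0, 800.0)
        );
    }
}
```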
CPUs evolved to execute C-like code quickly. They couldn't dramatically change the way C interfaces with the CPU, so they had to change the hidden internals instead.
For example, CPUs didn't have the option of hiding DRAM latency with a SIMT architecture, so they went for complex, opaque branch prediction and speculative execution instead.
The way C is built and deployed in practice didn't leave room for recompiling code for a specific CPU, so explicit scheduling like VLIW failed. Instead there's implicit magic that works with existing binaries.
When there were enough transistors to have more ALUs, more registers, more of everything in parallel, C couldn't target that. So CPUs got increasingly complex OoO execution, hidden register banks, and magic handling of stack as registers.
Contrast this with current GPUs, which have register-like storage that is explicitly divided between threads (sort of like the 6502's zero page – something that C couldn't target well either!).
So that you learn that a borrow gives temporary shared-xor-exclusive access within a statically known scope, and is not for storing data.
Trying to construct permanent data structures using non-owning references is a very common novice mistake in Rust. It's similar to how users coming from GC languages may expect pointers to local variables to stay valid forever, even after leaving the scope/function.
Just like in C you need to know when malloc is necessary, in Rust you need to know when self-contained/owning types are necessary.
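A minimal sketch of that mistake (hypothetical types, just for illustration): a struct that borrows can never outlive the data it points into, while an owning struct is self-contained.

```rust
// Borrowing version: tied to the lifetime of whatever `name` points into,
// so it can't be stored or returned past that scope.
#[allow(dead_code)]
struct SessionRef<'a> {
    name: &'a str,
}

// Owning version: self-contained, can be returned, cached, put in collections.
struct Session {
    name: String,
}

fn make_session(input: &str) -> Session {
    // Keeping the data past this call means owning it (allocate + copy),
    // much like knowing when malloc/strdup is necessary in C.
    Session { name: input.to_owned() }
}

fn main() {
    let session = {
        let header = String::from("user=alice");
        make_session(&header)
        // Returning a `SessionRef` borrowing `header` here would be rejected by
        // the compiler: `header` is dropped at the end of this block.
    };
    println!("{}", session.name);
}
```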
The biggest thing I’ve run into where I really want self-referential types is for work that I want to perform once and then cache, while still needing access to the original data.
An example: parsing a cookie header to get cookie names and values.
In that case, I settled on storing indexes marking the ranges of each key and value instead of string slices (roughly sketched below), but it's obviously a bit more error-prone and harder to read. Benchmarking showed this to be almost twice as fast as cloning the values out into owned strings, so it was worth it, given it is in a hot path.
I do wish it were easier though. I know there are ways around this with Pin, but it’s very confusing IMO, and still you have to work with pointers rather than just having a &str.
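Roughly what that index-based approach looks like (hypothetical types and names, not the actual code): parse once, store byte ranges, and resolve them against the original header when needed.

```rust
// Store byte ranges into the original header instead of `&str` slices, so the
// parsed result holds no borrows and can be cached independently of the header.
use std::ops::Range;

struct ParsedCookies {
    pairs: Vec<(Range<usize>, Range<usize>)>, // (name, value) ranges
}

impl ParsedCookies {
    fn parse(header: &str) -> Self {
        let mut pairs = Vec::new();
        let mut offset = 0;
        for part in header.split("; ") {
            if let Some(eq) = part.find('=') {
                pairs.push((
                    offset..offset + eq,                  // name range
                    offset + eq + 1..offset + part.len(), // value range
                ));
            }
            offset += part.len() + 2; // skip past the "; " separator
        }
        ParsedCookies { pairs }
    }

    // The caller passes the original header back in to turn ranges into &str.
    fn get<'a>(&self, header: &'a str, name: &str) -> Option<&'a str> {
        self.pairs
            .iter()
            .find(|(n, _)| &header[n.clone()] == name)
            .map(|(_, v)| &header[v.clone()])
    }
}

fn main() {
    let header = "theme=dark; session=abc123";
    let cookies = ParsedCookies::parse(header);
    assert_eq!(cookies.get(header, "session"), Some("abc123"));
    assert_eq!(cookies.get(header, "theme"), Some("dark"));
}
```

The downside is exactly as described: the ranges and the original header have to be kept in sync by hand, which is where the extra error-proneness comes from.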
The barriers may keep out low-effort submissions*, but they also keep out contributors whose time is too valuable to waste on installing and configuring a bespoke setup based on some possibly outdated wiki.
* contributors need to start somewhere, so even broken PRs can lead to having a valuable contributor if you're able to guide them.
My non-techie relatives can't tell the difference between the local device password/passphrase and the iCloud/Apple ID password, so they'll enter them all until something works (I don't blame them, the UIs for these are unclear and inconsistent).
Apple used to make fun of Vista's UAC, but they've ended up with the same patchwork of sudden prompts, and even weaker UI.
Yeah, to be perfectly honest, I understand. I think TCC is meant to be the primary consent system, but there are others (such as the Authorization system, and the Service Management framework).
The EU is getting ready to brain-drain the US.