Many years ago, I standardized on Journal in Microsoft Outlook.
Well guess what. Microsoft created Notes, and Journal became a "legacy app." It was not possible to migrate. The deprecation of the .PST file in Exchange Server left me no way to transfer when I lease-rolled to a new laptop.
Enter Notes. As in, "notes.txt", which is exactly the same idea as todo.txt described here. Works. If this text file ever becomes machine unreadable, file compatibility will be the least of our worries.
Favorited—I'll be coming back to absorb more, as my aging semi-fluency in engineering physics and SQL doesn't help much with the notation I last saw in the 1980s.
When I click the link for this story, Edge (stop laughing. Please.) pops up "uBlock Origin works on Microsoft Edge." (It's already there, Edge, but thank you).
Edge is based on Chromium, so would that mean this breakage will eventually apply to Edge as the Manifest changes, uhm, manifest to Chromium-based products? Or is this just a Google Chrome thing?
FWIW I keep Firefox around, but I have to admit I like Edge's smooth sync of bookmarks and settings across machines and even different platforms. I switched about two years ago when Edge was clearly faster and lighter. It's no longer as lightweight, and there are slowly accumulating annoyances, mostly from Microsoft's Clippy-esque attempts to make some tasks "easier" (mostly via Copilot), but I still prefer it to Firefox. My former employer's retiree benefits site, for example, won't open at all in Firefox. I've considered other Chromium-based browsers like Brave but haven't (yet) been sufficiently motivated to switch. (Give Microsoft some time; I expect they'll enshittify Edge eventually.)
Many Chromium-based browsers will keep Manifest v2 support for a while. But eventually the upstream Chromium codebase will diverge enough that it becomes too much work to keep it and they will be forced to drop it as well.
The manifest situation simply doesn't apply to Brave in relation to adblockers specifically. That is, Brave will function like uBlock without having to install uBlock as an extension - that's kinda the whole point of Brave (blocking ads / making them opt-in only). That said, any other extensions you use that rely on Manifest v2 may still be affected in Brave.
"A generation from now, this solar heater can either be a curiosity, a museum piece, an example of a road not taken, or it can be a small part of one of the greatest and most exciting adventures ever undertaken by the American people,” Mr. Carter said. Reagan removed the panels in 1986."
All you need to know about the respective legacies of Carter and Reagan.
Notably, the 32 panels Carter installed were thermal water heating panels.
This means that there was no innovative technology in them, and they represent a technology path that is significantly less space-efficient and less useful than PV.
It is also worth noting (especially on a platform that believes in American excellence as much as HN does) that modern PVs really trace their history back to Martin Green, who did most of his work with Australian, Japanese, and Chinese researchers (since he was in Australia), so funding the projects of American scientists might not have yielded the best results anyway.
So in many ways, you could argue that Carter’s solar focus was symbolically great, but stronger US subsidies would just make the US look like Germany - expensive and inefficient PVs that are increasingly becoming a liability (though bless them and their utility customers for powering through and continuing to install new, more efficient equipment).
> [...] US look like Germany - expensive and inefficient PVs that are increasingly becoming a liability (though bless them and their utility customers for powering through and continuing to install new, more efficient equipment).
Nice try, but factually wrong.
Thermal solar is battle-proven low tech for water heating (or even residential heating) that does not require any expensive government subsidies or public money to be deployed at large scale.
It is currently common to find such panels in developing countries as a cheap way of providing hot showers to people without power grids - in countries where the notion of government subsidies often does not even exist.
That has very little in common with the giant public-money-sink shitshow that is the German Energiewende.
And they were not removed by Reagan. The roof had to be redone and they were simply not re-installed because they leaked so badly and weren't worth the time and expense.
“Three of the panels are part of museum collections. One of the panels was donated by Unity College to the National Museum of American History in 2009. Another is on display at the Jimmy Carter Presidential Center. A third panel has been part of the Solar Science and Technology Museum in Dezhou, China since 2013.” [1]
Sounds like America chose “museum piece” from Carter’s options.
My worry about the incoming American president has nothing to do directly with the proliferation of sports gambling or the harms it brings, but with the sudden absence of formerly available data that might - just might - contradict the narrative of an industry that's unzipped its change purse and let Trump have at the mic stand inside (a horribly multi-mixed metaphor, but apt).
"That data set you got there, UC Consumer Credit Panel. Sure would be a shame if something happened to it, you know, if, say, somebody decided to publish a post tying bankruptcies to our donor's lil $300 billion enterprise here. Capiche?"
That happened with climate data during his previous term, so expect more (of less).
The OG cloud email service: AOL, from 1993. I still revive it for testing now and then.
Yahoo! account established July 17, 1996. I know the exact date because I remember a hyperlink blue headline across the top of the gray Yahoo! home page, "TWA Plane Explodes Off Long Island"
At my job at a telco, I had a 13 billion record file to scan and index for duplicates and bad addresses.
Consultants were brought in to move our apps (some of which were Excel macros, others SAS scripts running on an old desktop) to Azure. The Azure architects identified Postgres as the best tool. The consultants attempted to create a Postgres index in a small Azure instance, but their tests failed to complete (they were using string concatenation rather than the native indexing functionality).
Consultants' conclusion: file too big for Postgres.
I disputed this. There's plenty of literature out there on Pg handling bigger files. The Postgres (for Windows!) instance on my Core i7 laptop with an NVMe drive could index the file in about an hour. As an experiment I spun up a bare-metal system: a Ryzen 7600 (lowest-power, 6-core) Zen 4 PC with a 1TB Samsung PCIe 4 NVMe drive.
Got my index in 10 minutes.
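For anyone wondering what the "string concatenation vs. native indexing" difference looks like in practice, here's a minimal sketch with made-up table and column names (the real schema was of course different):

    -- Hypothetical schema; names are invented for illustration.
    CREATE TABLE addresses (
        record_id   bigint PRIMARY KEY,
        name        text,
        street      text,
        city        text,
        postal_code text
    );

    -- Native multicolumn B-tree index: Postgres indexes the columns directly.
    CREATE INDEX idx_addresses_dedup
        ON addresses (name, street, city, postal_code);

    -- Duplicate scan over the indexed columns.
    SELECT name, street, city, postal_code, count(*) AS copies
    FROM addresses
    GROUP BY name, street, city, postal_code
    HAVING count(*) > 1;

    -- The concatenation route instead builds a new string for every row and
    -- indexes the expression, extra work that buys nothing for a dedup scan:
    -- CREATE INDEX idx_addresses_concat
    --     ON addresses ((name || '|' || street || '|' || city || '|' || postal_code));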
I then tried to replicate this in Azure, upping the CPUs and memory and moving to the NVMe-capable Azure VM family (Ebsv5). Even at a $2000/mo level, I could not get the Azure instance faster than about one fifth the speed of my bare-metal experiment (roughly an hour). I probably could have matched it eventually with more cores, but did not want to get called on the carpet for a ten-grand Azure bill.
All this happened while I was working from home (one can't spin up an experimental bare metal system at a drop-in spot in the communal workroom).
What happened next I don't know, because I left in the midst of RTO fever. I was given the option of moving 1000 miles to commute to a hub office, or retire "voluntarily with severance." I chose the latter.
As someone who works with Azure daily, I am amazed not just at the initial consultants' conclusion (that is, alas, typical of folk who do not understand database engines), but also at your struggle with NVMe storage (I have some pretty large SQLite databases in my personal projects).
You should not have needed an Ebsv5 (memory-optimised) instance. For that kind of thing, you should only have needed a D-series VM with a premium storage data disk (or, if you wanted a hypervisor-adjacent, very low latency volume, a temp volume in another SKU).
Anyway, many people fail to understand that Azure Storage works more like a SAN than a directly attached disk--when you attach a disk volume to the VM, you are actually attaching a _replica set_ of that storage that is at least three-way replicated and distributed across the datacenter to avoid data loss. You get RAID for free, if you will.
That is inherently slower than a hypervisor-adjacent (i.e., on-board) volume.
> Anyway, many people fail to understand that Azure Storage works more like a SAN than a directly attached disk--when you attach a disk volume to the VM, you are actually attaching a _replica set_ of that storage that is at least three-way replicated and distributed across the datacenter to avoid data loss. You get RAID for free, if you will.
I've said this a bit more sarcastically elsewhere in this thread, but basically, why would you expect people to understand this? Cloud is sold as abstracting away hardware details and giving performance SLAs billed by the hour (or minute, second, whatever). If you need to know significant details of their implementation, then you're getting to the point where you might as well buy your own hardware and save a bunch of money (which seems to be gaining some steam in a minor but noticeable cloud repatriation movement).
Well, in short, people need to understand that cloud is not their computer. It is resource allocation with underlying assumptions around availability, redundancy and performance at a scale well beyond what they would experience in their own datacenter.
And they absolutely must understand this to avoid mis-designing things. Failure to do so is just bad engineering, and a LOT of time is spent educating customers on these differences.
A case in point that aligns with this: I used to work with Hadoop clusters, where you would use data replication for both redundancy and distributed processing. Moving Hadoop to Azure while maintaining conventional design rules (i.e., tripling the amount of disks) is the wrong way to do things, because it isn't required for either redundancy or performance (both are catered for by the storage resources).
(Of course there are better solutions than Hadoop these days - Spark being one that is very nice from a cloud resource perspective - but many people have nine times the storage they need allocated in their cloud Hadoop clusters because of this lack of understanding...)
I would think that lifting and shifting a Hadoop setup into the cloud would be considered an anti-pattern anyway; typically you would be told to find a managed, cloud-native solution.
The cloud is also being sold as “don’t worry about data loss”.
To actually deliver on that promise while maintaining the abstraction of "just dump your data on C:\ as you are used to", performance compromises have to be made. This is one of the biggest pitfalls of the cloud if you care more about performance than resiliency. Disks without such guarantees are still available; just be aware of what you're trading.
I may have the "Ebsv5" series code incorrect. I'd look it up, but I don't have access to the subscription any longer.
What I ultimately chose was definitely "NVMe attached" and definitely pricey. The "hypervisor-adjacent, very low latency volume" was not an obvious choice.
The best-performing configuration did come from me--the DB admin learning Azure on the fly--and not from the four Azure architects or the half dozen consultants with Azure credentials brought onto the project.
Ebsv5 and Ebdsv5 somewhat uniquely provide the highest possible storage performance right now in Azure, partly because they support NVMe controllers instead of SCSI.
However, the disks are still remote replica sets, as someone else mentioned. They're not flash drives plugged into the host, despite appearances.
Something to try is specifically the Ebdsv5 series - the 'd' means it has a local SSD cache and temp disk. Configure Postgres to use the temp disk for its scratch space and turn on read/write caching for the data disks.
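Roughly what that looks like on the Postgres side, assuming the VM's ephemeral temp disk is mounted at /mnt/resource (the path and tablespace name are assumptions; the data-disk caching itself is set on the Azure disk, not in Postgres):

    -- Put sort/hash spill files and temp tables on the local ephemeral SSD.
    -- The directory must exist and be owned by the postgres OS user.
    CREATE TABLESPACE local_scratch LOCATION '/mnt/resource/pg_scratch';
    ALTER SYSTEM SET temp_tablespaces = 'local_scratch';

    -- Let big index builds use more memory before they spill to disk at all.
    ALTER SYSTEM SET maintenance_work_mem = '4GB';

    SELECT pg_reload_conf();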
You should see better performance, but still not as good as a laptop… that will have to wait for the v6 generation of VMs.
This matches my experience. Something about cloud systems makes them incredibly slow compared to real hardware. Not just the disk, the CPU is more limited too.
Whether I fire up vCPU, dedicated, or bare metal in the cloud, it doesn't matter: I simply cannot match the equivalent compute of real hardware, and it's not even close.
When I spin up a VM on my hardware and run an application, the performance is generally about 70% of what I can get in a container running on an OS on that very same bare metal.
But that isn't the delta I'm seeing; it's a 5-10x performance delta, not a 30-50% one.
Why should it be expected? I'm being sold compute with quoted GHz CPU speeds, RAM and types of SSDs.
I would vaguely expect it to not match my workstation, sure, but all throughout this thread (and others) people have cited outrageous disparities, i.e. 5x less performance than you'd expect, even if you'd already managed your expectations down to, say, 2x less because the cloud compute isn't a bare metal machine.
In other words, and to illustrate this with a bad example: I'd be fine paying for an i7 CPU and ending up at i5 speeds... but I'm absolutely not fine with ending up at Celeron speeds.
I tried a variety of configurations. The E-series was one of them, as it's advertised as "Great for relational database servers, medium to large caches, and in-memory analytics." Premium and Premium V2 disks as well - I tried those at larger capacities I didn't need, just to get the higher IOPS.
None came within an order of magnitude of a Ryzen 7600/NVMe mobo sitting in my son's old gaming case.
An option I did not try was Ultra disk, which I recall being significantly more expensive and was not part of the standard corporate offering. I wasn't itching to get dragged in front of the architecture review board again, anyway.