> There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost. Please, he asked, do not run 32-bit kernels on 64-bit processors.
Ohhh yes!
So, a couple of weeks ago I came across a discussion where some distro (I don't remember which one) contemplated removing 32-bit user space support, suggesting that users simply run a VM with a 32-bit Linux instead. It was a stupid suggestion then, and this statement is a nice authoritative answer from the kernel side at which such suggestions can be shoved.
Probably SuSE. We use SLES 15 at work, and their bizarre decisions in SLES 16 (removing X servers except for XWayland, removing 32-bit libraries, and completely removing their AutoYaST unattended-install tool in favor of a tool that is maybe 25% compatible with existing AutoYaST files) still baffle me. We spent months moving to SLES 15 from a RHEL derivative a few years ago, and with changes this big we basically have to do it again for SLES 16. We have some rather strong integrations with the Xorg servers, and Wayland won't cut it for us currently, so we're stuck unless we want to rearchitect 20 years of display logic onto a paper spec that isn't evenly implemented, and where it is, it's buggy as shit.
I've been pushing hard for us to move off SLES as a result, and I do not recommend it to anyone who wants a stable distribution that doesn't fuck over its users for stupid reasons.
With respect to OpenGL: with the current de facto standard toolkits, Qt and GTK, you can't really get away from it for the time being, since at the moment they pull in some implementation of OpenGL as a runtime dependency; fingers crossed that goes away soon.
Also, for that matter: although OpenGL is a legacy API, it's a well-understood, well-documented, and well-tested environment. And as much as Vulkan makes certain things, well, not easier but more straightforward, it isn't without issues. Heck, only recently Matías N. Goldberg found a long-standing issue with swapchain reuse that was finally resolved with VK_EXT_swapchain_maintenance1.
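For the curious, opting in to that fix is mostly a device-creation detail. A minimal, hypothetical sketch (the helper name and queue setup are assumptions, not from any real codebase; it presumes support was already confirmed via vkEnumerateDeviceExtensionProperties):

```c
/* Hypothetical sketch: create a VkDevice with VK_EXT_swapchain_maintenance1
 * enabled. Assumes the caller has verified extension support and filled in
 * a VkDeviceQueueCreateInfo. */
#include <stddef.h>
#include <vulkan/vulkan.h>

VkResult create_device_with_maint1(VkPhysicalDevice phys,
                                   const VkDeviceQueueCreateInfo *queue_info,
                                   VkDevice *out_device)
{
    static const char *exts[] = {
        VK_KHR_SWAPCHAIN_EXTENSION_NAME,
        VK_EXT_SWAPCHAIN_MAINTENANCE_1_EXTENSION_NAME,
    };

    /* opt in to the feature via the pNext chain */
    VkPhysicalDeviceSwapchainMaintenance1FeaturesEXT maint1 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SWAPCHAIN_MAINTENANCE_1_FEATURES_EXT,
        .swapchainMaintenance1 = VK_TRUE,
    };

    VkDeviceCreateInfo info = {
        .sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .pNext                   = &maint1,
        .queueCreateInfoCount    = 1,
        .pQueueCreateInfos       = queue_info,
        .enabledExtensionCount   = 2,
        .ppEnabledExtensionNames = exts,
    };

    return vkCreateDevice(phys, &info, NULL, out_device);
}
```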
With respect to "technical costs" in the context of Wayland: IMHO it's mostly pushing around responsibilities and moving goalposts. Granted, setting up an on-screen framebuffer to draw on involves far fewer moving parts in Wayland compared to X11. However, this comes at the cost of duplicating the rather basic graphics machinery required for drawing the simplest things into each and every client. Of course, shared libraries will somewhat ease the requirements on .text and .rodata segments, which can be shared; but all the dynamic state generated on initialization, which ends up in .bss and .data, is kept around redundantly. And then there's the issue that Wayland also forgoes things like the efficient use of screen framebuffer memory, where all windows are cut from the same region of memory and pixel ownership is managed. The "every window gets its own fully sized framebuffer" approach only worked well for that small time window (pun intended) in which screen resolutions weren't as big as those now becoming commonplace.
"4k", i.e. 3840×2160 at R10G10B10A2, takes up about 64 MiB in a double-buffered configuration (256 MiB at 8k), and that's if there's only a single window on screen. And every additional full-screen application (even if minimized) will add another 32 MiB (128 MiB) to that. Those gigabytes of GPU VRAM don't look as plentiful from that point of view.
The old and dusted (but not busted) way of using a single frame buffer and cutting windows from that doesn't look as outdated anymore.
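Those figures are easy to sanity-check; a throwaway sketch (purely illustrative, using the numbers from above):

```c
/* Back-of-the-envelope check of the figures above: 32 bits per pixel
 * (R10G10B10A2), double buffered, one framebuffer per window. */
#include <stdio.h>

int main(void)
{
    const double MiB = 1024.0 * 1024.0;
    const long w = 3840, h = 2160;      /* "4k"; use 7680x4320 for 8k */
    const long bytes_per_pixel = 4;     /* 10+10+10+2 bits */

    double frame = (double)w * h * bytes_per_pixel;
    printf("single buffer:   %.1f MiB\n", frame / MiB);        /* ~32 MiB */
    printf("double buffered: %.1f MiB\n", 2.0 * frame / MiB);  /* ~64 MiB */

    /* ten full-screen windows, each keeping at least one extra buffer */
    printf("10 windows:      %.1f MiB\n", (2.0 * frame + 9.0 * frame) / MiB);
    return 0;
}
```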
The issue is not with the level of testing of the API. The issue is with the level of testing that new implementations of the API will have. Since this API is grotesquely and absurdly complex, expect new implementations to go through hell to achieve a good level of compatibility with anything legacy (a QA nightmare).
> 12. The forensic analysis also revealed that Elez sent an email with a spreadsheet containing PII to two United States General Services Administration officials. The PII detailed a name, a transaction type, and an amount of money.
"Treasury said Ryan Wunderly will replace Marko Elez on the agency’s DOGE team. Elez examined the federal payments system housed at the Bureau of the Fiscal Service before he resigned from Treasury earlier this month after The Wall Street Journal surfaced racist social media posts."
> Some (many?) NASA engineers are at the high end of the band and are advocating a return on Dragon instead. Boeing is obviously at the low end of the band and thinks it is a low risk.
To me this gives a strong impression of history rhyming with itself. Back in the early 1980s, NASA engineers "close to the hardware" raised warning after warning about reliability issues with the shuttles, ultimately being overruled by management, leading to the Challenger disaster.
Then, in 2003, engineers again raised warnings about heat-shield integrity being compromised by impacts from external-tank insulation material. Again, management overruled them with the same bad reasoning: if it did not cause problems in the past, it will not in the future. So instead of the issue being addressed preventatively, Columbia was lost on reentry.
Fool me once …, fool me twice …; I really hope the engineers will put their feet down on this and clearly and decisively push back on any mandate handed down from management.
Given the many organizational failures that Boeing has had in recent years leading to safety problems (cough Dreamliner cough), I'm quite sure that Boeing's engineers have no way to put their feet down.
Afterwards one might come out as a whistleblower. But the fact that the last two whistleblowers wound up conveniently dead (no really, https://www.nbcnews.com/news/us-news/boeing-whistleblower-di...) is likely to have a chilling effect on people's willingness to volunteer as whistleblowers.
Scott Manley mentioned an interesting twist on this in a recent YouTube video of his: Kamala Harris, chair of the National Space Council, becoming a candidate in this year's Presidential election. The NSC is supposed to guide policy, so she wouldn't normally be involved in this kind of nitty-gritty, but there are people all up and down the hierarchy who would be well aware that this isn't how the media or her political opponents would think about it in the event of disaster.
Except in this case, according to Steve Stich, it is NASA engineers vs. Boeing engineers. And the Boeing engineers are the ones who are "closer to the hardware", while the NASA engineers are just overseeing it.
I have no idea who is right in this case. And even if the crew comes down on Starliner successfully, it doesn't mean that it was the right call. Maybe they just got lucky.
My sense from the call is that, if NASA engineers insist on a Dragon return, NASA management will support them.
I don't think this is good logic without more information about the actual calculation of risk. It should come down to who can accurately measure the risk and whether that risk is acceptable. People can roll the dice on low-probability events, sometimes for an entire career, without bad consequences, but that shouldn't be conflated with good decision making.
Flying safely with a 10% failure risk when your acceptable risk is only 2% just means you got lucky, not that you're good.
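To make the "lucky, not good" point concrete, a toy calculation using the hypothetical 10% and 2% figures from above:

```c
/* Toy numbers from the comment above: a 10% per-flight failure risk vs.
 * a 2% acceptable risk. Surviving a long streak at the higher risk is
 * quite plausible -- which is exactly why a streak proves nothing. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double risky = 0.10, acceptable = 0.02;
    for (int n = 1; n <= 16; n *= 2)
        printf("%2d flights: %5.1f%% survival at 10%% risk, %5.1f%% at 2%%\n",
               n,
               100.0 * pow(1.0 - risky, n),
               100.0 * pow(1.0 - acceptable, n));
    return 0;
}
```

Even at a 10% per-flight risk, you still have roughly even odds of surviving half a dozen flights in a row; a clean record is weak evidence that the risk estimate was right.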
Until management is held accountable and put into prison for consciously unreasonable decisions, made against all advice, that led to the loss of life, nothing will ever change in megacorps.
Well, I'm reluctant to give him the benefit of the doubt because he also says "we don't know what's on the back side of the Moon" despite the fact that the agency he heads mapped the far side of the Moon decades ago.
Exactly. If we take the backdoor via liblzma as a template, this could be a ploy to hook/detour both fprintf and strerror in a similar way, and to get it to diffuse into systems whose package managers rely on libarchive.
Once the trap is in place, deploy a crafted package file that appears invalid at the surface level and triggers the trap. At that moment, fetch the payload from the (already opened) archive file descriptor, execute it, and also patch the internal state of libarchive so that it processes the rest of the archive file as if nothing happened, with the desired outcome also appearing on the system.
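For illustration only: the generic mechanism being described (detouring a libc symbol so calls pass through attacker code first) can be sketched with plain LD_PRELOAD symbol interposition. This is a benign stand-in that just logs calls; the actual liblzma backdoor used a far stealthier route (IFUNC resolvers at link time), so treat this as the idea, not the technique:

```c
/* hook.c -- benign stand-in: interpose strerror() via LD_PRELOAD.
 * Build:  gcc -shared -fPIC -o hook.so hook.c -ldl
 * Run:    LD_PRELOAD=./hook.so some_program
 * Every strerror() call in the process now passes through this detour. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

char *strerror(int errnum)
{
    /* forward to the real libc implementation */
    char *(*real)(int) = (char *(*)(int))dlsym(RTLD_NEXT, "strerror");

    fprintf(stderr, "[hook] strerror(%d) intercepted\n", errnum);
    return real(errnum);
}
```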
What annoys me the most about Reddit is that it's essentially just a rehash of Usenet with marginally more moderation features, up/down-voting, and custom CSS for each group/subreddit. That's about it. If I were to attempt to implement a Reddit "clone", I'd merely spin up a couple of ISC InterNetNews NNTP servers and slap a web application in front of those.
The only thing Reddit did, on the interaction level, was replace the Usenet experience with a visually more appealing and easier-to-access web frontend. In that regard it's a continuation of Eternal September, with the side effect of draining the user pool from Usenet, leading to the shutdown of many Usenet servers worldwide because "nobody is using it anymore".
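To back up the "slap a web application in front of those" claim a bit: NNTP is a trivially simple, line-based text protocol, which is most of why the idea is plausible. A minimal sketch that speaks just enough of it to fetch the group list (the server name is a placeholder, and error handling is deliberately thin):

```c
/* Minimal NNTP sketch: connect, read the greeting, request the group
 * list. "news.example.org" is a placeholder server. This is a protocol
 * illustration, not a client. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
    if (getaddrinfo("news.example.org", "119", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf - 1);   /* "200 ..." greeting */
    if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }

    /* one command; multi-line response ends with "." on its own line */
    write(fd, "LIST\r\n", 6);
    while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
        if (strstr(buf, "\r\n.\r\n"))
            break;
    }
    write(fd, "QUIT\r\n", 6);
    close(fd);
    freeaddrinfo(res);
    return 0;
}
```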
Every programmer sees a message board and quickly sketches out a database schema for users, comments, and upvotes, because they're all trivial and almost identical. The user base that builds up is the value. In the case of Reddit, I don't mean to imply positive value, but that's the value.
Endless injection of cash? Sometimes I think that the VC model is wrong, and that taking a few pages from bootstrapped start-ups might make a lot more sense...
>I'd merely spin up a couple of ISC InterNetNews NNTP servers and slap a web application in front of those.
If this was the only thing differentiating Reddit from any other text-reliant information service, why hasn't anybody come along and disrupted Reddit by doing something similar?
You're ignoring the network of users they have, which is absolutely the hardest thing to create for an app like this. Any technically superior clone you build can easily lose to Reddit due to its userbase.
Reddit is fundamentally a link aggregator, and there are countless examples of those. Twenty years ago (jeez), Slashdot was doing the same thing.
In fact, Reddit went mainstream when Digg fumbled their 2.0 launch, and just recently Reddit was in danger of doing the exact same thing with their poorly designed apps and new design.
This IPO valuation is based entirely on the value of the user content that will be sold to AI firms. The product is always the users.
I miss newsgroups. They were good enough. We didn't need to dress them up.
Our incessant chasing of the latest shiny thing, it's all so silly. We keep throwing the baby out with the bathwater in our endless, desperate forward motion.
The baseline product can and has been cloned by many people. Hacker news is one of those clones. Whipped up with minimal effort in a meme language no one ever used before.
Strictly speaking, those fat cookie banners are unlawful under the GDPR; it mandates that a site must not behave functionally differently depending on consent, as long as the functionality is not related to a specific user.
Unfortunately, there are only so many GDPR compliance officers around, and they have bigger fish to fry.
Fun fact: Rust's development saw significant traction and support from Mozilla in the first place so that it could be used to develop Servo. In other words, the eventual development of Servo was what motivated Rust's growth from a pet project into what it is today.
Does anyone know the size of Bandcamp's catalogue? I'm just wondering what the hardware costs (storage) would be for prospective competitors who intend to swoop up Bandcamp's customers (artists and listeners). Audio is a lot less demanding than video, and since there's no DRM, it's basically just static files with some access control.
Another factor to consider is that BC makes files available to download in a variety of formats (MP3, FLAC, AAC, etc.). Presumably they're transcoding on the fly from a lossless format and not storing all those extra files...