
What makes you think that it's "so damn hackable"?

Also, this particular attack requires administrator privileges and bypasses a security boundary that doesn't even exist on e.g. Linux. Linux doesn't have driver signatures and root can easily install a new kernel module.


> Linux doesn't have driver signatures and root can easily install a new kernel module.

Linux supports signed kernel modules (and not just on paper, this is a widely deployed feature).


Linux also has SELinux; root can't do everything there.


Yep, when booting with secure boot, the kernel won't load any unsigned drivers.


This claim still assumes there are no vulnerabilities in a TCB sized in the millions of LoC.

No chance.

Look elsewhere for actual security.

Right now, elsewhere just happens to be seL4. Anything else is either still too green or an architectural non-starter.


Just a quick look at 2024's CVEs and 0-days shows that Windows is a security nightmare. Not singling out Windows specifically, but they have a lot.

Browsers only just recently patched the hole where a page could serve JavaScript that scans local devices on 10.* and 192.168.* etc., hitting IoT devices with exploits and payloads, hell, even hitting open listening sockets on localhost and 0.0.0.0 -- that's cross-platform; how many years did that go under the radar?

And now Windows is getting 'Recall' which will monitor and scan your every PC action to remember it for you using ML; I don't see that going back at all /s


>Browsers only just recently patched the hole where a page could serve JavaScript that scans local devices on 10.* and 192.168.* etc., hitting IoT devices with exploits and payloads, hell, even hitting open listening sockets on localhost and 0.0.0.0 -- that's cross-platform; how many years did that go under the radar?

Ironically, Windows was not hit by that, but the "secure" (?) operating systems, macOS and Linux, were.


>It's simple. Human writing is short and to the point (either because they're lazy or want to save the reader's time), yet still manages to capture your attention. AI writing tends to be too elaborate and lacks a sense of "self".

Corporate (and SEO) writing has always been overly verbose and tried to sound fancy. In fact, this probably is where LLMs learned that style. There's no reliable heuristic to tell human- and AI-writing apart.

There's a lot of worry about people being fooled by AI fakes, but I'm also worried about false positives, people seeing "AI" everywhere. In fact, this is already happening in the art communities, with accusations flying left and right.

People are too confident in their heuristics. "You are using whole sentences? Bot!" I fear this will make people simplify their writing style to avoid the accusations, which won't really accomplish anything, because AIs can already be prompted to avoid the default word-salad style.

I miss the time before LLMs...


> AFAIK, there is reasonably clear evidence that deterrence has a very low impact on this sort of crime,

Could you share some of this evidence?


first result from Google for "effect of deterrence on property crime":

https://www.house.mn.gov/hrd/pubs/deterrence.pdf

second result, which summarizes and links to several review papers:

https://nij.ojp.gov/topics/articles/five-things-about-deterr...


Then I'm not sure what you mean by "deterrence". Both of the linked articles argue against increasing the severity of punishment, but they also say that the certainty of getting caught is a strong deterrent.

This doesn't seem to be in conflict with what the GP said ("supporting laws and politicians that catch and punish criminals effectively"). It seems to me that many people have a problem with thieves not being punished at all.


Most of the people I have read or heard advocate for "more effective handling of crime" are much bigger on the severity of the sentence, though I don't deny that many will mention both. The "N strikes and you're out" angle, for example, is all about the severity of the sentence once you reach N.

New HN commenter "smeeger", whose subthread we are in, seems close to favoring violence as punishment for relatively minor crimes, for example.

Still, yes, things that significantly increase the likelihood of being caught and punished do seem like a good idea, and do not require sentencing to be changed.


I've been commenting here since 2014 but have to constantly make new accounts because HN bans me for expressing problematic beliefs. The fact that this thread got through the filter feels like a miracle or a dream… You need to read more carefully. I used the word "effectively" for a very specific reason. Even the insanely sympathetic and humane punishments on the books in Western countries now would basically stop crime if they were applied and implemented properly: if punishment were actually likely, if prisons weren't just boot camp for criminals, social clubs. Prisoners emerge from prison emboldened, not humbled. Our system is broken and it stays broken because people are crappy. Recently I actually decided to stop caring because it's so pointless.

Oh, and your hands are waving a lot more than mine… You clearly don't want to think too hard about this.


This is a spam video composed of stock footage. It doesn't add any new information, it shows some random wasp species, unrelated to the ones just discovered, and some random scientists (possibly actors).


Yeah, OP is being overly dramatic; Reddit has technical issues all the time.


So we shouldn't make it easier for people to meet each other, because some of those people might be white supremacists?

I'm sorry, but I think you may be spending too much time online.


>not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit

It feels like whenever the author of bcachefs comes up, it's always because of some drama.

Just the other day he clashed with Linus Torvalds: https://lore.kernel.org/lkml/CAHk-=wj1Oo9-g-yuwWuHQZU8v=VAsB...

My reading is that he's very passionate, so he wants to "move fast and break things" and doesn't get why the others aren't necessarily very happy about it.


Hey at least it's not the worst behavior we've seen from a Linux file system creator...

I thought Carl Thompson's response was very good and constructive: https://lore.kernel.org/lkml/1816164937.417.1724473375169@ma...

What I don't understand is that IIUC Kent has his development git history well broken up into small tight commits. But he seems to be sending the Linux maintainers patches that are much larger than they want. I don't get why he doesn't take the feedback and work with them to send smaller patches.

EDIT: The culture at Google (where Kent used to work) was small patches, although that did vary by team. At Google you have fleet-wide control and can roll back changes that looked good in testing but worked out poorly in production. You can't do that across all organizations or people who have installed bcachefs. Carl pointed out that Kent seemed to be missing some social aspects, but I feel like he's also not fully appreciating the technical aspects behind why the process is the way it is.


Honestly, I think I just presented that pull request badly.

I included the rcu_pending and vfs inode rhashtable conversion because I was getting user reports that it fixed issues that were seriously affecting system usability, and because they were algorithmically simple and well tested.

Back in the day, on multiple occasions Linus and others were rewriting core mm code in RC kernels; bcachefs is still experimental, so stuff like this should still be somewhat expected.


> bcachefs is still experimental, so stuff like this should still be somewhat expected.

I really think you need to realign your expectations here. The Linux kernel is in a different place now than "back in the day" and you are not Linus Torvalds.

That PR would have been better off had it been split into multiple ones and timed differently.


Hey Kent. I love your work and I have successfully used bcachefs on my main workstation since 6.7. I also happily donate monthly on Patreon, which is something I rarely if ever do.

Hope you don't get into too much trouble with Linus. I do not want to see you or the project get on the wrong side of the old guard...


I second this. Please keep pushing through and don't let the peanut gallery get to you. bcachefs is our only realistic chance to bring our filesystem game into the 21st century, given that the little hope we might have had in Oracle has not been realized.


Who's the "old guard" - Linus Torvalds?


Yeah I see where you're coming from. By the way, I only heard of bcachefs yesterday and I watched a great video where you were presenting about it. I'm excited about the file system and it's super cool to hear from you!


Likewise!


> Hey at least it's not the worst behavior we've seen from a Linux file system creator...

I think that dubious distinction would go to Hans Reiser.


It's not about the size of each individual patch but about the large amount of changes in total *during the freeze*.


It's very clear from that thread that he doesn't understand the purpose of the stable branch. It doesn't mean "stable" as in "the best possible experience", it means it as in "this code has been tested for a long period of time with no serious defects found" so that when the stable branch is promoted to release, everything has undergone a long testing period by a broad user base.

If there is a defect found, the change to a stable branch should literally be the minimal code change to fix the reported issue. Ideally, if it's a newly introduced issue (i.e. since being on the stable branch), the problematic code should be reverted and a different fix to the original defect applied instead (or the original defect left alone if it's deemed less of an issue than taking another speculative fix). Anything that requires a re-organisation of code, by definition, isn't a minimal fix. Maybe it's the correct long-term solution, but that can be done on the unstable branch; for the stable branch, the best fix is the simplest workaround. If there isn't a simple workaround, the best fix is to revert everything back to the previous stable version and keep iterating on the unstable branch.

The guy even admits it as well with his repeated "please don't actually use this in production" style messages - it's hard to give a greater indication than this that the code isn't yet ready for stable.

I can understand why from his perspective he wants his changes in the hands of users as soon as possible - it's something he's poured his heart and soul into, and he strongly believes it will improve his users' experience. It's also the case that he is happy running the very latest and probably has more confidence in it than in an older version. The rational choice from his perspective is to always use the latest code. But, discounting the extremely unlikely situation that his code is entirely bug free, that just means he hasn't yet found the next serious bug. If a big code change is rushed out into the stable branch, it just increases the likelihood that any serious bug won't have the time it needs in testing to give confidence that the branch is suitable for promotion to release.


> The guy even admits it as well with his repeated "please don't actually use this in production" style messages - it's hard to give a greater indication than this that the code isn't yet ready for stable.

True that, and yet the kernel has zero issues keeping Btrfs around even though it's been eating people's data since 2010. Kent Overstreet sure is naive at times, but I just can't not sneer at the irony that an experimental filesystem is arguably better than a 15-year-old one that's been in the Linux kernel for more than a decade.


> True that, and yet the kernel has zero issues keeping Btrfs around even though it's been eating people's data since

I can imagine scenarios where known failure modes on an "inferior" tool are better than unknown failure modes on a "superior" one.


Honestly, it's mostly just a matter of trying to give myself the time to work with bugs as people hit them, and stage the rollout. I don't want clueless newbies running it until it's completely bulletproof.


It seems to be a difficult situation: he has bug fixes against the version in the stable kernel for bugs which haven't been reported. I can see both perspectives: on stable you don't want to do development, but you also want all the bugfixes you can get. I can also see the point of Linus, who wants just to add bug fixes and to minimize the risk of introducing new bugs.

Considering that Kent himself warns against general use right now, I don't quite see the urgency to get the bug fixes out - in my understanding Linus would happily merge them in the next development kernel. And whoever is set on running bcachefs right now might also be happy to run a dev kernel.


I agree. If the author himself is telling people it's not ready for production, it doesn't really matter what bugs this code has, unless it affects other subsystems or is a dangerous regression from the previous stable release.

If the bug it's fixing was already in the current release branch, wasn't noticed before and has only shown up now late in the stable branch lifetime, then it definitely doesn't seem like something that needs an urgent fix.


He is not submitting changes for stable. He is submitting non-regression fixes after the merge window. It's clear he understands the rules and the reasons for them, but he feels his own internal development process is just as effective at reducing the chance of major regressions being introduced in such a PR, such that he can simply persuade Linus to let things go through anyway.

Whether this internal process gives him a pass for getting his non-regression fixes in after the merge window is at the end of the day for Linus to decide. And Linus is finally erring on the side of "Please just do what everyone else is doing" rather than "Okay, fine Kent, just this once".

I would say it's ironic to start a comment saying: "It's very clear from that thread that he doesn't understand the purpose of the stable branch" when it's "very clear" from your opening paragraph that you don't understand the thread.


Perhaps you might enlighten me how it's "very clear" from my opening paragraph that I don't understand the thread. Granted, the initial post could be interpreted a number of different ways, but having read the whole thread, I think I have a pretty good understanding of the intent. But clearly, you have a different interpretation, so please - enlighten me to your way of thinking.

Taken at its most charitable, the opening of the first message "no more bug reports related to the disk accounting rewrite, things are looking good over here as far as regressions go" would suggest a meaning of "there are no significant bugs, the changes below are optional".

The next section in the change description then says that this fixes a number of very serious bugs. Straight away, I can see the potential for an interpretation difference. Is it "heads up, no changes required" or "these fixes are critical"?

He's told "no" by Linus, for reasons that seem to correlate with what I said (unless you'd like to point out in what way I don't understand the thread), and then rather than saying "yeah, they can wait until the next stable branch", he doubled down on the importance of getting these changes in, basically saying that the rules should only apply to everyone else and not him, because he knows that there won't be any new bugs because of $REASONS. $REASONS that didn't apply when the bugs were introduced. $REASONS that include automated testing, but that didn't find these bugs originally.

The thread (which apparently I don't understand) contains a perfect summary from Linus himself: "But it doesn't even change the issue: you aren't fixing a regression, you are doing new development to fix some old problem, and now you are literally editing non-bcachefs files too."

All this for some changes to a system that he's actively discouraging people from using because it's not production ready anyway, and so none of these bug fixes are actually critical for right now.

It's good he ultimately backs down, but he should never have been pushing for these changes this late in the stable branch timeline anyway.

So, that's my understanding of the thread. I'd be interested to hear how your understanding of the thread is so radically different from that.


> enlighten me

Fundamentally, on the whole, I don't think most of your interpretation is comment-worthy. (To clarify, I don't think it's particularly objectionable following from the premise in your opening paragraph.) But...

> in the stable branch timeline

Again. Like I outlined in my initial reply. This has nothing to do with stable. I don't know why you keep talking about stable.

The discussion is about bleeding edge mainline Linux. It's clear to me because:

* It is a PR for Linus. I don't know enough about stable to know if they use PRs, but most stable stuff I do know about involves marking patches for stable on the specific mailing lists oriented around stable.

* Linus doesn't handle stable.

* Linus and Kent are talking about the merge window and Kent submitting non-regression fixes after it. This wouldn't make any sense in a stable context. The process is different.

* If this was stable the discussion would be with GKH.

So, your comment is based on the premise of this being a discussion surrounding stable. It's not, so I don't know what to make of the rest of your comment on the basis of this incorrect premise.


> So, your comment is based on the premise of this being a discussion surrounding stable. It's not, so I don't know what to make of the rest of your comment on the basis of this incorrect premise.

The repo is literally called "linux-stable-rc"

It wasn't an incorrect premise, just incorrect terminology. Sorry, my bad, I shouldn't have referred to it prematurely as "stable" when it is just undergoing the process of stabilization.

> > in the stable branch timeline

> This has nothing to do with stable. I don't know why you keep talking about stable.

Yes, technically this isn't officially called "stable" until after the last release candidate. However, every release candidate should be considered an attempt to create the stable release (although, pragmatically, nobody expects the first few to have had enough testing to surface all the bugs that are likely to show up), and I don't think it's particularly egregious to talk about this change in the context of the stable branch timeline, as -rc releases are just as much part of the timeline as the initial stable release and the later point releases.

For context, this change was being requested for inclusion in -rc6, which was over 4 weeks after the merge window ended. This very well could end up being promoted to the stable release if no more significant bugs are found. There is no way a change of this complexity should have been accepted, and when Linus pointed that out, Kent shouldn't have been arguing about it at all, instead he should have just waited to get it merged into 6.12 as he originally intended.

> The discussion is about bleeding edge mainline Linux.

Yes, it's mainline, but also "bleeding edge" is kind of a misnomer, as it hadn't been accepting feature changes and was in stabilization for producing stable release candidates for a month already, and by that point would have had significant testing.

Sorry for causing confusion by referring to it prematurely as "stable". I don't look at the kernel all that often, and we use a different process with different terminology in our environment. We keep mainline open all the time for ongoing feature work, fork that to "stable" which only accepts bug fixes, and from that we periodically create release candidates which get released for testing and possibly get relabeled as the actual release. Sorry, I was still thinking in that mindset when I replied and didn't properly map the concepts back to those used in the kernel.


> The repo is literally called "linux-stable-rc"

It's not? What repo? The only two repos involved are https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin..., which is mainline (implicitly) and where these patches would land, and git://evilpiepirate.org/bcachefs.git, which is the source repo for the PR. The only branches being referenced are "master" (implicitly) for mainline and the tag "bcachefs-2024-08-23".

Regardless, to respond to the rest of your comment:

The reasons for why Linus is rejecting the change have nothing to do with the stable process and everything to do with the set release process. The mainline merge window opens, you (not you specifically unless you are a subsystem maintainer, if you want to contribute a patch as a non-maintainer, the process is completely separate and goes via the subsystem maintainers) submit features and bug fixes, the merge window closes, somewhere in the ballpark of 7 release candidates happen, and it's released as mainline. The goal of the RCs is to incorporate subsequent waves of fixes for any regressions introduced specifically by the bug fixes and new features.

Kent is claiming that, because he himself implements effectively an equivalently rigorous (according to him) feature testing and stabilisation process that his patches which do not fix regressions introduced by previous patches submitted during the merge window, but which do fix some real bugs, should be accepted outside the merge window.

In the past, Linus has let it slide, and he has also let it slide this time too: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin... . Linus is just asking Kent to stop doing this as he doesn't want to keep giving him special treatment.



Yes, I can imagine there are a bunch of repos with "stable" in their name on git.kernel.org, but where did you find a reference to this repo in that thread?

There are repos on git.kernel.org for microemacs, that doesn't mean this thread relates to microemacs.


I thought stable means "doesn't change"?


Not in the kernel land. Stable branches feature tens of thousands of patches.


Patches are expected, but the kernel interfaces shouldn't change, right? Like, if I write a kernel module, no patch should break my compatibility and make my module not build anymore (I think)? I don't care if it changes underneath as long as it doesn't change where I interface.


Userspace doesn't break, but if you don't want your module to break, upstream it (which is an important lesson about hardware selection: if it's not upstream and not being upstreamed, then you're going to get stuck on an old kernel at some point).

ZFS has broken on new releases (I don't recall if they were stable, I think they were), and that is one reason I won't use it as the main filesystem on Linux.


Usually. There are no hard rules though.

Upstream stable kernel certainly does not care about compatibility with your particular third-party module. You'll just have to add another KERNEL_VERSION #if. Maybe if you're Nvidia, or something, things are different.
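
To make that concrete, here's a minimal sketch of what those guards look like in a hypothetical out-of-tree module (the names are made up; the real part is the 5.6 switch from file_operations to proc_ops for /proc entries, a well-known example of such an interface change):

    #include <linux/version.h>
    #include <linux/module.h>
    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>

    static int demo_show(struct seq_file *m, void *v)
    {
        seq_puts(m, "hello from the module\n");
        return 0;
    }

    static int demo_open(struct inode *inode, struct file *file)
    {
        return single_open(file, demo_show, NULL);
    }

    /* 5.6 replaced file_operations with proc_ops for /proc entries,
     * so targeting kernels on both sides of that boundary needs a guard. */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 6, 0)
    static const struct proc_ops demo_ops = {
        .proc_open    = demo_open,
        .proc_read    = seq_read,
        .proc_lseek   = seq_lseek,
        .proc_release = single_release,
    };
    #else
    static const struct file_operations demo_ops = {
        .owner   = THIS_MODULE,
        .open    = demo_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
    };
    #endif

    static int __init demo_init(void)
    {
        /* proc_create() accepts whichever ops struct the kernel expects */
        proc_create("kv_demo", 0444, NULL, &demo_ops);
        return 0;
    }

    static void __exit demo_exit(void)
    {
        remove_proc_entry("kv_demo", NULL);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

Multiply that across every internal interface a module touches and every kernel version users actually run, and it's clear why the usual advice is to get the module upstreamed.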


Let’s not misrepresent Kent over a single incident of sending too much after a merge window. He’s extremely helpful and nice in every interaction I’ve ever read.


My 2 cents: these are the types of people that actually get the job done. All good software in my experience starts thanks to overachieving human embodiments of Pareto's law - people who can do alone in months what a team of average-skilled developers does in years.

In this industry it's very, very easy to run in circles, doing thing after thing without realising you are in a bikeshedding loop, you're overengineering, or you're simply wasting time. We need people who want to just push forward and assume responsibility for anything that breaks; otherwise, I'm sure that in 30 years we'd all still be using the same stuff we've always used, because it's just human nature to stick with what we know works, quirks and all.


> he wants to "move fast and break things"

That's not how I read that thread. This is just about where the diligence happens, not whether it can be avoided, and about exactly how small a fix must be to be mergeable in the fixes phase of kernel development.

I don't see that thread as being particularly angry either. There have been ones where both of them have definitely lost their cool; here they are having a (for them) calm disagreement. Linus is just not going to merge this one until the next development phase, which is fine.

There have been arguments involving this developer that do raise questions; I just don't see this as one of them.


That's a normal LKML conversation. Nowhere do I see actual bugs pointed out, in fact the only testimony by Carl said that bcachefs has been quite stable so far.

This is just about following procedure. There are people who follow procedure and introduce many bugs, and people who don't but write perfect software.

The bcachefs author cautiously marks bcachefs as unstable, which is normal for file systems. The only issue here is that the patch touched other areas, but in the kernel development model Linus is free not to pull it.


Is bcachefs-tools going into the mainline distros or into something that’s meant to be less stable and experimental? Linus makes it sound like there’s a more appropriate place for this work.

Edit: Reading through the thread, it seems like there is a claim of rigorous but effectively private testing. Without the ability to audit those results easily it’s causing a lot of worry.


I don't think Linus is particularly concerned about bcachefs-tools, and whether a particular distribution ships with it or not isn't a concern for the kernel. Presumably though, distributions that don't ship the tools may also want to disable it in the kernel, although I'd imagine they'd leave it alone if it was previously supported.

Linus' complaint was about adding feature work into a supposed minor bug fix, especially because (going from Linus' comments) they were essentially refactors of major systems that impacted other areas of the kernel.


Great and now the top thread on the HN discussion is about that drama only tangentially referenced in the article.


One thing I don't understand about Swift is why it uses a private fork of LLVM. Why can't they upstream whatever changes they need?


Because the same also applies to clang, and contrary to what people think, Apple, like every other big tech company, is only as nice to FOSS as it needs to be for its own purposes.

The same applies to all the C and C++ compiler vendors that have replaced their proprietary compilers (there are plenty more than just clang/gcc/msvc) with LLVM.

Such is the freedom of Apache/MIT/BSD style licenses.


Swift's LLVM fork is open-source, just not upstreamed. No Free Software license requires forks to upstream their changes.

https://github.com/swiftlang/llvm-project


Referring to their fork as "private" seems quite misleading then.


Yet you're complaining that they are using the licence exactly as they feel like.


Parent was not complaining. Parent was asking what the rationale is behind maintaining a separate LLVM fork instead of upstreaming and reducing the maintenance burden on Apple engineers.


I would not say it's about niceness; it's more about necessity. If you don't own something, the walls are high to get anything changed, naturally, because it might not be aligned with what the owner has in mind. And you want to move quickly. So you fork and apply your changes. But I also think that in the long run it hurts, because at some point your forks are too diverged. But so is life.


> Today there are a few non-trivial differences from LLVM, but we are actively working on either upstreaming or reverting those differences

https://github.com/swiftlang/llvm-project


Because Apple does not direct the LLVM project, and they have their own corporate designs and plans. I'm sure they upstream what changes they think will be accepted, but there's no guarantee upstream would accept what they want or need let alone on the timeline they need it.


Doesn't Xcode ship with a closed-source build of LLVM (with patches)? Is that what you're referring to?


I've just read the Swift build guide and there's no sign of a closed-source version of LLVM. What are you talking about?


peppermint_gum didn't mention a closed-source version of LLVM either. "Private" just means "their own" here.


> "Private" just means "their own" here.

"Fork" is actually the word that means "their own". The "private" does indeed mean "proprietary" in this context.


This is not the first time this has happened.

Back in 2019, many outlets[1] reported that Putin was still using Windows XP. Why? Because in a publicity photo, the taskbar on his computer had a blueish color. And some people even came up with theories like "it's because it's the last version that hasn't been backdoored".

The problem is that this isn't true. If you look at the photo in full resolution[2], it's clear that it was Windows 7 (or maybe even 8 with a Start menu mod).

[1] - For example: https://www.dailymail.co.uk/news/article-7800831/Vladimir-Pu...

[2] - http://static.kremlin.ru/media/events/photos/big2x/W2kaDAtDz...


I'm also interested in learning about it. I've only found this thread on Twitter: https://x.com/virtuallyfun/status/1804913568820699549

Apparently there are all kinds of goodies in that dump, like previously unknown betas of MS-DOS and OS/2, but there are no links to the dump itself. It's a shame that the community is so secretive about this :(

