It was still an entirely new and experimental feature which had not been properly reviewed. Why couldn't this feature wait until the next kernel version? Other filesystems have had their recovery tools improved over many years.
Filesystems like ext2/3/4 keep their recovery tools in userland. Most of the recovery features in bcachefs are in the kernel. As a result, it is inevitable that there was, and will again be, a need to push a new feature into a stable release for the purpose of data recovery.
Over the long term, the number of cases where such a response is needed is expected to decrease.
Do you really want to live in a world where data loss in stable releases is considered okay?
>it is inevitable that there was, and will again be, a need to push a new feature into a stable release for the purpose of data recovery
It's really not: the proper way to recover your important data is to restore from backups, not to force other people to bend long-standing rules for you.
>Do you really want to live in a world where data loss in stable releases is considered okay?
If it really was so urgent, why didn't Kent just tell people to get those updates from his personal tree? There are rules in place if you want your stuff to get into Linus's tree; expecting Linus to pull whatever you send him without any resistance whatsoever is likely just going to end with him deleting your project, just like what happened here.
Because distributions don't ship Kent's kernel tree, and they're not going to. Distributions like Fedora ship as close to mainline as possible these days because of the pain they experienced shipping a heavily patched kernel in the past. Release cycles for Linus's tree are upwards of 3 months. With that kind of lengthy release cycle, for an experimental codebase undergoing rapid stabilization it was the right call: you don't want old code to linger around longer than necessary when the changes are predominantly bug fixes that pass regression tests. The choice should rest with the maintainer.
No, it is not. bcachefs needs to have all of its error-recovery code in the kernel, as it needs to be available when a storage device fails in any of a myriad of ways.
Maintaining a piece of code that needs to run in both user space and the kernel is messy and time-consuming. You end up running into issues where dependencies require porting gobs of infrastructure from the kernel into userspace. That's easy for some things, very hard for others. There's a better place to spend those resources: stabilizing bcachefs in the kernel, where it belongs.
Other people have tried and failed at this before, and I'm sure someone will try the same thing again in the future and relearn the same lesson. I know because business requirements at a former employer resulted in such a beast. Other people thought they could just run their userspace code in the kernel, but they didn't know about limits on kernel stack size, and they didn't know about contexts where blocking vs. non-blocking behaviour is required or how that interacts with softirqs. Please, just don't do this or advocate for it.
The fact that you got downvoted makes me shake my head. One could still interpret this as a contributor-guideline violation, and that's fair.
If I'm not mistaken, Kent pushed recovery routines in the RC to handle a catastrophic bug a user caused by loading the current metadata format into an old 6.12 kernel.
It isn't some sinister "sneaking in of features". This fact seems to be omitted from the clickbaity coverage of the situation.
As I pointed out elsewhere, there was another -rc release put out shortly after that effectively added back a feature that had been removed 10 releases earlier. Granted, it was only a small thing, but it shows that there is nuance in the application of the rule.