Not every bug fix is going to be a trivial one-line change. Some fixes are going to be more involved. He was not just mixing in feature code; the code was needed to fix the bug.
That is the correct answer to a data-eating bug on a stable filesystem uncovered during the RC period, but the correct answer is also to then perform a thorough and public soul-searching about how such a bug got introduced into the development tree from which the pull request to Linus was created. Losing files is a critical bug for a stable filesystem; for an experimental filesystem, not so much. And if you're going to claim sufficient priority to break the RC convention because of the criticality of the bug, the soul-searching comes along with it. In this case, the result of the soul-searching was "the combination of the technology and the community responsible for it is not ready for mainline."
By that line of reasoning, no feature could ever be merged in a filesystem. You simply cannot prove complex code has no bugs. Testing increases confidence that code may be stable, but it isn't a certainty, as there's no guarantee that testing exercises all possible conditions. I know this because code I've written that stood up to a lot of testing has still resulted in data loss when a series of very complex conditions aligned to trigger a corner case.
Bugs are a fact of life. Bug fixes are a fact of life. Sometimes those bugs will cause data loss. Adding code in an -rc to support data recovery after a bug has caused data loss is a good thing for users. Portraying it as a bad thing is the worst kind of bikeshedding.
I said that if you introduce a major data-loss bug into a stable filesystem such that you have to break RC discipline, okay, do it, but be prepared for a period of community soul-searching afterwards. How does that imply that no feature could ever be merged?
If you're arguing that bcachefs is incompatible with kernel development discipline as practiced, well, I guess Linus agrees.