
It's worth noting that log-structured and copy-on-write filesystems (which I've seen described as two types of journaling) like btrfs and F2FS log data as part of their normal operation without any performance loss, so you always get a consistent view of the filesystem (barring bugs in the FS code or fsync-is-not-really-an-fsync treachery from your hardware).


This does not answer your question, but wouldn't it be easier to get a cheap UPS with enough battery life to get your machine through an additional 5 minutes of uptime?

Although I've had problems with hardware hanging so completely that it does not respond to the reset button, and the only option is to cut power, so a UPS does not provide full protection.


UPSs are never really cheap, since when converting AC to DC to AC you lose a significant amount of power efficiency for your equipment. The funny part is the server PSU then converts back to DC power yet again. My understanding is that well optimized data centers distribute conditioned, battery backed DC power directly to devices to avoid those double conversion losses.


A small UPS will often have the equipment running directly on AC input power and only switch to the battery if the input power fails.


Oh, thanks for the correction. It seems I forgot that type existed or just never knew. Now I'm curious about the relative merits. I'd guess the transition is not as smooth and presents some kind of a risk that's unacceptable for critical infra.


The terms that describe those types are Line-interactive and On-line UPS.

E.g., https://blog.tripplite.com/line-interactive-vs-on-line-ups-s...


It'd be good if consumer UPSes had DC output for this reason. If only laptops could be standardized in terms of voltage requirements.

A lot of them are pretty similar; 19V - 21V seems quite common.

Until recently phones were standardized on 5V. Now, it's a mess again. :-(


I vaguely recall seeing someone modify their UPS for this in their "homelab", such that they were running a 12v cable modem, router and WAP off of the UPS battery pack directly somehow. They had dramatically longer run time from a full charge. (Well more than double, I think.)


Ooh, imagine a NUC or a similarly small machine running off of USB-C Power Delivery. 65 Watt power budget easily. I guess a Raspberry Pi would count as that?

Then a battery pack could provide that, as DC.


Journaling cannot guarantee data or filesystem integrity if your hardware is lying to you. If you send flush to an SSD and it reports "ok, your data is on the persistent storage", while actually keeping it in DRAM buffers (to get higher numbers on benchmarks), and your power goes down, shit ensues. This is surprisingly common behavior.


Wow, this jogged a memory from when Brad Fitzpatrick (bradfitz on HN) had to write a utility to ensure the hard drives running LiveJournal didn't lie about successfully completing fsync().[1] IIRC, the behavior caused fairly serious database corruption after a power outage.

Went back and found the link. To my surprise, it was 15 years ago. To my greater surprise, the original post, the Slashdot article, and the utility all remain available.

And hard drives (or their NVMe successors) still lie.

[1] https://brad.livejournal.com/2116715.html


> This is surprisingly common behavior.

Anecdotally, consumer NVMe SSDs actually tend to not lie about it. Every time I've benchmarked a consumer NVMe SSD under Windows both with and without the "Write Cache Buffer Flushing" option, it has a profound impact on the measured performance of the SSD. I have not observed a comparable performance impact for SATA SSDs, so I suspect Microsoft's description of what that option does is inaccurate for at least one type of drive, though it is at least possible that ignoring flushes is extremely common for consumer SATA SSDs but uncommon for consumer NVMe SSDs.


Sure, although the proper way to test it would be to write a lot of data to the drive, issue an fsync, and cut power in the middle of the operation. Rinse and repeat a (few) hundred times for each drive.

There's a guy on btrfs' LKML (also the author of [0]) who is diligent enough to do these tests on much of the hardware he gets, and his experience does not sound good for consumer drives.

[0]: https://github.com/Zygo/bees/


> although the proper way to test it would be to write a lot of data to the drive, issue an fsync, and cut power in the middle of the operation. Rinse and repeat a (few) hundred times for each drive.

This isn't quite right. You have to ensure that the drive returned completion of a flush command to the OS before the plug was pulled, or else the NVMe spec does allow the drive to return old data after power is restored. Without confirming receipt of a completion queue entry for a flush command (or equivalent), this test as described is mainly checking whether the drive has a volatile write cache—and there are much easier ways to check that.
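Something like this sketch could serve as the logging half of such a test (file names are hypothetical; the actual power cut and the post-reboot comparison have to happen outside the script):

```shell
#!/bin/sh
# Sketch of a flush-honesty test loop. Run it, cut power at a random
# moment, then after reboot compare the last "acknowledged" entry in
# fsync.log against what is actually readable in the testfiles.
for i in $(seq 1 100); do
  # conv=fsync makes dd call fsync() on the output and exit
  # successfully only after the kernel reports the flush complete,
  # so each log line records a flush the OS saw acknowledged.
  dd if=/dev/urandom of="testfile.$i" bs=4k count=4 conv=fsync 2>/dev/null \
    && echo "fsync $i acknowledged" >> fsync.log
done
```

The key point from the parent comment is the log line: a missing or stale `testfile.$i` is only evidence of a lying drive if `fsync.log` shows that flush was acknowledged before the power went out.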


Here is a post from him: https://lore.kernel.org/linux-btrfs/20190623204523.GC11831@h...

TL;DR: very few drives fail to implement flush correctly. Notice that he mainly uses hard disks, not SSDs/NVMe. Failure often occurs when two (usually rare) things happen at once, e.g. remapping an unreadable sector while power-cycling.


Does he share the results of his tests anywhere?


But as long as you write the journal entry first and the device guarantees flushes for writes in the order they are queued, there should be no inconsistent state at all?


> and the device guarantees flushes for writes in the order they are queued

NVMe does not require such a guarantee, nor does it provide a way for drives to signal such a guarantee.

(Part of the reason is that NVMe devices have multiple queues, and the standard tries to avoid imposing unnecessary timing or synchronization requirements between commands that aren't submitted to the same queue.)


I see. Then it makes sense. Thanks.


Assuming NVMe queuing works like SATA or SCSI queuing (which I believe it does), then basically queue entries are unordered [1]; the device is free to process them in any order. If you (as in, a person implementing a block layer or file system in an OS kernel, or some fancy kernel-bypass stuff) want requests A and B to be ordered before request C, then you must do something like

1. Issue A and B.

2. Wait for A and B to complete.

3. Issue a FLUSH operation (to ensure that A and B are written from the drive cache to persistent storage), and wait for it to complete.

4. Issue C with FUA (force unit access) bit set.

5. Wait for C to complete.

Alternatively, if the device doesn't support FUA, for writing C you must instead do

4b. Issue C.

5b. Wait for C to complete.

6b. Issue FLUSH, and wait for the FLUSH to complete.

Now, like wtallis already said, NVMe additionally has multiple queues per device, but these are independent of each other. If you somehow want ordering between different queues, you must implement that in higher-level software.

[1] The SCSI spec has an optional feature to enable ordered tags. But apparently almost no devices ever implemented it, and AFAIK Linux and Windows never use that feature either.
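In portable userspace terms the second variant (4b-6b) can be sketched like this, with fsync() standing in for the FLUSH command since FUA isn't exposed through the ordinary file API. The file names are hypothetical stand-ins for the A/B and C block writes:

```shell
#!/bin/sh
# Sketch of the write-barrier pattern above using dd; conv=fsync
# makes dd fsync() the output before exiting, the userspace analogue
# of waiting for the device's FLUSH completion.

# Steps 1-3: issue writes A and B, wait for completion, then flush.
printf 'journal entry for A+B' | dd of=journal.log conv=fsync 2>/dev/null

# Steps 4b-6b: only after the flush completes, issue write C and flush again.
printf 'data block C' | dd of=data.bin conv=fsync 2>/dev/null
```

The ordering guarantee comes purely from the program not starting step 4b until step 3's flush has returned; nothing about the writes themselves is ordered.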


Don't forget a keyboard-driven navigation plugin for your browser, like vimium. I have a desktop PC and use my mouse maybe once or twice a day, mostly for JS-heavy crap (which also probably presents enormous problems for the blind folk).


I try to use the keyboard for everything but unfortunately I haven't found a proper plugin for modern browsers. I tried to like tridactyl on Firefox but there were too many rough edges and it was unreliable (not really their fault, mainly limitations of what webextensions can do). As a result it was frustrating to have your keyboard shortcuts not work right ~25% of the time because you had the wrong part of the browser focused or something like that.


You could try Vim Vixen as an alternative: https://addons.mozilla.org/en-US/firefox/addon/vim-vixen/


vimium works okay-ish, or maybe I simply got used to it.

https://addons.mozilla.org/en-GB/firefox/addon/vimium-ff/

The main problem I have with it is it does not keep key combinations stable — if you press F → Esc → F → … (or Alt+F), it assigns different combinations for each link each time.


Ahhh! Haven't even thought about the browser shortcuts! Lol, I basically know F5 :/ Thanks, will def look into this.


Honest question: is there any reason for using RSA for new keys these days, if you are not working with extremely legacy systems? My ed25519 works fine with at least CentOS 7, and thankfully that's the oldest system I have to touch.

Maybe only if you want to store the key on a separate physical device, and it only supports RSA?
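For reference, generating one is a one-liner (the output path and comment here are just placeholders, and you'd normally want a real passphrase instead of the empty -N):

```shell
# Generate an ed25519 keypair non-interactively:
#   -N ''  empty passphrase (example only)
#   -f     output path (example only)
ssh-keygen -t ed25519 -N '' -f ./example_ed25519 -C "example comment"
```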


AWS for some reason only supports RSA keys...

Those are needed, for example, to push code to their managed git (CodeCommit), to have a key added when you launch an EC2 machine, and to fetch the Windows password.


Is there a reason for AWS being stuck with RSA? It's not like they lack money or engineering talent.


If you are generating a new keypair, you should default to ed25519. There are still a number of openpgp/smartcard devices that only support RSA keys.

Edit: Another reason one might still use RSA keys: ed25519 isn't a FIPS-140 approved algorithm (yet).


To be fair, “because it is FIPS-140 approved” is also a reason some people give for avoiding an algorithm.


The physical auth devices survive a lot and will last for years / decades. Quite a few of them supported only RSA keys, so don't expect it to go away any time soon.

My almost 10yo yubi still works just fine.


Why not? RSA is perfectly secure and more compatible. Short public key is nice, but does not matter much in reality as you'll copy&paste it anyway.


RSA 1024, which is supported, is not what anybody would call "perfectly secure".


Breaking a 1024-bit RSA key for SSH is a lot of effort for a very minimal reward.

The benefit if you do this is now you can impersonate the key's owner for new connections. So if it's a host key you can pretend to be that host if you're able to get on path between a victim and the real host, if it's a user key you can log in as that user with public key authentication.

But that's an active attack and an expensive key break.

Breaking 1024-bit RSA for HTTPS servers was a much juicier target because you can passively snoop RSA kex in TLS 1.2 and older. But that's not a thing in SSH, it's active attacks only.


RSA 1024 is still almost perfectly secure in practice. Something like the NSA might be able to break it only after the expenditure of years of work and zillions of dollars.

But that's beside the point, as we are talking about RSA 2048 here, which is in fact "perfectly secure", and the public key is not the part subject to downgrade attacks.


Azure only supported RSA keys last time I checked.


There's zero reason to ever use RSA for SSH keys, certainly.


At least Indian authorities seem to be relatively open about the censorship. Here in Kazakhstan they simply block everything they want, taking down many "innocent" sites as collateral damage, and write it off as "temporary networking problems" and "works for me, what are you talking about?".

For the past week I've had numerous problems connecting to TLS directly, without using a VPN. I still update my system directly from a mirror to avoid wasting limited data on a remote VM, that's how I discovered it personally. I worry they're trying to implement something nefarious behind the scenes, again.


From glancing at the source code I believe this uses the same algorithm as df from coreutils. It outputs pretty much useless data for btrfs, which needs to be handled differently. See:

  $ btrfs fi df <path>
  $ btrfs fi us <path>


Don't you worry, it'll all be deprecated and removed in a release or two.


and replaced by more stuff I don't want.

At least with Ubuntu/Debian there is documentation, and the software feels 'complete' - the CentOS systems I work on feel like working/living in a half finished house by comparison.


As much as I like to crap on nvidia, they have supported a relatively high quality Linux driver since back when Linux barely registered as a desktop platform. Hell, they even have a FreeBSD driver.


Why would you have a separate machine just for that? I thought one of the strongest points of PCs (as opposed to phones/game consoles) was their wide applicability to pretty much every task?


In my case, mainly to lower the risk of supply chain attacks. Windows gaming and other such activity still includes a lot of “must run as administrator and does unclear things with this”, especially in anything with anti-cheat mechanisms as mentioned elsewhere, and there are environments (especially when dealing with mods) where you can wind up running code from dozens of randoms across the Internet in nothing approaching a meaningful sandbox. Popular messengers, video apps, etc. don't exactly seem trustworthy nowadays either.

I wouldn't want to try to directly deliver anything from such an environment that I would ask other people to run. Even my more-trusted development laptop feels scary at times, especially when I'm operating in environments where I have to do about the same thing as above with installing a dozen dependencies from who-knows-whom.

I generally use separate build UIDs for some measure of separation in these cases, but we still have Linux and X being potential emmentaler attack surfaces, and I haven't yet arranged my workflow to the point that spinning up new virtual machines is trivial, especially because then you have a lot more friction with testing GUI software, sharing existing files, etc. etc.—most of the easier solutions to which seem to be very cloud-oriented and “when your Internet connection goes down, so does everything else”, which is something I insist on pushing back against in this context, including because “someone upstream did something unexpected and now everything is instantly broken in a way I have no real leverage over” is its own massive trust hazard.

My dedicated low-sensitivity machine isn't very powerful, so the cost wasn't as much of an issue as it could have been; it was a midrange laptop several years ago which I'm still using. If your workplace environment comes with its own hardware, then that's a thing too.

It would certainly be nice to have better, though, and the desire for less redundancy of costly hardware is legitimate. My desired setup from a while ago, which I never managed thus far, is to have more powerful hardware with multiple boot configurations, but not all of them persistently present like most multi-boot machines: instead, I would physically attach and detach system and user disks, assuming that firmware-level attack persistence is rare, and then rely on power-down flushing any lower-trust code before attaching a higher-trust disk. It'd be hard to ask most people to do this, though.


Obviously not if they're having to shoehorn Linux in there just to attract developers!


They don't have to do that. But it's yet another thing that can be done.


Sure, but it rather pokes a hole in the concept of general computing, does it not? If windows were an acceptable general purpose OS people here would be mad they were wasting their time on this.

Somehow I just don't see the same demand for people on macs/linux and WINE. It seems like a niche interest to actively want to combine the two worlds and I am VERY interested if any significant number of people use this who are not driven by gaming needs.


> Sure, but it rather pokes a hole in the concept of general computing, does it not?

I don't see how. I pretty much think anyone should do whatever they want with their computers. Windows is acceptable for some, and not for others. I can't see the point of getting mad about what software other people are running.

I have never used WSL, but if I ever do, it definitely won't be for gaming.

