
My interpretation of this is different:

- free space is used for swap until it is needed for files, at which point it is transparently released

- "free" uniformly means being used this way, unlike "not free" which means it will not be released transparently when space is needed, so "free" is not being contextualized

- since the space is free but reported as "not free" by df, df has a bug

(edit: formatting)



As someone who often plays near this limit, I can tell you that

> then transparently released to be used

is doing most of the heavy lifting here, because in practice it often is NOT transparently released.

To give a concrete example, imagine you have 500GB free, which is enough for you to unpack and analyze a single simulation output at a time. You unpack the first one, do your analysis, and then rm that folder and its archive to make space to pull down the next batch of results.

However, you will then find that, despite Finder saying there is plenty of space, trying to unpack another 500GB result fails with an "out of space" error unless you wait several hours or days for unnamed "background processes" to garbage-collect that drive space. What that looks like in practice is that Finder starts to report superlinear space usage (e.g. for every 10GB you unpack, >>10GB of usage is reported) until the operation fails.

Thus, I find that df is actually much better at predicting the real behavior of the system than Finder itself. If there were a way to manually trigger an immediate cleanup via the command line, that would also be fine. As it is, I just work off of external drives w/o APFS, which avoids the issue entirely.
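The df-style pre-flight check described above can be scripted. A minimal sketch using Python's standard library (the `needed_bytes` threshold and paths here are illustrative, not from the thread):

```python
import shutil

def has_room(path: str, needed_bytes: int) -> bool:
    """Return True if the filesystem at `path` reports enough free space.

    shutil.disk_usage() is backed by the same statvfs()-style interface
    that df uses, so on APFS it gives the conservative figure that
    excludes purgeable space, unlike Finder's optimistic number.
    """
    return shutil.disk_usage(path).free >= needed_bytes

if __name__ == "__main__":
    # Example: refuse to start a 500GB unpack unless df-style free space allows it.
    print(has_room("/", 500 * 10**9))
```

Gating large unpacks on this number rather than on what Finder shows matches the behavior the parent comment reports.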


df just makes standard C library calls, which in turn make system calls.

If you're going to implement something like this, then ffs do it right. I don't know the "easy" way to fix this, but in any case, if something is supposed to be transparently free and available, then standard tools using standard calls should see it as free and available, with specialized user-space tools that can look behind the scenes when needed.


For clarification: by "df has a bug" I meant that it doesn't produce the result it should given the situation, not necessarily that the problem is located in df's own source code rather than in the kernel, a library, another process, etc.


If the system is not reporting the space as free, then how in the world does the bug belong to df? It is the system not reporting it correctly.


Question: how can space needed for swap be transparently released?

When disk, RAM, and swap are all full, does write() sometimes cause the OOM killer to kill other processes?


Hopefully, yes.



