
I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors. I would guess that Windows NT handles take up more system resources since NT handles have a lot of things that file descriptors do not (e.g. ACLs).

Still, on the other hand, opening a lot of file descriptors will necessarily incur a lot of resource usage, so really if there's a more efficient way to do it, we should find it. That's definitely the case with the old way of doing inotify for recursive file watching, which needs one watch per directory; I believe most or all uses of inotify that work this way can now use fanotify instead much more efficiently (and kqueue exists on other UNIX-likes).
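
To make that concrete, here's a rough sketch (assuming Linux 5.1+ and CAP_SYS_ADMIN; the watched path /home is just an example) of how a single fanotify mark can cover a whole filesystem where inotify would need one watch per directory:

    #define _GNU_SOURCE
    #include <fcntl.h>         /* AT_FDCWD */
    #include <stdio.h>
    #include <sys/fanotify.h>

    int main(void)
    {
        /* FAN_REPORT_FID is required for create/delete events (Linux 5.1+). */
        int fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_FID, 0);
        if (fd < 0) { perror("fanotify_init"); return 1; }

        /* One mark covers the entire filesystem backing /home, instead of
           one inotify watch per subdirectory. */
        if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_FILESYSTEM,
                          FAN_CREATE | FAN_DELETE | FAN_MODIFY | FAN_ONDIR,
                          AT_FDCWD, "/home") < 0) {
            perror("fanotify_mark");
            return 1;
        }

        /* ... read(fd, ...) and parse struct fanotify_event_metadata ... */
        return 0;
    }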

In general, having the limit be low is probably useful for sussing out issues like this, though it definitely can result in a worse experience for users for a while...

> Feels a bit like Windows programming back when GDI handles were a limited resource.

IIRC it was also amusing because the limit was global (right?) and so you could have a handle leak cause the entire UI to go haywire. This definitely led to some very interesting bugs for me over the years.


> I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors

Same reason disks have quotas and containers have CPU & memory limits: to keep one crappy program from doinking the whole system. In general, it's seen as poor form to let your server crash just because somebody allowed infinite loops/resource use in their program.

A lot of crashing desktops, servers, even networks, come down to a single program that was allowed to take up too many resources. Limits/quotas help more than they hurt.


As long as you can lift them when it actually makes sense to do so.

The reason for this limit, at least on modern systems, is that select() uses a fixed-size fd_set (FD_SETSIZE, usually 1024), so it would cause issues if there were an fd numbered higher than that.

The correct solution is basically: 1. on startup, every process should raise the soft limit to the hard limit; 2. don't use select(), ever; 3. before exec'ing any process, set the limit back down (in case the thing you exec uses select()).

This silly dance is explained in more detail here: https://0pointer.net/blog/file-descriptor-limits.html
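
A minimal sketch of that dance in C, assuming Linux/glibc; the helper names are made up here, and FD_SETSIZE (typically 1024) is used as the select()-safe ceiling:

    #include <stdio.h>
    #include <sys/resource.h>   /* getrlimit, setrlimit, RLIMIT_NOFILE */
    #include <sys/select.h>     /* FD_SETSIZE */

    /* Step 1: at startup, raise the soft limit (rlim_cur) to the hard limit. */
    static void raise_fd_limit(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
            rl.rlim_cur = rl.rlim_max;
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                perror("setrlimit");
        }
    }

    /* Step 3: before exec'ing a child that might still use select(),
       drop the soft limit back to something select() can cope with. */
    static void lower_fd_limit_for_child(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
            rl.rlim_cur = rl.rlim_max < FD_SETSIZE ? rl.rlim_max : FD_SETSIZE;
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                perror("setrlimit");
        }
    }

    int main(void)
    {
        raise_fd_limit();
        /* Step 2: use poll()/epoll in the event loop, never select(). */
        lower_fd_limit_for_child();
        /* ... fork() + exec() the child here ... */
        return 0;
    }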


> I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors.

There was. Even if a file handle is only 128 bytes or so, on a system with just tens or hundreds of KB of RAM you wouldn't want the count to get out of control. On a multi-user system especially, you don't want one process going nuts and opening so many files that it eats all available kernel RAM.

Today, not so much, though an out-of-control program is still out of control.


The limit was global, so you could royally screw things up, but it was also a very high limit for the time: 65k GDI handles. In practice, hitting this before running out of hardware resources was unlikely and basically required leaking the handles or doing something fantastically stupid (as was the style at the time). There was also a per-process limit of 10k GDI handles that could be modified, and Windows 2000 reduced the global limit to 16k.

It was the Windows 9x days, so of course you could also just royally screw things up by just writing to whatever memory or hardware you felt like, with few limits.


> It was the Windows 9x days, so of course you could also just royally screw things up by just writing to whatever memory or hardware you felt like, with few limits.

You say that, but when I actually tried it, I found that despite the lack of robust memory protection, it's not as though it's particularly straightforward. You certainly wouldn't do it by accident... I can't imagine, anyway.



