
No, it's way simpler than that. File descriptors are indices into a per-process array holding the state of each open file. Capping the maximum size of that array makes everything easier to implement correctly.

For example, consider opening and closing file descriptors concurrently. If the array never resizes, the search for a free fd and the close operations can happen without synchronization.
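
A minimal sketch of the array model being described, with every name (fd_table, MAX_FDS, struct file) invented for illustration. This single-threaded version shows the simplification: the array never moves or reallocates, so nothing is ever invalidated by a resize. (Whether the concurrent case really needs no synchronization is disputed in a reply below.)

    #include <stddef.h>

    #define MAX_FDS 1024                  /* fixed at compile time; never grows */

    struct file;                          /* opaque per-open-file state */

    struct fd_table {
        struct file *slots[MAX_FDS];      /* slots[fd] == NULL means fd is free */
    };

    /* Claim the lowest free slot. Because the array never moves or grows,
     * no pointer into it is ever invalidated by a reallocation. */
    static int fd_alloc(struct fd_table *t, struct file *f)
    {
        for (int fd = 0; fd < MAX_FDS; fd++) {
            if (t->slots[fd] == NULL) {
                t->slots[fd] = f;
                return fd;
            }
        }
        return -1;                        /* table full: EMFILE territory */
    }

    static int fd_close(struct fd_table *t, int fd)
    {
        if (fd < 0 || fd >= MAX_FDS || t->slots[fd] == NULL)
            return -1;                    /* EBADF */
        t->slots[fd] = NULL;
        return 0;
    }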




I meant the existence of ulimit was about partitioning resources.

Imagine a primitive UNIX with a single global, fixed-size, probing hash table of file descriptors, keyed by PID+FD: that's more what I was getting at. I have no idea whether such a thing ever really existed.
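
Since the comment is explicitly speculative, here is an equally speculative sketch of what such a table might look like. Every name is invented; an empty slot is marked by a NULL file pointer, and a real implementation would also need tombstones to support deletion from a linear-probing table.

    #include <stdint.h>
    #include <stddef.h>

    #define GLOBAL_SLOTS 4096             /* one table for the whole system */

    struct file;

    struct global_fd_entry {
        int32_t      pid;
        int32_t      fd;
        struct file *fp;                  /* NULL marks an empty slot */
    };

    static struct global_fd_entry g_table[GLOBAL_SLOTS];

    static size_t hash_pid_fd(int32_t pid, int32_t fd)
    {
        /* any cheap mixing function will do for a sketch */
        return ((uint32_t)pid * 31u + (uint32_t)fd) % GLOBAL_SLOTS;
    }

    /* Linear probing: scan from the home bucket until we find the
     * matching (pid, fd) pair or an empty slot. */
    static struct file *global_fd_lookup(int32_t pid, int32_t fd)
    {
        size_t home = hash_pid_fd(pid, fd);
        for (size_t n = 0; n < GLOBAL_SLOTS; n++) {
            struct global_fd_entry *e = &g_table[(home + n) % GLOBAL_SLOTS];
            if (e->fp == NULL)
                return NULL;              /* hit an empty slot: not present */
            if (e->pid == pid && e->fd == fd)
                return e->fp;
        }
        return NULL;                      /* table completely full */
    }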

> If the array never resizes, the search for a free fd and the close operations can happen without synchronization.

No, you still have to (at the very least) serialize lookups of the lowest available descriptor number if you care about complying with POSIX. In practice, you're almost certain to need more synchronization for other reasons anyway: threads share the file descriptor table.
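
A sketch of that point, with illustrative names: POSIX requires open() and friends to return the lowest-numbered free descriptor, so the scan and the claim must form one atomic step. A plain mutex is used here for clarity.

    #include <pthread.h>
    #include <stddef.h>

    #define MAX_FDS 1024

    struct file;

    struct fd_table {
        pthread_mutex_t  lock;            /* serializes allocate/close */
        struct file     *slots[MAX_FDS];
    };

    /* Two threads racing here without the lock could both decide the same
     * fd is "the lowest free one" and clobber each other's entry. */
    static int fd_alloc_lowest(struct fd_table *t, struct file *f)
    {
        pthread_mutex_lock(&t->lock);
        for (int fd = 0; fd < MAX_FDS; fd++) {
            if (t->slots[fd] == NULL) {
                t->slots[fd] = f;
                pthread_mutex_unlock(&t->lock);
                return fd;                /* lowest available, per POSIX */
            }
        }
        pthread_mutex_unlock(&t->lock);
        return -1;                        /* EMFILE */
    }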

The modern Linux implementation is not so terrible IMHO: https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds...


This particular resource limit stuff was not in "primitive Unix". It was a novelty that came with 4.2BSD.


The Linux FD table's performance does not depend on assuming it never grows: lookups are RCU-protected reads, and the table is expanded by allocating a larger array, copying, and publishing the new pointer.
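
A toy illustration of that point, not the kernel's actual code: readers go through a single atomic pointer load, so a lookup costs the same whether or not the table has ever grown. Linux does this with RCU (see the link above); this sketch deliberately leaks the old table where RCU would defer freeing it.

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    struct file;

    struct fdtable {
        size_t        cap;
        struct file **slots;
    };

    static _Atomic(struct fdtable *) current_table;

    /* Fast path: one atomic pointer load, no lock. Its cost does not
     * depend on whether the table ever grows. */
    static struct file *fd_lookup(int fd)
    {
        struct fdtable *t = atomic_load(&current_table);
        return ((size_t)fd < t->cap) ? t->slots[fd] : NULL;
    }

    /* Slow path (assume the caller holds a per-process write lock):
     * double the capacity, copy, publish. The old table is leaked here;
     * RCU would free it once no reader can still be using it. */
    static int fdtable_expand(void)
    {
        struct fdtable *old = atomic_load(&current_table);
        struct fdtable *new = malloc(sizeof(*new));
        if (new == NULL)
            return -1;
        new->cap   = old->cap * 2;
        new->slots = calloc(new->cap, sizeof(struct file *));
        if (new->slots == NULL) {
            free(new);
            return -1;
        }
        memcpy(new->slots, old->slots, old->cap * sizeof(struct file *));
        atomic_store(&current_table, new);
        return 0;
    }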



