It's always a good thing to have resource limits, to constrain runaway programs or guard against bugs. Low limits are unfortunate, but extremely high limits or unbounded resource acquisition can lead to many problems. I'd rather see "too many open files" than my entire machine freezing up when a program misbehaves.
There's a certain kind of person who just likes limits and will never pass up an opportunity to defend and employ them.
For example, imagine a world in which Linux had an RLIMIT_CUMULATIVE_IO:
"How else am I supposed to prevent programs wearing out my flash? Of course we should have this limit"
"Of course a program should get SIGIO after doing too much IO. It'll encourage use of compression"
"This is a security feature, dumbass. Crypto-encrypters need to write encrypted files, right? If you limit a program to writing 100MB, it can't do that much damage"
Yet we don't have a cumulative write(2) limit and the world keeps spinning. It's the same way with the limits we do have --- file descriptor limits, vm.max_map_count, VSIZE, and so on. They're relics of a different time, yet the I Like Limits people will retroactively justify their existence and resist attempts to make them more suitable for the modern world.
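The usual workaround for the most famous of those relics says it all: plenty of long-running daemons carry a few lines just to bump the RLIMIT_NOFILE soft limit to the hard limit at startup, something like this:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Raise the fd soft limit to whatever the hard limit allows;
       boilerplate many daemons ship just to escape the historical
       soft default of 1024. */
    int main(void) {
        struct rlimit lim;
        if (getrlimit(RLIMIT_NOFILE, &lim) != 0) {
            perror("getrlimit");
            return 1;
        }
        lim.rlim_cur = lim.rlim_max;   /* soft := hard */
        if (setrlimit(RLIMIT_NOFILE, &lim) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }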
Your entire machine won't freeze if you put a sensible limit on the direct cause of that freeze (which would be, e.g., memory or CPU %, not some arbitrary number of descriptors).
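By that I mean something like cgroup v2's memory.max and cpu.max, which cap the things that actually lock a machine up. A rough sketch, with a made-up group path and example numbers (you need a delegated cgroup and permission to write these files):

    #include <stdio.h>

    /* Sketch: cap the resources that actually cause a freeze (memory, CPU)
       via cgroup v2. The group path and values are examples, not a recommendation. */
    static int write_str(const char *path, const char *val) {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        int rc = (fputs(val, f) < 0) ? -1 : 0;
        if (fclose(f) != 0) rc = -1;
        return rc;
    }

    int main(void) {
        const char *cg = "/sys/fs/cgroup/mygroup";   /* hypothetical group */
        char path[256];

        snprintf(path, sizeof path, "%s/memory.max", cg);
        write_str(path, "2G\n");                     /* hard memory cap */

        snprintf(path, sizeof path, "%s/cpu.max", cg);
        write_str(path, "50000 100000\n");           /* 50% of one CPU */
        return 0;
    }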