
> you’re vastly overestimating the overhead of processes and the number of simultaneous web connections.

It's less the actual overhead of the process and more the savings you get from sharing. You can reuse database connections, keep in-memory caches, track in-memory rate limits, and various other things. The alternatives are shared memory, which is very difficult to manage, or an additional common process, but either way you are effectively back to square one with regard to shared state that can be corrupted.
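As a minimal sketch of the kind of shared state being described, here is a hypothetical in-memory token-bucket rate limiter that all requests in one process can share behind a lock. The class and parameter names are illustrative, not from any real framework; in a one-process-per-request model this state would instead have to live in shared memory or in a separate common process:

```python
import threading
import time

class InMemoryRateLimiter:
    """Token-bucket limiter shared by every request handler in one process."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()
        self.lock = threading.Lock()  # guards the shared counters

    def allow(self) -> bool:
        """Consume one token if available; return whether the request may proceed."""
        with self.lock:
            now = time.monotonic()
            # refill tokens for the time elapsed since the last check
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

# One instance shared across all request handlers in the process.
limiter = InMemoryRateLimiter(rate=1.0, capacity=2)
results = [limiter.allow() for _ in range(3)]
# first two calls allowed (burst capacity), third rejected
```

The lock is the point: because all handlers live in one process, a plain mutex is enough to keep the counters consistent, whereas isolated processes would need shared memory or a network round-trip for the same guarantee.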




You certainly can get savings. I question how often you need them.

I just said one of the costs of those savings is that a crash may bring down multiple requests, and you should design with that trade-off in mind.



