
Postgres (as an example) has no upper limit for database size, and the upper limit for a single table is 32TB at the default 8kB block size (more with bigger blocks). And you can buy stock hardware with 6TB RAM and 76TB SSD, e.g. from Dell.
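
A quick way to sanity-check those numbers against a live instance, as a minimal sketch (assumes psycopg2; the DSN is a placeholder, the catalog functions are standard Postgres):

    import psycopg2

    DSN = "dbname=app user=app host=db.internal"  # placeholder connection string

    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        # Total size of the current database.
        cur.execute("SELECT pg_size_pretty(pg_database_size(current_database()))")
        print("database:", cur.fetchone()[0])
        # Five largest tables, counting indexes and TOAST data too.
        cur.execute("""
            SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
            FROM pg_class WHERE relkind = 'r'
            ORDER BY pg_total_relation_size(oid) DESC LIMIT 5
        """)
        for name, size in cur.fetchall():
            print(name, size)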

If you max out that single server, you can easily hire one database engineer to take care of horizontal scaling.
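
To make that concrete: the usual first step for that engineer is a deterministic routing layer in front of several nodes. A toy sketch (the shard hosts and key choice are invented for the example; a real setup also needs a resharding story):

    import hashlib

    # Placeholder shard DSNs.
    SHARDS = [
        "dbname=app host=pg-shard-0.internal",
        "dbname=app host=pg-shard-1.internal",
        "dbname=app host=pg-shard-2.internal",
    ]

    def shard_for(user_id: str) -> str:
        # md5 keeps the mapping stable across processes and restarts,
        # unlike Python's salted built-in hash().
        digest = hashlib.md5(user_id.encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for("user-42"))  # always routes to the same shard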




It doesn't help when the number of queries and updates increases a lot. For many systems, having terabytes of data also means having thousands or millions of users.
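
The usual first mitigation for read-heavy load is to push reads onto streaming replicas. A naive sketch (hostnames are placeholders; replica reads can lag the primary, which the app has to tolerate):

    import random
    import psycopg2

    PRIMARY = "dbname=app host=pg-primary.internal"
    REPLICAS = [
        "dbname=app host=pg-replica-0.internal",
        "dbname=app host=pg-replica-1.internal",
    ]

    def connect(readonly: bool = False):
        # Writes always hit the primary; reads spread over the replicas.
        dsn = random.choice(REPLICAS) if readonly else PRIMARY
        return psycopg2.connect(dsn)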

You also probably want backups, and possibly synchronous replicas that are updated inside the transaction to avoid data loss. These things can lower your throughput even more.
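
The "updated inside transactions" part maps to Postgres's synchronous replication settings, roughly like this (the standby name is a placeholder; ALTER SYSTEM needs superuser and can't run inside a transaction, hence autocommit):

    import psycopg2

    conn = psycopg2.connect("dbname=app host=pg-primary.internal")
    conn.autocommit = True
    with conn.cursor() as cur:
        # Don't return from COMMIT until one named standby has applied the WAL.
        cur.execute("ALTER SYSTEM SET synchronous_standby_names = 'FIRST 1 (standby0)'")
        cur.execute("ALTER SYSTEM SET synchronous_commit = 'remote_apply'")
        cur.execute("SELECT pg_reload_conf()")
    conn.close()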


Show me a single person who runs a 32TB table on a single node, and I'll show you someone who will cry over a prod failure sooner rather than later.



