
If you need to fight the GC to prevent crashes or whatever, then you have a system design issue, not a tooling/language/ecosystem issue. There are exceptions, but they're rare and not worth mentioning in a broad discussion like this.

Sadly very few people take interest in learning how to design systems properly.

Instead they find comfort in tools that allow them to over-engineer the problems away. Like falling into zealotry on things like FP, zero-overhead abstractions, "design patterns", containerization, manual memory management, etc, etc. These are all nice things when properly applied in context but they're not a substitute for making good system design decisions.

Good system design starts with understanding what computers are good at and what they suck at. That's a lot more difficult than it sounds because today's abstractions try to hide what computers suck at.

Example: Computers suck at networking. We have _a lot_ of complex layers to help make it feel somewhat reliable. But as a fundamental concept, it sucks. The day you network two computers together is the day you've opened yourself up to a world of hurt (think race conditions) - so, like, don't do it if you don't absolutely have to.
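A toy illustration of the race-condition point (plain Python standing in for two networked clients sharing a store; the names are made up for the example): the moment two machines do unsynchronized read-modify-write against shared state, updates can silently vanish.

```python
# Hypothetical shared key-value store; two clients each try to
# increment a counter with a naive read-modify-write protocol.
store = {"counter": 0}

# Both clients read before either writes...
a_read = store["counter"]
b_read = store["counter"]

# ...each increments its local copy and writes back.
store["counter"] = a_read + 1
store["counter"] = b_read + 1

print(store["counter"])  # 1, not 2: the classic lost update
```

On a single machine you'd reach for a lock; across a network, the equivalent coordination (consensus, compare-and-swap, leases) is exactly the "world of hurt" being described.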



It's because system design is a lot less theoretically clean than something like FP, zero-cost abstractions, GC-less coding, containerization, etc, and forces programmers to confront essential complexity head-on. Lots of engineers think that theoretically complex/messy/hacky solutions are, by their nature, lesser solutions. Networking is actually a great example.

Real life networking is really complicated and there are tons of edge cases. Connections dropping due to dropped ACKs, repeated packets, misconfigured MTU limits causing dropped packets, latency on overloaded middleboxes resulting in jitter, NAT tables getting overloaded, the list goes on. However most programmers try to view all of these things with a "clean" abstraction and most TCP abstractions let you pretend like you just get an incoming stream of bytes. In web frameworks we abstract that even further and let the "web framework" handle the underlying complexities of HTTP.
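A sketch of where the "incoming stream of bytes" abstraction leaks: TCP (and any stream socket) doesn't preserve message boundaries, and a peer can vanish mid-message, so even simple code has to loop over short reads and time out. `recv_exactly` is a hypothetical helper, not a standard API; `socketpair` stands in for a real network connection.

```python
import socket

def recv_exactly(sock: socket.socket, n: int, timeout: float = 5.0) -> bytes:
    """Read exactly n bytes or raise: a single recv() may return any
    prefix of what the peer sent, and the peer may close mid-message."""
    sock.settimeout(timeout)  # an unresponsive peer would otherwise block forever
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:  # peer closed the connection early
            raise ConnectionError(f"peer closed with {remaining} bytes missing")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# Local socket pair; the sender writes in two pieces, mimicking how a
# stream transport may fragment one logical message.
a, b = socket.socketpair()
b.sendall(b"hel")
b.sendall(b"lo, world")
print(recv_exactly(a, 12))  # b'hello, world'
a.close(); b.close()
```

Even this only papers over the easy failures; dropped connections, half-open sockets, and retransmit stalls still surface to the caller one way or another.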

Lots of programmers see a complicated system like a network and think that a system which has so many varied failure modes is in fact a badly designed system and are just looking for that one-true-abstraction to simplify the system. You see this a lot especially with strongly-typed FP people who view FP as the clean theoretical framework which captures any potential failure in a myriad of layered types. At the end of the day though systems like IP networks have an amount of essential complexity in them and shoving them into monad transformer soup just pushes the complexity elsewhere in the stack. The real world is messy, as much as programmers want to think it's not.


> The real world is messy, as much as programmers want to think it's not.

You hit the nail on the head with the whole comment and that line in particular.

I'll add that one of the most effective ways to deal with some of the messiness/complexity is simply to avoid it. That's easier said than done these days, because complexity is often introduced through a dependency, or hidden inside the benefits of adopting some popular architecture (e.g., containerization).

> It's because system design is a lot less theoretically clean

Yea this is a major problem. It's sort of a dark art.


> Computers suck at networking. We have _a lot_ of complex layers to help make it feel somewhat reliable.

I've got bad news, pal: your SSD has a triple-core ARM processor and is connected to the CPU through a bus, which is basically a network, complete with error correction and the exact same failure modes as your connection to the New York Stock Exchange. Even the connection between your CPU and its memory can produce errors; it's turtles all the way down.


Computer systems are imperfect. No one is claiming otherwise. What matters more is the probability of failure, rates of failure in the real world, P95 latencies, how complex it is to mitigate common real world failures, etc, etc, etc.
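A back-of-the-envelope that makes the "rates of failure" point concrete (assuming independent backend calls, a simplification): if each call individually exceeds its P95 latency threshold 5% of the time, fanning a request out over the network compounds the tail.

```python
# A request that fans out to N backends is "slow" whenever any one
# backend lands in its own 5% tail: P(slow) = 1 - 0.95^N.
for n in (1, 10, 100):
    p_slow = 1 - 0.95 ** n
    print(f"{n:3d} calls -> {p_slow:.1%} of requests hit the tail")
# prints 5.0%, 40.1%, and 99.4% respectively
```

That's the sense in which the probabilities matter more than the mere existence of imperfection: one on-die link with a tiny error rate behaves nothing like a hundred cross-datacenter calls.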

"Turtles all the way down" is an appeal to purity. It's exactly the kind of fallacious thinking that leads to bad system design.


The difference with distributed (networked) systems is that they are expected to keep working even in the presence of partial (possibly Byzantine) failures.

Communication between actors isn't itself the problem; unreliable communication between unreliable actors is.

If my CPU, RAM, or motherboard has a significant failure, my laptop is just dead; each component can assume the others mostly work and simply fail if they don't.


Come now. Nobody can sever my connection to the CPU with a pair of hedge clippers in the backyard.


>Computers suck at networking. ... The day you network two computers together is the day you've opened yourself up to a world of hurt.

This is actually a pretty insightful comment, and something I haven't thought about in years, since networking disparate machines together to create a system is now so second nature to any modern software that we don't think twice about the massive amount of complexity we've suddenly introduced.

Maybe the mainframe concept wasn't such a bad idea: just build a massive box that runs everything together, so you never get HTTP timeouts or failed connections to your DB, since everything is always on.



