
> Among the many reasons why too much toil is bad

They missed the big one: human error is a common point of failure. Some of the big outages on GCP were due to ops configuration changes, GitLab wiped their prod DB one time, Knight Capital suffered death by config error, and so on.




I wonder if writing (bad?) software can also be toil.

Like if I need to change the spelling of a configuration setting, or add a new one, and I have to make sure to use the same spelling in three places because they are all "stringly" (sic) typed, is that toil?
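
A minimal sketch of that kind of duplication, assuming a plain dict-backed config; the key name and read sites are made up:

    # "Stringly typed": the same raw key string is repeated at every read site,
    # so renaming the setting means finding and editing every copy by hand.
    raw_config = {"max_retrys": 3}                  # hypothetical, deliberately misspelled key

    retries_a = raw_config.get("max_retrys", 3)     # read site 1
    retries_b = raw_config["max_retrys"]            # read site 2
    print("max_retrys =", retries_a, retries_b)     # read site 3

    # A named constant gives the key a single source of truth,
    # so the rename becomes one edit instead of three.
    MAX_RETRIES_KEY = "max_retrys"
    retries = raw_config.get(MAX_RETRIES_KEY, 3)
    print(MAX_RETRIES_KEY, "=", retries)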


If we ignore the value judgement and instead look at it as maintaining a sufficiently large codebase, then yes.

In the paper below, one of the examples given is migrating from one API to another. The paper describes a semantically aware, large-scale tool for refactoring a Google-sized codebase using MapReduce.

Given the externally visible churn in Google products, it isn't much of a stretch to imagine they have similar or worse internal churn. In fact, I have heard from xooglers that it was commonplace to have competing internal systems in different states of development and adoption.

https://research.google.com/pubs/archive/41342.pdf
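
For a feel of the shape, here is a toy map/reduce-style rewrite pass. It is not the paper's tool (which is semantically aware, working on parsed code rather than raw text), and the file names and API migration are invented:

    from multiprocessing import Pool

    # Placeholder sources standing in for real files; a real tool would rewrite
    # the parsed AST instead of doing a textual substitution.
    SOURCES = {
        "service/a.py": "x = old_api.fetch(1)\n",
        "service/b.py": "y = old_api.fetch(2)\n",
    }

    def rewrite(item):
        """Map step: apply a mechanical API migration to one file's source."""
        path, src = item
        return path, src.replace("old_api.fetch(", "new_api.fetch(")

    if __name__ == "__main__":
        with Pool() as pool:
            results = pool.map(rewrite, SOURCES.items())
        # Reduce step: gather per-file edits into one reviewable changeset.
        changeset = dict(results)
        for path, new_src in changeset.items():
            print(path, "->", new_src.strip())

The appeal over doing it by hand is the same one the comment above points at: the rewrite rule is written once and applied uniformly across the whole codebase.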


> human error is a common point of failure.

True. But it is also common to find that software automating the process didn't cover some corner case and you need human intervention. And it's worse if the process assumed that human intervention would never be necessary...
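
A small sketch of that failure mode, with every name invented: the automation treats an unrecognized input as a reason to escalate to a human instead of assuming that case never happens.

    KNOWN_LAYOUTS = {"v1", "v2"}

    class NeedsHuman(Exception):
        """Raised when the automated path cannot safely proceed."""

    def migrate_record(record: dict) -> dict:
        layout = record.get("layout")
        if layout not in KNOWN_LAYOUTS:
            # Corner case the automation doesn't cover: stop and escalate
            # rather than guess.
            raise NeedsHuman(f"unrecognized layout {layout!r}; manual review needed")
        return {**record, "layout": "v2"}

    if __name__ == "__main__":
        records = [{"id": 1, "layout": "v1"}, {"id": 2, "layout": "legacy"}]
        for rec in records:
            try:
                print(migrate_record(rec))
            except NeedsHuman as exc:
                print(f"escalating record {rec['id']}: {exc}")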



