Hacker News

A thought experiment.

When is it "Normalization of Deviance", and when is it an "Efficiency Optimization"?

I mean, the difference is pretty clear after something has failed, but very murky before.



It is an Efficiency Optimization when you know why the rule is there and, having estimated the risks, perform a cost-benefit analysis.

aka "Chesterton's Fence"

Otherwise, it's "Normalization of Deviance":

* The build is broken again? Force the submit.

* Test failing? That's a flaky test, push to prod.

* That alert always indicates that vendor X is having trouble, silence it.

Those are deviant behaviours: the system is warning you that something is broken. By accepting that the signal/alert is present but uninformative, we train people to ignore it.

vs...

* The build is always broken - Detect breakage cause and auto rollback, or loosely couple the build so breakages don't propagate.

* Low-value test always failing? Delete it/rewrite it.

* Alert always firing for vendor X? Slice vendor X out of that alert and give them their own threshold.
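The last bullet can be sketched concretely. Instead of one global alert that a known-flaky vendor keeps firing (so people learn to ignore it), give each vendor its own threshold. A minimal sketch; all names and numbers here are illustrative assumptions, not from the thread:

```python
# Hypothetical per-vendor alert thresholds: vendor X is known to be
# flaky, so it gets a looser threshold instead of being silenced,
# and everyone else keeps the strict default.
DEFAULT_THRESHOLD = 0.01          # alert if more than 1% of calls fail
VENDOR_THRESHOLDS = {
    "vendor_x": 0.10,             # vendor X: tolerate up to 10% failures
}

def should_alert(vendor: str, error_rate: float) -> bool:
    """Fire only when a vendor exceeds its own threshold."""
    return error_rate > VENDOR_THRESHOLDS.get(vendor, DEFAULT_THRESHOLD)

# Vendor X at 5% errors: noisy but expected, so no alert.
print(should_alert("vendor_x", 0.05))   # False
# Vendor Y at 5% errors: well above the default, so alert.
print(should_alert("vendor_y", 0.05))   # True
```

The point is that the signal stays meaningful: an alert on vendor X now means something genuinely unusual, rather than being background noise everyone has been trained to dismiss.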


Unfortunately, I find that most software engineers don't understand the difference between actually determining the costs and benefits before choosing a tradeoff, and rationalizing whatever choice they already made.


I think that's ok. For me, it's more about "change the system, instead of ignoring it".

Once you change the system (document/rules/alerts/etc), then if it breaks, you change it again and learn the lesson. Both are conscious decisions by the org.



