One of my all-time favorite HN comments is from the original discussion of this article (https://news.ycombinator.com/item?id=11042400):
> Dependencies (coupling) is an important concern to address, but it's only 1 of 4 criteria that I consider and it's not the most important one. I try to optimize my code around reducing state, coupling, complexity and code, in that order. I'm willing to add increased coupling if it makes my code more stateless. I'm willing to make it more complex if it reduces coupling. And I'm willing to duplicate code if it makes the code less complex. Only if it doesn't increase state, coupling or complexity do I dedup code.
> The reason I put stateless code as the highest priority is it's the easiest to reason about. Stateless logic functions the same whether run normally, in parallel or distributed. It's the easiest to test, since it requires very little setup code. And it's the easiest to scale up, since you just run another copy of it. Once you introduce state, your life gets significantly harder.
> I think the reason that novice programmers optimize around code reduction is that it's the easiest of the 4 to spot. The other 3 are much more subtle and subjective and so will require greater experience to spot. But learning those priorities, in that order, has made me a significantly better developer.
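To make the statelessness point concrete, here is a minimal C sketch (an illustration, not from the comment): the stateful version's answer depends on call history, so it resists testing and parallelism in a way the pure version doesn't.

    #include <stdio.h>

    static int total = 0;                            /* hidden, mutable state */

    int add_stateful(int x) { return total += x; }   /* answer depends on history */

    int add_stateless(int base, int x) { return base + x; }  /* pure function */

    int main(void) {
        /* Prints "5 10" or "10 5": the calls are indeterminately sequenced,
           and each one mutates the hidden state. */
        printf("%d %d\n", add_stateful(5), add_stateful(5));
        /* Always "5 5": same inputs, same outputs, no setup required. */
        printf("%d %d\n", add_stateless(0, 5), add_stateless(0, 5));
        return 0;
    }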
As a kind of extreme example, I've gone off and duplicated a whole computing stack because I think C is the wrong abstraction. For example, the way signed and unsigned numbers are defined in the C standard really over-complicates simple programs. We often don't care about portability in these days of instruction set monoculture.
Here's how I render the silhouette of the Mandelbrot set using fixed-point math on my computer. Each statement translates to a single x86 instruction. To detect overflow in a computation, I don't perform more computation; I just check the processor's overflow flag, which C "abstracts" away from me.
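His stack isn't C, but a rough C-level analogue of the same idea is sketched below (a sketch only; the Q18.13 format, iteration cap, and ASCII grid are assumptions, not his code). GCC and Clang's __builtin_*_overflow builtins compile on x86 to a test of the overflow flag, so detecting overflow costs a conditional jump rather than extra arithmetic.

    #include <stdint.h>
    #include <stdio.h>

    #define FRAC 13                                /* Q18.13: 1.0 == 1 << 13 */
    #define FIX(x) ((int32_t)((x) * (1 << FRAC)))

    /* Fixed-point multiply; on x86 the builtin becomes imul plus a jump
       on the overflow flag, with no extra arithmetic. */
    static int fixmul(int32_t a, int32_t b, int32_t *out) {
        int32_t p;
        if (__builtin_mul_overflow(a, b, &p)) return 1;
        *out = p >> FRAC;
        return 0;
    }

    /* Overflow anywhere means |z| got huge, i.e. the point escaped. */
    static int in_set(int32_t cx, int32_t cy) {
        int32_t x = 0, y = 0;
        for (int i = 0; i < 64; i++) {
            int32_t x2, y2, xy, mag;
            if (fixmul(x, x, &x2) || fixmul(y, y, &y2) || fixmul(x, y, &xy))
                return 0;
            if (__builtin_add_overflow(x2, y2, &mag) || mag > FIX(4.0))
                return 0;                          /* |z|^2 > 4: escaped */
            x = x2 - y2 + cx;                      /* z <- z^2 + c */
            y = 2 * xy + cy;
        }
        return 1;
    }

    int main(void) {
        for (int row = -12; row <= 12; row++) {
            for (int col = -39; col <= 39; col++)
                putchar(in_set(FIX(col / 20.0 - 0.5), FIX(row / 12.0)) ? '*' : ' ');
            putchar('\n');
        }
        return 0;
    }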
Presumably something like the representation of signed numbers being implementation-defined (instead of two's complement, as is virtually always the case nowadays).
Yeah. The standard avoids obvious guarantees because of some computer you've never heard of. And then compiler writers use the standard as license to mess with the guarantee on _your_ computer.
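A concrete instance of that license (an illustration, not from the thread): signed overflow is undefined in C, so even on a machine where wraparound is perfectly well-defined, the optimizer may delete the very check you wrote.

    #include <limits.h>

    /* Looks like an overflow check, but signed overflow is undefined
       behavior, so GCC and Clang at -O2 assume x + 1 never wraps and
       compile this to `return 0;` -- even on two's-complement x86. */
    int will_overflow(int x) {
        return x + 1 < x;
    }

    /* The guarantee-respecting version compares against the limit instead. */
    int will_overflow_safe(int x) {
        return x == INT_MAX;
    }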
I recently wrote a post about the problem with naming things (https://itnext.io/and-naming-things-tailwind-css-typescript-...), but this post points out that naming isn't the only problem -- once an abstraction has been introduced, it's hard to get rid of, simply because removing it intuitively feels like a step backwards. Very good point!
I think this sort of misidentifies the problem: the problem isn't the wrong abstraction, it's that later programmers don't feel free to refactor to a better abstraction. Especially in languages with good refactoring tools (e.g., Java), inlining and refactoring are a much lower cost than the long-term maintenance cost of duplication.
Not sure. I'd argue that "refactoring" (as the name suggests) is less about eliminating duplication and more about restructuring in general. Sure, in many (most?) cases you introduce new abstractions into code that was written before those abstractions could be clearly identified. But in my understanding it is just as much about replacing or (more rarely) removing abstractions. When I see a commit that says "refactored X" I certainly don't assume it contains only code deduplication.
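For what it's worth, here is a hypothetical C sketch of the inline-and-restructure move both comments are circling (all names invented): a shared helper has accreted flags to serve different callers, and inlining it back lets each call site keep only the branch it actually exercises.

    /* Before: one "shared" helper, bent by flags to serve every caller. */
    double shipping_cost(double weight, int express, int international) {
        double cost = weight * 1.5;
        if (express) cost += 10.0;
        if (international) cost = cost * 2.0 + 5.0;
        return cost;
    }

    /* After inlining at each call site and deleting the dead branches:
       a little duplication, but each function can now evolve independently. */
    double domestic_express_cost(double weight) {
        return weight * 1.5 + 10.0;
    }

    double international_standard_cost(double weight) {
        return weight * 1.5 * 2.0 + 5.0;
    }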