
The issue with Chesterton's fence is something of a meta-problem in conversation and politics in general. It is the ultimate thought-terminating cliche. If you accept it as given, it allows you to build walls faster than you can investigate their removal, and by the time you have investigated one, they are building walls around your original position. It's an argument for inertia, fine in its basis, but overapplied it becomes "Chesterton's big fucking infinite barrier" instead of a simple fence.


I doubt the conclusion Chesterton wanted anyone to draw was "consider carefully before breaking down a wall, but build all the walls you want without due diligence". The original context of the quote was an essay about reform. This is ultimately a thesis about conservatism in the old school sense. I think he wants you to think hard about the reasons for doing anything. I also don't think he'd refuse to tear down a wall if it presented an urgent threat—metaphorical wall or not. He's not a wall maximizing AI.

And, fortunately, I think you're the only one I've heard of who has interpreted it in this other, interesting way.


I don’t think it’s particularly uncommon to find bureaucrats who use “we need to do more research from first principles” to frustrate progress of any form.


How do you reliably differentiate between actual, bonafide, needs for 'research from first principles' and claimed needs?


I don’t know a general way, but have definitely found straightforward ways to tell in the past in specific circumstances.


Can you list a few of them as examples?


I can totally see how a bad faith interpretation could lead to “Chesterton’s big fucking infinite barrier”. And of course bad faith interpretations abound in politics.

It reminds me of people I’ve worked with who will always bring up edge cases in any discussion. They use their “ability” to identify edge cases to shut down conversation and present themselves as the smartest in the room. If they can think of ways this design might fail, of course they must be the most qualified to do the design, right? It really has a chilling effect on teams and has, in my experience, led to really bad technical decisions.


I fully agree about the potential for bad faith interpretations of Chesterton’s fence, and I’ve encountered them a lot.

But regarding edge cases, I’ve come to see that type of person in a different light. The way I see it, edge cases pressure test the design. Some edge cases we just don’t care about, and when someone raises them, we put them on a published list of non-goals. This acknowledges that they exist and shuts down future naysayers. But sometimes an edge case ends up mattering a lot; enough to change the direction of the design. And for that reason, I welcome the edge case hunters. If managed, it becomes a valuable source of feedback and strengthens the overall design.

But to your point, this type of individual can sidetrack the team if not managed. Having processes in place and tactics to incorporate the concern without getting stuck on it is critical, and not always easy.

In this regard, I think it’s a bit different than the fence. The fence is a barrier that already exists, while the pessimistic edge case finder is trying to build fences that don’t yet exist.


If you are unable to identify an edge case, how can you possibly determine if it is important?


Wait, are you saying that if Alice learns of an edge case by having it pointed out to her by Bob, then Alice is a priori unqualified to analyze whether the edge case is important?


I guess GP meant, if Alice doesn't know about the edge case, she can't analyze it.


Ah, the nitpicking contest. I know it well :( My solution is to bluntly and quickly ask for a likelihood estimation. If the nitpicker can't tell, they have to find out for the next discussion. Until then it is shelved. If it is below 80% it is shelved until it becomes more salient.

That led to a few interesting developments. 1) Participants state their edge case and immediately shelve it themselves. 2) Or they come prepared with good reasoning why case X is important and must be treated now. Discussion itself became more constructive, too. In any case a win!
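The shelving rule described above could be sketched as a small triage helper. This is purely illustrative: the function name, the return strings, and the 80% threshold are assumptions drawn from the comment, not a real process or API.

```python
from typing import Optional

def triage_edge_case(likelihood: Optional[float]) -> str:
    """Triage a raised edge case by estimated likelihood.

    likelihood: estimated probability (0.0-1.0) that the edge case
    actually matters, or None if the raiser can't estimate it yet.
    """
    if likelihood is None:
        # No estimate yet: shelve it; the raiser researches it
        # before the next discussion.
        return "shelved: needs likelihood estimate"
    if likelihood < 0.80:
        # Below the (illustrative) threshold: shelve until it
        # becomes more salient.
        return "shelved: below threshold"
    # Likely enough to be worth treating in the current discussion.
    return "discuss now"
```

The point of the sketch is that the burden of proof sits with whoever raises the edge case, which is what makes the rule self-enforcing.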


You just made me realize that I did this to somebody at a conference a month ago. Perfect is the enemy of good, I guess.


Note that destruction is mostly easier than creation. There are only a few exceptions, like state regulations and bureaucracy.


I said that's life

And as funny as it may seem

Some people get their kicks

Stomping on a dream

But I don't let it, let it get me down

Cause this fine old world, it keeps spinnin' around


I would add software systems to that list.


Think of the complexity of making a new browser. It's hard to destroy one, but arguably even harder to make one.


These are huge exceptions


Creating bullshit is a lot easier than destroying it.


You are too nice.

I think Chesterton's fence is awful. It's cited as a nugget of infinite wisdom, and it's the opposite.

Anyone who's ever worked in a corporation knows how strong inertia is. Chesterton's way of thinking is the default. Cruft accrues because people are so averse to removing things from the code base. What if someone needs this fence? Just to be on the safe side, I'll leave it here. Little upside to removing it, lots of possible downsides. That's how people think.

And this essayist comes and writes it down as some great insight, and people can point to it and get legitimacy.

Everyone, have guts. Be brave and call a spade a spade: Chesterton's fence is the lazy guy's rationalization to leave things as they are. And to look wise doing that.


It is annoying because it seems to have been watered down to “have the correct amount of prudence” at this point. Which is definitionally correct. But we don’t need an analogy to get there.


It’s effectively the precautionary principle, and it’s bad epistemology for the same reason the precautionary principle is bad epistemology.

You’re imagining a danger which, by definition, you cannot explain to anyone, and then you’re arguing that some very specific action should not be taken because it might trigger the imaginary danger.


Chesterton is talking about cultural systems which have evolved into their current form over a very long expanse of time. They are the result of a very long dynamic process.

It’s easy to look at the end result of that process and assume that one can easily move things around. But that’s not how the system came to be in the first place.

Maybe the new changes would be inconsequential. Maybe they would even make things run better. Or maybe they would be just an evolutionary dead end, eventually discarded by natural selection.

The point is that one needs to be humble and understand not just the current state of the system but also how it gradually came to be formed over time. Only then should one start proposing changes, carefully.


I think that Chesterton's fence is a useful guide when it comes to some incomprehensible systems, including our own biochemistry and physiology.

Once upon a time it was thought that some organs are useless (thymus in older age, appendix pretty much always). This has been revised. Nature often introduced hard-to-understand systems in order to cope with something we don't even realize is a problem.


Everybody who is listening to what you say and countering with, "sure that's possible in an extreme bad faith scenario" is falling victim to a black and white thinking fallacy.

If it's possible in the extreme, it's possible in a fine-grained gradient between the two extremes. The danger isn't the infinite barrier, which I believe you posed as a thought experiment. The danger is death by a thousand cuts for new ideas, even ideas important enough to make the relevant concerns irrelevant.


How is it a thought-terminating cliche? If anything, it's a thought-initiating cliche. It compels you to consider the actual trade-off between keeping the barrier and removing it, instead of just dismissing the status quo as "stupid old shit that makes no sense so we don't have to assign any value". Yes, it's inertia - and inertia is good, at least a reasonable amount of it. It allows for permanence, planning, prediction, order. Of course, overdoing it is bad - that's true for any principle.

> and by the time you have investigated it, they are building walls around your original position

Who are "they"? It seems that you are trying to blame a general principle on some kind of fight you personally are having and not winning. But that's not the point of the principle.


the thought experiment assumes that people aren't building walls or doing things for no reason. The main issue with tech bros using it is that Chesterton was aiming it at people who were trying to change social norms that had been around for centuries built around the cumulative knowledge and wisdom of millions of people

it wasn't supposed to be used for some 2 year old startup's SOPs, software architecture, or business decisions. The argument assumes you already have a system that has been working for a long time and was well thought out


Nonsense. Chesterton’s fence creates clear criteria for eliminating fences and other accumulated junk.


The criteria for removal are clear on paper, but impossible to meet, so the effect is the same.

It is impossible to meet because the reason why the proverbial fence was put up is, more often than not, a mixture of misunderstandings, logical fallacies and political motives from both the one who put it up, as well as other stakeholders involved, and most importantly, none of those aspects are documented and none of the people are around anymore. You'd need nothing less than a time machine to understand why the fence was put up.


As with all things in life, it's a matter of accumulating evidence up to an acceptance criterion. That criterion can never be 100% certainty, because nothing can be 100% certain.

The difficulty lies in agreeing on what that point should be, and on the magnitude of any particular piece of information as evidence for or against any particular hypothesis.

"The fence was a mistake" is a completely legitimate conclusion here. The principle exists to prevent us from jumping immediately to that conclusion out of convenience, arrogance, or ignorance. It should not be interpreted as preventing us from ever reaching that conclusion after some careful consideration.


"impossible to meet", "more often than not", "none of those aspects are documented": These are huge generalizations, and I very much doubt that they are generally true, or that anyone--including you--has done any kind of investigation into how general these conditions are.


You are the one adding the criterion that you need to know EXACTLY and fully why the fence was built. There is obviously a spot between “I have no idea why the fence was built” and “I know the exact intricate workings of all the minds that were involved in the decision to put up the fence”.


> The criteria for removal are clear on paper, but impossible to meet, so the effect is the same.

These arguments are exhausting, especially when we are talking about political issues.

Do you really believe that it is literally impossible to figure out that some legislative "fence" is really a moat protecting entrenched interests of the lobbyists who helped write the legislation?


Fascinating, can you give a couple examples of pre-existing rationale that make it impossible to replace an unneeded fence?


Not OP, but I’ve encountered this in situations where legacy code involved in mission critical business functionality was nearly impossible to remove due to the potential risk of unforeseen impact.

In other words, if the potential impact of an unforeseen breakage is high enough (costs us or the customer $$$), it’s not worth risking the change even if we can find absolutely no good reason for the current behavior to exist.

Example: complex spaghetti that touches billing calculations. These things are better addressed by a full rewrite/redesign followed by a long period of running both things in parallel until we’re confident that we didn’t miss anything. Maybe this is just a more complex way of removing the fence, but I think it’s more like moving away from the ground the fence is built on.
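The parallel-run approach described in this comment is sometimes called shadow running: serve the legacy result while silently computing the new one and logging any disagreement. A minimal sketch follows; the two billing functions are hypothetical placeholders, and the tolerance and field names are assumptions for illustration.

```python
import logging

log = logging.getLogger("billing-shadow")

def legacy_bill(order: dict) -> float:
    # Placeholder for the old spaghetti billing calculation.
    return order["amount"] * 1.10

def new_bill(order: dict) -> float:
    # Placeholder for the rewritten billing calculation.
    return round(order["amount"] * 1.10, 2)

def bill(order: dict) -> float:
    """Serve the legacy result, but shadow-run the rewrite and log
    any discrepancy. Once discrepancies stop appearing for long
    enough, the legacy path can be retired with some confidence."""
    old = legacy_bill(order)
    try:
        new = new_bill(order)
        if abs(new - old) > 0.01:  # tolerance is illustrative
            log.warning("billing mismatch for %s: old=%s new=%s",
                        order.get("id"), old, new)
    except Exception:
        # A crash in the rewrite must never affect real billing.
        log.exception("shadow billing failed for %s", order.get("id"))
    return old
```

The key design choice is that the new path can neither change the customer-visible result nor take the system down; it can only accumulate evidence.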


Sounds like you did the right thing by not changing the code! The rule worked

What would have been the advantage, in this situation, of ignoring the rule? What problem here is the Chesterton's Fence rule causing?

EDIT: (responding to below, your issue has nothing whatsoever to do with Chesterton's Fence. Making careful code changes to test the effect of removal is a great way to apply the rule. Building a separate system and testing in parallel would be an even cooler way to apply the rule)


The benefit of rearchitecting the code would have been high. As it stood, the status quo was blocking progress on other initiatives and making it difficult to meet the needs of customers. If we had ignored the fence and nothing broke, we could have immediately made major improvements that our customers had been begging us for.

The reality is that we don’t know if it worked. It’s possible that our conclusion that this had no reason to exist as written was accurate, and that doing nothing didn’t actually prevent anything bad from happening. It’s also possible that we saved ourselves from something we didn’t understand.

I’m not disagreeing with the premise of the fence, just pointing out that at times, even doing all of the due diligence to understand the fence isn’t enough to remove it.

Edit: I can't really respond to your response in its current form without these comments getting really difficult to understand. I disagree that this has nothing to do with Chesterton's fence. It's essentially a failure mode that can occur when applying the ideas behind the principle, i.e. there are times when learning everything we can about a barrier and believing with a high degree of confidence that the barrier isn't needed still isn't enough to remove it due to other factors. This points to the fact that this is a guideline, not a law.


> I’m not disagreeing with the premise of the fence, just pointing out that at times, even doing all of the due diligence to understand the fence isn’t enough to remove it.

I don't understand this. Implementing billing from scratch and running it in parallel to the old code is a form of doing due diligence. I.e. it is an application of Chesterton's fence. It might be an expensive application but it is one.

To me, this anecdote shows the value of documenting the why of software. I read some time ago that you should add code to systems in such a way that it is simple to remove it again. This discussion deepened that insight for me. This is preparing for Chesterton's fence.
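The "simple to remove again" idea can be made concrete: gate the fence behind one flag and document its reason at the point of use. Everything here is hypothetical, the flag name, the dedup scenario, and the incident reference are invented for illustration.

```python
# WHY THIS FENCE EXISTS (hypothetical example): upstream once sent
# duplicate order events, causing double billing. This dedup guard
# prevents that. Safe to delete once upstream guarantees
# exactly-once delivery. Removal = delete this file's guard and flag.
DEDUP_GUARD_ENABLED = True

_seen_order_ids = set()

def should_process(order_id: str) -> bool:
    """Return True if this order has not been seen before
    (or if the guard is disabled)."""
    if not DEDUP_GUARD_ENABLED:
        return True
    if order_id in _seen_order_ids:
        return False
    _seen_order_ids.add(order_id)
    return True
```

Flipping the single flag answers "what breaks if the fence goes?" cheaply, and the comment answers "why was it built?" without a time machine.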


> it is an application of Chesterton's fence

That's why I closed with:

> Maybe this is just a more complex way of removing the fence, but I think it’s more like moving away from the ground the fence is built on.

In other words, it's still learning from the cautionary tale of the fence, but the end result isn't a classic removal or non-removal of the fence itself. And it's not as if the idea has hard and fast rules :)

I agree regarding the value of documentation. None of us involved in the project were at the company when the code was written, and so we were left with a dangerous task.


What if you investigate why the fence is there and find nothing or a list of contradictory or nonsensical reasons? This is incredibly common in real life.

If you must understand something to remove it you do end up with a lot of things that can never be removed. It’s a big reason that laws and regulations build forever.


> What if you investigate why the fence is there and find nothing or a list of contradictory or nonsensical reasons? This is incredibly common in real life.

Congratulations, you can now opt to remove the fence. This is also discussed in the article.

Chesterton's fence is philosophical, and as always, there will be a value proposition to consider. E.g. building a new beltway that would connect two previously unconnected cities, or developing trade routes, completely overrides the value of a fence used to keep wildlife away.


Nonsense, the response you'll get is to spend more time investigating:

"To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think HARDER and investigate better. ""


At that point, you give up on the search for understanding and must evaluate the fence given the information that you do have and can make sense of.

It's really not any different from any other decision making process. The principle exists to encourage critical thinking, not to obstruct progress.


I actually like the principle as a tool for critical thinking, but like the precautionary principle which can also be a tool for critical thinking it has a tendency in practice to be converted into a thought-stopping and action-stopping cliche.


> find nothing or a list of contradictory or nonsensical reasons

Then you look at implementing process changes in the company at a higher level, there is 100% something absolutely wrong, but it may not be a problem with a physical system, but a human interaction one.


"It is the ultimate thought terminating cliche."

This is a straw man. The fence is simply an excuse to think a bit more deeply.


It is both a thought-terminating cliche and a reminder to think more deeply. Clearly the original intent was the latter.

But many people who invoke Chesterton’s fence do so with the intent to lobby against changing something. It’s raised not as a caution, but as a barrier. The concept gets the most airtime in political circles, and I think this has led many people to misunderstand the original point.

This is not a straw man as far as I’m concerned, and while I agree that the original intent was about careful reflection, this is often not how people engage with it.


No true fence ;)


Who defines what a 'fence' is in the first place?



