Chesterton’s Fence: Don’t Remove Something Until You Know Why It’s There

In my third year of teaching, I thought a particular school rule seemed completely unnecessary. It restricted students from a certain activity, and I couldn’t see any reason for it. “This could just be removed,” I thought.

Fortunately, I asked a senior teacher first. There was a reason. A few years earlier, a student safety incident had occurred because of that activity, and the rule was created afterward. The rule I’d dismissed as “unnecessary” actually had an important purpose.

This is the core of the Chesterton’s Fence principle.

What Is Chesterton’s Fence?

G.K. Chesterton introduced this principle in his 1929 book The Thing.[1] The core idea:


There is a fence across a road. A reformer says, “I don’t see the use of this fence; let’s clear it away.” Chesterton’s answer: don’t remove it until you know why it was built. Once you understand the reason, if you still want to remove it, then go ahead.

In the LessWrong community, this principle is used as a core checklist whenever changing existing systems or practices.

The Danger of Not Knowing Why the Fence Is There

Existing practices and rules often have reasons that aren’t immediately visible. When you change them without understanding those reasons, unexpected consequences follow.

Nassim Taleb calls these “second-order effects.”[2] The direct effect of a change (first-order) may be clear, but the impact on other parts of the system (second and third order) is hard to predict. When you change established practices without understanding their original purpose, these second-order effects can create unforeseen problems.

Viewed through Kahneman’s System 1/2 framework, we focus on the direct effects of change (what System 1 can easily imagine) and underestimate the indirect effects (what System 2 must laboriously analyze).[3]

Chesterton’s Fence in the Classroom

I now always ask before changing an existing practice: “Why does this exist?”

Specific examples:

  • Class scheduling: “Why was this class assigned to this time slot?” It turned out to have been based on student concentration patterns.
  • Specific grading criteria: “Why are these criteria so detailed?” They reflected past cases of student challenges and appeals.
  • Prohibited activities: “Why is this activity banned?” As with the safety incident above — there was a reason.

When Can You Ignore Chesterton’s Fence?

Chesterton’s principle doesn’t forbid change. It demands understanding before change. Once you know the reason:

  • If the reason is no longer valid → go ahead and change it.
  • If the reason is still valid → address that reason through other means as you make the change.
  • If the reason is unreasonable → change it, but discuss it thoroughly with other stakeholders first.

Eliezer Yudkowsky applies this principle to AI safety research as well.[4] Existing ethical systems and social norms contain lessons humanity has learned over thousands of years. Rushing to change them can bring unforeseen catastrophe.

The next time you think “this is unnecessary,” pause and ask: “Why does this exist?” Taking the time to find that answer before you decide is always worth it.


Your Next Steps

  • Today: Pick one rule or practice you’ve dismissed as unnecessary and ask someone who was there when it was created why it exists.
  • This week: Before proposing any change, write down the original reason for the current practice — even a simple notes app works.
  • Next 30 days: Review which “fences” turned out to have good reasons, which didn’t, and build asking “why is this here?” into your decision habit.

Last updated: 2026-03-16

About the Author

Written by the Rational Growth editorial team. Our health and psychology content is informed by peer-reviewed research, clinical guidelines, and real-world experience. We follow strict editorial standards and cite primary sources throughout.

References

  1. Chesterton, G. K. (1929). The Thing: Why I Am a Catholic. Dodd, Mead and Company.
  2. Taleb, N. N. (2012). Antifragile. Random House.
  3. Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
  4. Yudkowsky, E. (2015). Rationality: From AI to Zombies. MIRI.
  5. Tetlock, P., & Gardner, D. (2015). Superforecasting. Crown Publishers.
