Chesterton's Fence is a principle of caution that is especially important in engineering, and especially in software engineering. The principle originates with G.K. Chesterton in the form of a parable, paraphrased as follows:
Suppose you arrive in a new town and find that the central street has a fence stretching across it. This seems like a very dumb thing, and you can't think of any reason why anyone would put a fence here. So you decide it would be best to remove the fence entirely. Chesterton says no. If you don't know why the fence is there, then you should not take it down. Find out why it is there first, and then you can take it down.
Or more succinctly: don't change a system until you know why it was put into its current condition.
In software engineering, this "fence" is often a class or method written in a confusing manner, or an algorithm that seems less than optimally implemented. There is a constant temptation to complain about code and to want to refactor everything, especially things that seem obviously wrong or suboptimal. But if we try to refactor a block of code without understanding the original intention, we will likely only make things worse, or break something more complicated than we realized at first glance.
To really understand the principle, you have to recognize that things like fences, software blocks, or legal policies, are not the kinds of things that just happen. People make them. If a man builds a fence stretching across a street, he has to be pretty intent about doing so. If he is intent on doing so, then he has some idea in his head that putting this fence here is a good idea. If he didn't, he would not have built the fence. So there must be some reason, even if we don't know it.
If a software dev writes 200 lines of spaghetti code with jumps and labels, then he must have an idea in his head that this is a good idea. There must be some reason, even if it doesn't seem obvious. Now, maybe we can accomplish the same goal without the gotos. But to do that, we have to know the goal. If we don't know why the dev thought he needed gotos, then we can't make the code better. It will only become worse.
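To make that concrete, here is a hypothetical sketch (the function and its names are invented for illustration) of the kind of goto-laden code that looks like spaghetti at first glance but exists for a classic reason: a single cleanup path that releases resources in reverse order on every exit.

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical example: the gotos look like spaghetti, but they funnel every
// error path through one cleanup sequence instead of duplicating it.
int load_config(const char* path) {
    int result = -1;
    FILE* file = nullptr;
    char* buffer = nullptr;

    file = std::fopen(path, "rb");
    if (!file)
        goto done;            // nothing allocated yet, just return

    buffer = static_cast<char*>(std::malloc(4096));
    if (!buffer)
        goto close_file;      // must still close the file

    if (std::fread(buffer, 1, 4096, file) == 0)
        goto free_buffer;     // must free the buffer AND close the file

    result = 0;               // success; fall through to release everything

free_buffer:
    std::free(buffer);
close_file:
    std::fclose(file);
done:
    return result;
}
```

Rip out the gotos here without noticing what they are for, and you either leak the file and buffer on the error paths or duplicate the cleanup in every branch. That is the goal you have to know before you refactor.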
Finding out might mean tracking down the git blame, reading notes in old PRs or tickets, or asking the original author. But you should always take the time to find out.
Now here is the actual point: In the world of ChatGPT and LLMs and AI-generated code, we can no longer apply Chesterton's Fence to any code written after 2023.
Prior to 2023, if you saw a block of code, you had to assume that a human typed those characters with some intentionality, aiming at some purpose, and that therefore there was some reason for them, however vague or misinformed.
Since the arrival of AI-generated code, you cannot assume that. Maybe a person wrote this for a good reason, or a bad reason. Or maybe an LLM wrote it for literally no reason at all.
The problem of hallucinations in the generative AI that many software engineers have come to rely on means that there is no actual agency or intentionality behind the creation of code. The AI that writes the code is never trying to do whatever it is the engineer intends; the AI is only ever trying to fill in the most likely next word for the situation. Often, that word leads to properly functioning and sensible code. Maybe. But it doesn't have to. And once an LLM has started spouting nonsense, the next most likely word is also nonsense, and so on.
Entire purposeless code structures can appear in this way, unrelated to any actual function of the program, simply because once the model has started generating a structure, the most likely continuation is to finish it.
With a mechanical agent that builds things for no reason simply because a stochastic process got it started, the principle behind Chesterton's Fence can no longer apply. You can look at code written since 2023, and in some cases you might be exactly correct to conclude that there is no reason for it to exist in the way that it does.
Or not. Maybe it was written by a human for a reason you don't understand. Who knows.
LLMs really should not be used to create any code that is expected to work correctly. If the code is only supposed to work qualitatively, maybe for the purpose of a demo, then an LLM can be helpful. If the code is only supposed to show how to use some API call, then an LLM can be helpful. And for languages like C++ that require lots of boilerplate (like filling in the rule of 3/5 for classes), an LLM can cut down on a lot of excess typing, as in the sketch below.
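As a hedged illustration (the class and its members are invented for the example), this is roughly the rule-of-five boilerplate meant above: a class that owns a raw buffer needs all five special member functions spelled out, which is exactly the repetitive typing an LLM can save.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>

// Hypothetical example: an owning buffer class whose five special member
// functions are pure boilerplate.
class Buffer {
public:
    explicit Buffer(std::size_t size) : data_(new char[size]), size_(size) {}

    ~Buffer() { delete[] data_; }                      // 1. destructor

    Buffer(const Buffer& other)                        // 2. copy constructor
        : data_(new char[other.size_]), size_(other.size_) {
        std::copy(other.data_, other.data_ + size_, data_);
    }

    Buffer& operator=(const Buffer& other) {           // 3. copy assignment
        if (this != &other) {
            Buffer tmp(other);                         // copy, then swap
            std::swap(data_, tmp.data_);
            std::swap(size_, tmp.size_);
        }
        return *this;
    }

    Buffer(Buffer&& other) noexcept                    // 4. move constructor
        : data_(std::exchange(other.data_, nullptr)),
          size_(std::exchange(other.size_, 0)) {}

    Buffer& operator=(Buffer&& other) noexcept {       // 5. move assignment
        if (this != &other) {
            delete[] data_;
            data_ = std::exchange(other.data_, nullptr);
            size_ = std::exchange(other.size_, 0);
        }
        return *this;
    }

private:
    char* data_;
    std::size_t size_;
};
```

Having a tool generate this sort of thing is a legitimate time-saver; the point is that you still have to read it before it goes anywhere near a real codebase.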
Just please do not mindlessly commit the output of an LLM to a real working codebase without making sure it's doing what you want it to. If for no other reason, then because you'll break Chesterton's Fence.