Rationality Fails at the Edge
Assumption-digging as ritual
Marco Giancotti
Cover image: Photo by Shainee Fernando, Unsplash
You can spawn whole new mathematical worlds by shifting the axioms of a theory even a little bit. This happened, for example, in the first half of the 19th century, when some daring mathematicians asked themselves: what happens if I try to flip one of the foundations of Euclid's time-tested geometry (the one children learn in school), the parallel postulate, upside down?
Surprisingly, it works. Gauss wrote about it in a private letter in 1824:
The assumption that (in a triangle) the sum of the three angles is less than 180° leads to a curious geometry, quite different from ours, but thoroughly consistent, which I have developed to my entire satisfaction.
Although Gauss never published these results, Nikolai Lobachevsky later developed and published the same ideas independently, and Bernhard Riemann extended and generalized them. The result was non-Euclidean geometry, a radically different, but equally valid, way to reason about space. Although it was completely novel (and rather startling) at the time of its inception, today non-Euclidean geometry is indispensable for physical theories like general relativity, and by extension for applications like GPS, astrodynamics, celestial mechanics, the study of black holes, cosmology, and more.
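To make Gauss's "curious geometry" concrete: in the hyperbolic plane, a triangle's angles fall short of 180° by an amount proportional to its area. With the conventional normalization of curvature $K = -1$ (an assumption of this sketch, not something Gauss states in the letter), the classical angle-defect formula reads

$$\mathrm{Area}(\triangle) = \pi - (\alpha + \beta + \gamma),$$

where $\alpha$, $\beta$, $\gamma$ are the triangle's interior angles in radians. Small triangles have nearly Euclidean angle sums, which is one reason the deviation went unnoticed for two millennia.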
How can both Euclidean geometry and a theory that subverts its core tenets be right? That's a false dichotomy: they're both "right", because in mathematics all that matters is whether a theory correctly follows from the given assumptions. As Einstein famously said, “as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”
In math, truth—at least in the colloquial sense of the word—doesn't exist: you have either correct or incorrect statements, based on other statements you have previously accepted.
(It is true that Gödel's theorems showed that even mathematical systems have limits in how much they can correctly state, but we don't need to go there today. My goal here is to consider the limitations of rationality in real-world settings.)
Outside mathematics, things work a bit differently. We recognize a statement as "true" only when it is both correct (logical) and anchored to some objective reality. It's not enough, as inhabitants of the real world, to say that B logically follows from A: for that information to be useful, we also need to be sure that A is a good starting point in the first place. Unfortunately, we tend to focus on the B part much more than on the A part, although a rational argument that is flawlessly self-consistent and logical can—and often will—be wrong due to its flawed implied assumptions. Rationality fails at the edge.
Scientific Prestidigitation
Many magic tricks rely on the fact that rationality fails at the edge. They are surprising because the audience believes that the tools and laws of physics involved are different from what they really are, and this leads them to the wrong logical expectations.

In many intellectual arguments, we do the equivalent of (reciprocal) magic tricks without even realizing it. We take some things for granted as starting points of our logical steps, and don't feel compelled to voice them. Since they're obvious starting points for us, we think, they must be obvious to our counterpart as well. Being implied assumptions, they are hard to notice and compare with each other. Confusion and frustration ensue.
I see this all the time in arguments between highly rational, educated, and well-meaning people. Out of these, let's start with the scientists.
Of course, there are cases when even PhDs draw illogical or unjustified conclusions from the evidence—i.e. their arguments themselves are irrational—but exposing those mistakes is relatively easy. The problem is when both sides of a scientific debate are rigorous and correctly apply their reason to the same evidence... except, they start from different implicit assumptions.
Here are three examples of current debates in science (summarized by Perplexity's Deep Research model). Even if some of the terms are unfamiliar to you, the nature of the contrasting arguments should still come through.
Viral Origins of Complex Life Debate
- Description: Evolutionary biologists debate whether viral genetic material played a decisive role in eukaryotic cell evolution.
- Contention: Proponents cite homology between viral fusion proteins and eukaryotic membrane systems. Critics argue horizontal gene transfer evidence remains circumstantial.
Dark Matter vs Modified Gravity Debate
- Description: Astrophysicists disagree on whether the observed gravitational effects in galaxies are caused by invisible dark matter or require modifications to Newtonian dynamics (MOND).
- Contention: Proponents of dark matter cite gravitational lensing and cosmic microwave background data as evidence. Critics counter that MOND successfully predicts galaxy-scale phenomena without requiring fine-tuned dark matter distributions.
CRISPR Off-Target Effects Debate
- Description: While CRISPR gene editing shows therapeutic promise, researchers disagree on the clinical significance of off-target DNA modifications. Disputes center on detection methods and long-term safety.
- Contention: Some labs report high-fidelity editing with minimal off-targets using sensitive detection methods like GUIDE-seq. Critics argue current assays miss structural variants and epigenetic changes.
See what's going on? Every time, the evidence is open for all to see, but the proponents and critics of a theory weigh it differently. The reason is that they start from different perspectives: what kind of evidence is more authoritative, how elegant an explanation is "supposed" to be, and even things like how much work and reputation they have already invested in their own hypotheses.
Often, the experimental evidence is simply not enough to determine which assumptions are best. In these cases, the debate stems from that fundamental vagueness. Then, the scientists' job is to try to uncover more evidence, not to convince each other with the brute force of their logic.
I don't mean to say that scientific debates are useless and should always be avoided. There are times when the assumptions are shared, and the thing that is unclear is which line of reasoning should be applied. In these cases, a debate can be a powerful way to sort things out. What seems silly and harmful to me is keeping a dispute going for decades and getting nasty about it. When one side calls the other "intellectual fraud" and is called "fundamentalists" in return, as happened in the long-running units of selection debate in evolutionary biology, you know that something has gone awry.
The Messier Cases
By and large, edge-failure remains under control in scientific circles, and eventually, things do get sorted out. But the same thing happens regularly outside of academia, too, and that's where it can cause endless trouble.
In most daily conversations, which don't usually hinge on big complex logical arguments, everything is fine. It becomes an issue in slightly more intellectual disputes, on topics like ethics, business and military strategy, technological predictions (some AI doomsayers are very rational people), and—hmmm... all political ideology?
This is where we are at a constant risk of doing all our logical steps perfectly well and still failing at the edge without realizing it.
Now, the kinds of assumptions hiding under an argument will vary from case to case, from philosophical and religious beliefs to value judgments, from degrees of selfishness to political interests and so on, but one major assumption that is always present in every explanation is its purpose.
Every line of reasoning must always have one goal or another as its driving force, but we're often unable to state it clearly. Two people may get caught in an argument where both sides are rational and their logic is based on true facts, yet they apply different boundaries in their mental framings because they care about different things. This effectively means that the worlds they're simulating in their heads are different, and are ruled by different forces and rules of logic. And guess what: they will reach different answers, answers that will be impossible to reconcile until this fundamental rift is made explicit.
An area where this becomes an almost insurmountable problem is with conspiracy theories and religious pseudo-science. Here the superficial assumption is obvious enough—that the other side is a deceitful conspirator, or simply insane—but the deeper root of that disagreement is really the purpose of discussing those topics.
The goal of a Flat Earther is to feel part of a minority of enlightened people, while the goal of an Earth scientist is to be objectively right, or as close as possible. A creationist aspires to be a paladin of Christianity, while the evolutionary biologist aims to get published in scientific journals. This is why arguing with a conspiracist never works: with different goals, the choice of base assumptions will also be different.
Showing conspiracists holes in their reasoning only encourages them to shift their assumptions until those specific holes are covered, and even to refine their arguments in ways that sound largely reasonable and self-consistent. When creationists were told that you need science to explain the world, they rebranded their contorted arguments with names like "creation science" and "flood geology"; when their reliance on biblical texts was attacked for being mere dogma, they removed direct references to religion from their "theories", keeping them instead as implicit subtext; when stronger and stronger evidence piled up in favor of evolution, they retreated to attacking the parts that are still unclear with the concept of "irreducible complexity".
If your purpose is to appear right at all costs, you'll be able to interpret the same evidence in ways that are diametrically opposite to your adversary's. When Bill Nye (the Science Guy) debated creationist Ken Ham, Nye asserted that radiometric dating is a useful tool to estimate the age of the Earth because of its low error margins of less than a few million years (small when compared to the estimated age of the Earth of 4.5 billion years; see the quick arithmetic after the quote below). His opponent's rebuttal was that a few million years of error sounds like a lot, thus radiometric dating is useless. Who is right? Well, your definition of "right" will depend on what you're trying to achieve. The Wikipedia article about that debate makes the difference in agendas—and the futility of the discussion—quite clear:
Towards the end of the debate, Ham admitted that nothing would change his mind concerning his views on creationism, whereas Nye acknowledged that, if enough convincing evidence were presented to him, he would change his mind immediately.
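For scale, here is the back-of-the-envelope arithmetic behind Nye's point (taking "a few million years" to mean roughly three million, my assumption rather than a figure from the debate):

$$\frac{\text{error margin}}{\text{age of the Earth}} \approx \frac{3 \times 10^{6}\ \text{years}}{4.5 \times 10^{9}\ \text{years}} \approx 0.07\%$$

By most standards, pinning a quantity down to within a tenth of a percent is remarkable precision; calling it "useless" only makes sense under a very different assumption about what counts as an acceptable error.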
Assumption-Digging
How does all of this affect you, beyond diatribes about conspiracies and religions? If you're reading this blog, chances are you didn't stumble here while looking for the bathroom: you have the habit of engaging with rational arguments every now and then. You might even like to discuss reasoned arguments with other people yourself.
Reading or hearing something that you strongly disagree with might be tolerable when it comes from an illogical or openly biased individual, but what if it's from a smart, honest friend or expert in the field? You might know what I mean. Sometimes, you just can't fathom why someone you respect could think a certain way or how they continue to insist on reasonable-sounding ideas that you nevertheless know to be incorrect. That can stir up emotions.
I think that reminding ourselves that rationality often fails at the edge makes reconciliation easier.

I don't know how to effectively debate a conspiracist because they assume that the debate itself is pointless. This might be a (rare) case where disengagement is the wisest choice. For other cases, however, I propose a simple conversation or thinking technique, a ritual I call "assumption-digging."
To assumption-dig, you explicitly dedicate time, early on when you encounter a puzzling disagreement, to making an inventory of the things that the participants are taking for granted. If it is a live discussion, set aside your planned arguments for a moment and instead begin with a reciprocal Q&A to uncover each other's root assumptions.
The keyword here is "explicitly." You want to be upfront and relaxed about the need to expose and compare those foundational beliefs: both sides should agree that this step is necessary because the premises could otherwise be overshadowed by your laborious explication of logic.
Ask questions like:
- Why do you think it is important to show that you're right?
- What, in practice, will you lose if you're proven wrong?
- Do we mean the same thing when we use the keywords "A," "B," and "C" in our arguments?
- Do we agree on the value we assign to each type of available evidence?
- Do we have compatible philosophical views with a bearing on how we think about this topic?
- Are personal preferences and inclinations involved?
More often than not (if all participants are honest), you'll find that you don't even need to present the arguments because you've found a fundamental mismatch at the level of your premises. The conversation then becomes one of realigning on the purpose of the debate and, sometimes, accepting that you have different goals and thus different problems to solve. Congratulations, no debate is necessary! That might be, in my humble opinion, the nicest way to "agree to disagree."
If you both think you have the same goals but can't agree on which assumptions are appropriate, you can either agree that the lack of evidence is the problem—and defer the discussion to a time when more evidence will be available—or you treat the assumptions as explicit hypotheticals: if assumption X were true, do we agree that Y would be the consequence? Next, what if assumption Z were true? While this might not give you the satisfaction of "winning" the argument definitively, it can still be a valuable use of your time for its conciliatory and elucidating effects.
Why do we usually jump to the arguments and insist on our conclusions without thoroughly reviewing our premises? Perhaps we should blame cognitive dissonance. Perhaps pride, hubris, and laziness. But I suspect that, in most cases, we just don't realize that those assumptions we're making are only one choice among many. They've become part of the mental furniture—they are our default framings, lenses that we forget we have on our noses. That makes edge-failure a bit more forgivable, at least until you've read this post. ●