The Correction

Factors Influencing Correction

Why do some beliefs that depart significantly from objective reality go uncorrected for long periods of time while others that are only slightly wrong get corrected quickly? What determines when alpha reality steps in and rubs our noses in the cold, hard facts? A review of many different kinds of corrections reveals four types of factors that collectively determine beliefs’ susceptibility to, or robustness against, correction: veridicality, falsifiability, accretion, and utility. By governing the process through which alpha and beta reality are kept in approximate alignment, these four factors are the fenceposts that trace the boundary between the objective world and the subjective world.


Veridicality

Smaller discrepancies between beliefs and the objective facts they represent have a better chance of going unnoticed than larger ones. Moths are often mistaken for butterflies, to which they are morphologically and genetically similar, but rarely for frogs. A teen who replaces a quarter of his parents’ bottle of vodka with water is less likely to be caught than one who waters down half of the bottle. We apply a margin of error to our judgments to compensate for our fallibility, which can allow beliefs that are false, but not false by much, to sneak through. Beta reality flourishes at the margins of alpha reality.

Rule 1: The larger the gap between a belief and objective reality, the more likely it is to be corrected.


Falsifiability

The gate into and out of beta reality is weighted in one direction: it is easier for a belief to be accepted than, once accepted, to be tossed out. False beliefs vary in the degree to which their lack of correspondence to objective reality can be demonstrated. A false claim that a shipwreck with a large cargo of gold lies at a specific latitude and longitude is easily refutable (given access to the equipment required to locate it), but the same claim about a shipwreck “off the coast of Florida” would be much more difficult to disprove.

Falsifiability depends on our access to relevant aspects of the objective world through our senses and their technological extensions. For a belief to be falsified, we need to be able to “touch” something out there – see it, measure it, sample it. Our general ability to falsify beliefs is increasing as we continue to develop tools and techniques to take the pulse of the outside world and know it in greater detail. DNA sequencing, for example, has enabled the verification and falsification of beliefs in areas ranging from unsolved crimes to the paths through which early humans migrated out of sub-Saharan Africa and populated the globe. Undoubtedly errors are sometimes made in its application, but few would dispute that its use is resulting in a net increase of knowledge about the alpha world.

Veridicality might seem to imply falsifiability – the bigger the gap between a belief and objective reality, the more noticeable it should be. That is usually the case, as illustrated by the dotted trendline in Figure 2. When a belief is very discrepant from objective reality (low veridicality), it tends to be more vulnerable to falsification. The near-success of the tailors’ deception in Hans Christian Andersen’s fairy tale notwithstanding, it is not very likely that they would be able to convince a nearly naked person and a crowd of onlookers that he is in fact wearing a fine suit. As an example at the other end, if the actual paid attendance at a sporting event is 38,627 but the reported attendance is off by 3, the claim is both highly veridical (it is not wrong by much) and difficult to falsify even with a clear definition of “attendance,” because of the measurement error associated with people arriving and departing at different times through different gates, being in and out of their seats during the game, etc.

Figure 2. Relationship Between Veridicality and Falsifiability

But there are exceptions to the general correlation between veridicality and falsifiability in both counter-quadrants. Our ability to detect and measure some alpha phenomena with a high degree of accuracy makes possible the falsification of even highly veridical beliefs – for example, a claim that no radioactivity is present in a specified area when in fact there is a trace amount. This is not a purely academic example: radioactivity is feared, and actions to minimize it are taken, disproportionate to the actual harm it poses specifically because of its high detectability, while the harm from other environmental toxins whose presence is harder to measure is often underacknowledged.

Conversely, low correspondence of a belief to alpha reality does not necessarily mean it is highly falsifiable. String theory holds that the entire universe is a product of invisible strings vibrating in 10 or 11 dimensions. This theory may be wildly wrong, but even its proponents admit that it is not testable (and hence not falsifiable). Religion is another example: either it is true (which still leaves the question of which religion out of many mutually exclusive contenders is the true religion), or it is false, but believers’ information processing errors (compartmentalization, rationalization: “He works in mysterious ways,” etc.) are a sufficient bulwark to prevent religions’ various truth claims, some very specific (creation chronologies, virgin birth, reanimation of the dead, etc.), from being falsified to the satisfaction of their adherents.

The falsifiability of a belief also depends on its position on the concrete–abstract continuum (how many degrees it is removed from directly observable entities and events) and on the amount of consensus about the definition of its terms. Complex beliefs – those that have multiple premises or are prone to being interpreted differently – are more resistant to falsification. Normative beliefs (that something is good or bad) often fall into this category. A belief that a proposed new tax would negatively impact a community may rest on a series of assumptions that are themselves difficult to confirm or falsify: that the additional tax burden would cause businesses to move out of the area, that the new program it would fund would not deliver the intended benefits, etc. General beliefs are more robust than specific ones: a religious cult’s prediction that the world will end on a specific date (and history records several such dates that have come and gone) is obviously more falsifiable than the fuzzy prophecies of the Book of Revelation and Nostradamus.

Rule 2: The more readily consensual means are available to disprove it, the more likely a false belief is to be corrected.


Accretion

The third correction factor is the epistemological equivalent of Lord Mansfield’s famous observation that possession is nine-tenths of the law. The number of possible beliefs about the world is nearly infinite but our ability to verify truth claims is limited. As a result, we tend to take for granted beliefs that have been accepted somewhere, by someone, and invest no further effort in verifying them, nor do we actively monitor them for changes in their truth status. As beliefs spread through an information community, they lay down roots and can become embedded in the knowledge infrastructure. It has even been suggested that beliefs are capable of autonomous propagation, adaptation and evolution.

The physical replication of beliefs facilitates their continued influence and increases their robustness against correction. The city founded by Moses Cleaveland on Lake Erie was supposed to bear his name, but after an “a” was repeatedly left out of a newspaper masthead, the misspelled name stuck. In the same manner, beliefs that are false but have been widely recorded are often sustained by their own mass. Contrary to popular belief, lemmings do not commit mass suicide by jumping into the sea, and P.T. Barnum did not say, “There’s a sucker born every minute,” but these misconceptions, along with countless old wives’ tales, urban legends and other mistaken beliefs, are kept alive through continued repetition. Some have expressed concern that the Internet is leading to a general lowering of the signal-to-noise ratio in our global dialogue by enabling the easy encoding and replication of unverified truth claims.

Rule 3: The more extant recordings of a false belief, the less likely it is to be corrected.


Utility

The reason that probably comes to mind first for why false beliefs are created and maintained is that they benefit someone or something. The benefits of manipulating others’ beliefs are obvious, but benefits can also accrue from self-deception, one purpose of which is the maintenance of cognitive consistency. People strive for harmony and balance in their attitudes and in their dispositions toward other people and their attitudes. Attitudes tend to change in the direction of congruence and symmetry. We apprehend our environment through knowledge structures that are, in effect, theories about the world, and we are biased toward receiving information that supports existing beliefs or can be interpreted in a manner consistent with them.

As a result, we are more disposed to accepting and retaining false beliefs that are consistent with our other beliefs. One of the reasons it took over 40 years for the Piltdown man fossils to be exposed as a hoax was that the discovery fit the already accepted model of evolution by providing the missing link between humans and our ape ancestors. A superficial similarity between the development of organisms from embryonic cells into mature adults and the evolution of species from simpler to more complex forms was sufficient to long sustain the belief that “ontogeny recapitulates phylogeny,” the ultimate falseness of Ernst Haeckel’s famous statement notwithstanding.

In addition to cognitive consistency, the maintenance of false beliefs about oneself can serve other purposes. The tendency of people to rate themselves above average on a wide variety of tested attributes, popularly known as the Lake Wobegon effect, shows how we embellish our self-images through self-enhancing biases. We suspend disbelief to varying degrees to protect ourselves from the psychological shock of having a dread disease, to be titillated by celebrity exposés in tabloids, and to entertain ourselves with the thought that Bigfoot really exists.

The deliberate misleading of others for gain is something with which we humans are all too familiar. Objective facts are either strongly misrepresented (consumer fraud, lying about a romantic affair) or, in less extreme manipulations, inflected or stretched (marketing, political campaigns) in order to enhance, enrich, promote, or legitimize. There is certainly no shortage of examples, currently and throughout history, including Christopher Columbus’ tricking of New World natives into thinking he influenced the gods – by accurately predicting a lunar eclipse – in order to get them to bring food to his starving crew.

The observation that beliefs have uses, and have the power to construct realities, is the foundation of William James’ philosophy of pragmatism. According to OD, however, while a practical use may be sufficient motivation to introduce a belief that does not in fact correspond with objective reality, it may not be sufficient to prevent the belief from being reset by the other correction factors (or by the utility that interested parties would gain from the belief being proven false).

Rule 4: The more useful a false belief is, the less likely it is to be corrected.

Corollary: The more useful the disproving of a false belief is, the more likely it is to be corrected.

