Foundations of Subjective Reality

If men define situations as real, they are real in their consequences.  –W. I. Thomas

Truth lives, in fact, for the most part on a credit system. Our thoughts and beliefs “pass,” so long as nothing challenges them, just as bank-notes pass so long as nobody refuses them.  –William James

It’s the truth even if it didn’t happen.  –Ken Kesey, One Flew Over the Cuckoo’s Nest

What makes subjective reality possible? First, we need to define what we mean by “reality.” The definition of reality used here is a system that imposes effects on the entities within it. For entities that can’t think, there is only one reality, and only one source of effects. A rock, for example, resides in an objective reality that subjects it to a variety of physical and chemical forces. Deep in the earth, it can be deformed by pressure; on a beach, it can be eroded by surf; on a steep slope, it’s vulnerable to being repositioned by gravity.

People also occupy this corporeal world and are subject to its effects, but human reality has an additional layer. Unless you live on a desert island, were born with a debilitating congenital condition, or suffered a serious accident, the effects of the physical world are likely to have less of an impact on your life than the effects of the social world. The latter effects are based on beliefs.

Beliefs – our own, and those of others – determine everything from our self-image to our social status and how much money people give us. Beliefs can be objectively true or false, but the critical point about their role as the building blocks of subjective reality is that their objective truth status is irrelevant to their effects, per sociologist W.I. Thomas’s famous observation above. The hardships of imprisonment are not lessened by the actual innocence of a person sentenced to prison for a crime he did not commit, and psychologists know that the effects of unearned guilt are no less debilitating than those of guilt that is earned. Beliefs set in motion their own powerful mechanisms of approval and disapproval, which often appear to operate independently of their correspondence with the physical world.

But how independent are they? We know that many beliefs persist despite having been repeatedly disproved – superstitions, conspiracy theories, urban legends, etc. Yet it would be going too far to conclude that beliefs’ objective truth or falsity places no limits on the ability to establish and sustain them. We might have success convincing some people that the daily drinking of, say, a cup of lemon juice has beneficial health effects; we would not expect to be as successful making the same claim about drinking a cup of lye. Beliefs that pigs can fly, or that Barry Manilow wrote the Magna Carta, would be difficult to establish and maintain.

Modeling the World

The survival of living organisms depends on the choices they make in response to continuous environmental challenges. To make the right choices, they need accurate information about the world. Where is water most likely to be found? Is that animal a threat or a potential meal?

The higher animals process the information provided by their senses to form mental representations of the world – models which embed their assumptions about how the world operates and which allow them to run mental simulations of possible actions to assess the likely outcomes. It is the existence of these representations (or knowledge structures, or schemas, or beliefs – the terms are used interchangeably here) that makes subjective reality possible and provides the foundation for its construction. Beliefs can and do vary in the accuracy with which they represent the outside world, and that variation accounts for the factual differences between objective and subjective reality.

If we were omniscient and infallible, there would be no subjective reality, because our mental models would fully and precisely mirror the external world.1 The capacity limitations and biases of human information processing guarantee both that our mental models will be incomplete representations of elements of objective reality and that they will contain inaccuracies. This means, first, that for human cognition, objective reality is underdetermined: there is often more than one plausible theory for a given environmental state, and insufficient information to reliably eliminate the incorrect ones. Scientific researchers are careful to avoid “going beyond the data” – drawing a conclusion that, while not contradicted by the data they studied, is only one of several explanations that could fit that data equally well.
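To make underdetermination concrete, here is a minimal sketch in Python (the data points and both models are invented for illustration): a straight line and a hand-picked cubic agree perfectly on three observations, yet diverge everywhere else. Nothing in the data itself identifies which theory is the incorrect one.

    import numpy as np

    # Three observations of some environmental quantity (hypothetical data).
    x = np.array([0.0, 1.0, 2.0])
    y = np.array([1.0, 3.0, 5.0])

    # Theory A: a straight line, fit to the data. It passes through
    # all three points exactly (y = 2x + 1).
    line = np.polyfit(x, y, deg=1)

    # Theory B: a cubic chosen by hand to agree at x = 0, 1, 2.
    def cubic(t):
        return t**3 - 3*t**2 + 4*t + 1

    # Both theories fit the observed data equally well...
    for t in x:
        print(t, np.polyval(line, t), cubic(t))   # identical values

    # ...but go beyond the data and they disagree sharply:
    print(np.polyval(line, 3.0), cubic(3.0))      # 7.0 vs. 13.0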

As humans going about our daily lives, however, we go beyond the data all the time, because the many decisions we routinely face require it. Our senses provide us with only a limited amount of information, and we have to fill in the rest (often incorrectly) through extrapolation and inference. Nor is the imputation of missing environmental data limited to our conscious, higher-level thinking. Neuroscience tells us it starts with low-level perceptual processes like vision, which uses its own hypotheses about such things as potential objects’ edges and texture gradients to filter and actively structure (rather than faithfully record) what William James called the “blooming, buzzing confusion” of the sensory world, reducing the task to a complexity our information processing systems can handle.

In addition to constructing our models of objective reality from only partial environmental data, our collection and processing of even that subset of data is prone to a host of biases, errors and other constraints. Wikipedia lists over 100 of them (though there is some redundancy in its list). We’re biased in the information we allow to be around us (selective exposure), in what we mentally tune into (selective attention), and in seeing what we want to see (selective perception). We’re subject to various memory biases (selective encoding of information, selective retention of it, selective retrieval of it) and to various kinds of reasoning errors.

The temptation is to conclude that we are poorly designed information processing machines, but given these constraints, the surprise is how much we get right. We have developed optimization strategies called heuristics to compensate for our capacity and processing limitations and to maximize the truth value we can extract from a given set of environmental data. The errors and biases responsible for the differences between objective and subjective reality are the price we pay for those efficiencies.

Viewed this way, the adaptive value of many of our biases becomes evident. Prejudices and stereotypes are the consequence of probabilistic models by which we make inferences about a member of a class on the basis of past experience with other members of that class. Those inferences are sometimes wrong (as they can be when we impute attitudinal or behavioral characteristics on the basis of social class or race), but they are far more often right: the next car we see in front of us signaling a left turn very likely will turn left, our next drink of curdled milk will probably taste bad, etc. If it looks like a duck and quacks like a duck, more often than not, it’s a duck.
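The duck test can be read as a small Bayesian calculation. Here is a minimal sketch, with every probability invented for illustration, of why class-based inference usually pays off even when the class is rare:

    # All numbers below are assumptions chosen for illustration.
    p_duck = 0.01               # prior: ducks are a small share of what we encounter
    p_quacks_if_duck = 0.95     # ducks almost always quack
    p_quacks_if_not = 0.001     # non-ducks almost never do

    # Bayes' rule: P(duck | it quacks)
    numerator = p_quacks_if_duck * p_duck
    denominator = numerator + p_quacks_if_not * (1 - p_duck)
    print(numerator / denominator)   # ~0.91 – more often than not, it's a duck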

The causes and effects of environmental events are among the most valuable things for organisms to know, so that they can engineer (or place themselves in a position to benefit from) positive outcomes and prevent (or avoid) harmful ones. Today we know that “correlation does not imply causation” – two events may occur together purely by accident, not because they are related in any way. But for our ancestors, assuming that one event caused another simply because the two occurred together (the post hoc fallacy) had survival value. This time, the sound of rustling grass might be the wind instead of an approaching predator, but the cost of a false negative was greater than the cost of a false positive. Vestiges of that bias survive today as superstitions: astrology, the association of black cats with bad luck, baseball players’ batting rituals, etc.
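The asymmetry is easy to see with a toy expected-cost calculation (all payoffs below are assumptions, not measurements):

    # Hypothetical numbers for the rustling-grass decision.
    p_predator = 0.05       # the grass is usually just wind
    cost_flee = 1.0         # fleeing from nothing: wasted energy (false positive)
    cost_eaten = 1000.0     # ignoring a real predator (false negative)

    # Policy 1: always treat rustling as a predator.
    always_flee = cost_flee                  # pay a small cost every time
    # Policy 2: always assume it's the wind.
    never_flee = p_predator * cost_eaten     # rarely pay, but catastrophically

    print(always_flee, never_flee)   # 1.0 vs. 50.0 – the "jumpy" policy wins

As long as the cost of being eaten dwarfs the cost of an unnecessary sprint, the policy that is “wrong” 95% of the time is the adaptive one.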

Of all the cognitive biases that shape our beliefs – and through them, the character of subjective reality – the most influential is the primacy effect: the first piece of evidence for a belief is both better remembered and weighted more heavily in judgment. In the course of a normal day, we face hundreds of (mostly routine) choices that are based on theories – beliefs – about the world. If we broaden this to include decisions made by our low-level perceptual processes about how to organize a flood of sensory data into discrete objects, the number of choices balloons to tens of thousands or more. Time and processing capacity being finite, we need strategies to manage this data deluge. By strategies, I mean shortcuts and the errors that accompany them. One such strategy is to go with the first theory we form, or the first one presented to us.

First impressions are sometimes inaccurate, but they are more often right – if not in every detail, “right enough” to inform decision making. From an evolutionary perspective, going with our first theory or hunch makes sense. Our early ancestors couldn’t afford to question everything. Was that orange blur really a lion? Couldn’t this water still be OK to drink even though it tastes bad? Sacrificing some accuracy in order to make speedy judgments is generally adaptive – the African savannah would have made short work of a habitual skeptic. Once a belief establishes a foothold, a host of confirmation biases bolster it and make it difficult to dislodge, even if it’s wrong.
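One way to picture the primacy effect is as an evidence-weighting scheme in which earlier observations simply count for more. A minimal sketch (the decay factor and the evidence encoding are arbitrary assumptions for illustration):

    def judgment(evidence, decay=0.5):
        """Combine a sequence of evidence values (+1 supports the belief,
        -1 opposes it), weighting earlier items more heavily."""
        weights = [decay**i for i in range(len(evidence))]
        return sum(w * e for w, e in zip(weights, evidence)) / sum(weights)

    # The same four observations, presented in two different orders:
    print(judgment([+1, +1, -1, -1]))   #  0.6 – a favorable first impression sticks
    print(judgment([-1, -1, +1, +1]))   # -0.6 – same evidence, opposite belief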

The need to form action-guiding theories from limited data naturally disposes us toward what statisticians call Type I errors – forming or accepting beliefs about relationships in the external world that are not in fact true. This is the reason we lean in the opposite direction when we conduct research, drawing conclusions so conservatively that we deliberately risk failing to confirm relationships that do in fact exist (Type II errors). The bias of our research methodology counterbalances our natural human bias.
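The trade-off can be made concrete with a one-sided z-test sketch, assuming (for illustration) a true effect two standard errors wide: as we tighten the Type I threshold, the rate of missed real effects climbs.

    from statistics import NormalDist

    null = NormalDist(0, 1)   # test statistic if no relationship exists
    real = NormalDist(2, 1)   # test statistic if the relationship is real

    for alpha in (0.05, 0.01, 0.001):
        cutoff = null.inv_cdf(1 - alpha)   # evidence required to claim an effect
        type_ii = real.cdf(cutoff)         # real effects we will fail to detect
        print(f"alpha={alpha}: cutoff={cutoff:.2f}, Type II rate={type_ii:.2f}")
    # alpha=0.05 -> Type II ~0.36; alpha=0.001 -> Type II ~0.86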

How Right is Right Enough?

We carry around with us a large set of beliefs, some of which are objectively true and some of which are not. Because those beliefs are the premises for our interaction with the world, it is reasonable to wonder to what degree they must accurately correspond to the world in order for us to operate and thrive in our environment. The answer must fall somewhere between 100% accuracy and zero accuracy. If our goal is to find a destination in Paris, a 50-year-old map of the city might well be sufficient to get us there, but a map of London almost certainly will not.

We tend to take for granted that reality is moderately forgiving of our fallibility. We can misapprehend the objective world to a certain extent and still successfully find destinations, feed ourselves, and build things. Our error-prone, fuzzy logic is usually good enough to achieve our goals, but there has to be a limit to how wrong we can be and still survive. If we were all as impaired in understanding the world as are those with severe autism or advanced Alzheimer’s, our species would likely cease to exist.

The general degree to which our mental representations accurately mirror the world is probably no accident, but was tuned by natural selection. Intelligence obviously aids adaptiveness by providing a means for anticipating the outcomes of actions. But each additional increment of intelligence comes with a cost. Smarter animals – those capable of higher-level, more accurate abstractions about the world – require larger brains. Larger brains are bulky, harder to protect, consume a greater share of energy and nutrients, and extend the period during which infants are helpless while they learn to use them. The body’s resources are zero-sum, and each evolutionary change undergone by one of our systems has consequences for the others. At one point or another in our evolution, increasing our ability to outrun predators may have been more adaptive than adding three more points to our IQ.

The degree to which our beliefs accurately reflect the objective world, then, is likely an optimum which, in combination with other optima involved in our evolution, best contributed to our survival. When it comes to the accuracy of our representations of the world, natural selection determined how right was “right enough.”

As a consequence of our fallibility, we live in two overlapping realities. I call this theory Ontological Dualism (OD). I refer to objective reality, the corporeal world of matter and energy, as alpha reality. This is the only reality that exists for entities lacking the ability to form mental representations of (i.e., beliefs about) the world – e.g., rocks, clouds, (perhaps arguably) trees. Entities capable of having beliefs simultaneously occupy alpha reality and a second, subjective reality called beta reality. Beliefs (whether objectively true or not) are to beta reality what the laws of physics and other natural laws are to alpha reality – each imposes consequences on its inhabitants.

Beliefs vary in the degree to which they are “alpha true” – i.e., correspond to states that exist in objective reality. To the extent that this correspondence exists, beta reality is a conduit (if not an entirely passive one) for effects that originate in alpha reality. Beliefs that largely mirror objective facts are alphagenic, while those lacking such grounding are betagenic. Of course, virtually no beliefs are entirely alphagenic or betagenic, but rather some mix of the two. No representation of a state in objective reality can ever match the thing it represents 100%, particularly if the representation is a product of the filtration and distortion to which the human information processing system is prone. And at the base of even the most fantastic beliefs can usually be found an alpha seed around which the artificial part was constructed.

Beliefs can lead to powerful effects, but the beliefs themselves are only the proximate causes of those effects. The majority of beliefs represent states and relationships that actually do exist (to one degree or another) in the objective world, and to the extent they do, the ultimate causes of their effects lie in alpha reality. If I correctly conclude that the rumbling I feel is the beginning of an earthquake, and my belief in turn induces fear, the effect is alphagenic – its ultimate cause is a fact of alpha reality that my belief system is simply passing through. If it’s just a nearby truck, however, then my belief (and consequent fear) are mostly betagenic. But the beta effects – the fear reaction and the actions I take in response – are the same regardless of whether the belief is alpha true.

A species’ evolutionary adaptiveness would seem to be served by the maximization of the correspondence between its mental representations and the external world, and by the swift identification and disaffirmation of representations that lack such correspondence. For the purposes of this website we are most concerned with the sustainability of beliefs that do not correspond with objective reality. It is these beliefs that create the parts of beta reality that conflict with, or at least have no basis in, alpha reality. Why do some beliefs that are wildly wrong persist while others that are only slightly inaccurate get corrected quickly?



1. This is more an argumentative point about correspondence than one that is literally true. Models are products not only of raw data but also of the machines that construct them, and hence are subjective by nature. Even given unlimited access to environmental data and error-free processing, our ability to build veridical mental models would still be constrained by the human brain’s limited data modeling abilities, just as a budget desktop computer lacks the computational power to model complex weather systems.
