“Mental representation” is another term for belief. Our beliefs are models of parts of the objective world that our brains build from information provided by our senses. But they are incomplete replicas of the things they model. Mental representations stand for or correspond to their counterparts in the objective world, but they typically don’t capture all the information that’s out there.
Capacity limitations prevent us from holding in our heads and processing every bit of information that objectively exists about whatever we’re modeling (if we could collect it all in the first place). The previous post examined how our brains solve this problem by throwing some information away, and compared it to the lossy compression used to squeeze digital music files down to a size that will fit comfortably on your iPod.
Lossy compression in itself needn’t compromise the objectivity of a representation. Take one kind of lossy model of the objective world, the road map, which intentionally omits detail below a certain information threshold (from smaller cities and towns down to pebbles and blades of grass). A road map can still be considered a faithful, if incomplete, representation of objective reality as long as the relative positions of the cities that are shown mirror their relative positions in the physical world.
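The road-map idea can be sketched as a trivial filter. This is a toy illustration, not a cartography algorithm, and every city name, coordinate, and population figure below is made up:

```python
# A toy "road map": drop detail below a population threshold, but
# preserve the true positions of everything that survives the cut.
# All names and numbers are invented for illustration.

cities = [
    ("Springfield",  (10, 40), 150_000),
    ("Shelbyville",  (25, 35), 60_000),
    ("Hamlet",       (12, 38), 900),        # below threshold: omitted
    ("Capital City", (50, 10), 1_200_000),
]

THRESHOLD = 50_000  # the map's "information threshold"

def compress(places, threshold):
    """Lossy compression: keep only places at or above the threshold."""
    return [(name, pos) for name, pos, pop in places if pop >= threshold]

road_map = compress(cities, THRESHOLD)
# The map is incomplete (Hamlet is gone) but unbiased: every retained
# city keeps its true coordinates, so the relative positions still
# mirror those in the objective world.
```

The point of the sketch is that the filter is consistent and rule-based; it omits, but it never moves a city or invents one.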
Our brains perform something similar to lossy compression when we place mental representations in long-term memory, maintain them over time, and retrieve them. We can’t remember every detail, but we strive to retain the important stuff. In order to preserve our representations’ fidelity to the objective world, however, each of these steps needs to be conducted in a consistent and unbiased manner, omitting only data below a specified information threshold, and not dropping, adding to, or changing any of the substantive stuff.
If we’ve learned anything from decades of research in human cognition, it’s that we routinely fail to live up to this standard. A variety of cognitive biases make our mental representations vulnerable to corruption and distortion at every storage stage. We may fail to record key parts of them to memory through selective encoding. Over time while stored, the memory traces of elements not regularly reinforced can decay, leading to selective retention. And recall cues sometimes cause us to remember some parts of a representation and not others through selective retrieval.
Our mental representations are prone to contamination in the process of being compressed, stored, and retrieved. But, remembering the phrase “garbage in, garbage out,” let’s back up: how sure are we of their accuracy as mirrors of objective reality before we store them in the first place?
Returning to the file compression example, suppose we have a CD recording of a live music performance which we have always considered a faithful representation of the event. What might compromise its accuracy and objectivity? Just a few things:
- Microphone placement. Sound waves bounce off surfaces, which selectively absorb some frequencies and reflect others, and they and their echoes arrive at different locations at different intensities. No two locations in a concert hall receive exactly the same sound.
- Equipment sensitivity. Microphones, magnetic recording tape, and other electronic components are able to capture some parts of the sound spectrum but not others.
- Mixing and mastering. The producer and recording engineer make many decisions about how to combine the individual tracks into what they subjectively judge to be the most esthetically pleasing final mix – take some shrillness off the vocals, add a little more thump to the drums, etc. Their touching up of a mostly realistic representation is the aural equivalent of airbrushing a photograph, and is a minor act of authorship.
- Analog to digital conversion. Sound waveforms have a continuous range of values. Digital technology can handle only discrete values (zeroes and ones). Per the figure below, converting the former to the latter is like forcing a round peg into a square hole.
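The round-peg problem can be sketched as a toy quantizer. The parameters below are chosen for clarity, not realism; a real converter uses vastly more samples and levels:

```python
import math

# A minimal sketch of analog-to-digital conversion: sample a continuous
# sine wave at discrete instants, then snap each sample to one of a
# handful of levels. The snapping step is where the continuous "round
# peg" gets forced into discrete steps.

SAMPLES = 8   # samples over one cycle (absurdly few, for clarity)
LEVELS = 5    # only five representable values between -1 and 1

def quantize(x, levels=LEVELS):
    """Snap a value in [-1, 1] to the nearest of `levels` evenly spaced steps."""
    step = 2 / (levels - 1)
    return round(x / step) * step

analog = [math.sin(2 * math.pi * t / SAMPLES) for t in range(SAMPLES)]
digital = [quantize(s) for s in analog]

# The digital version only approximates the original; the gap is
# quantization error, information lost before any compression happens.
error = max(abs(a - d) for a, d in zip(analog, digital))
```

With only five levels, each sample can land up to a fifth of the full range away from its true value, and no amount of later processing can recover what the rounding discarded.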
Even before you apply lossy compression to your CD of James Brown Live at the Apollo to shrink it enough to fit on your iPod, much of the original event has already been lost in translation. The same is true of our mental representations before we commit them to the further corrupting processes of memory. Like the microphone, our senses have a limited range, and they can’t collect every bit of data floating around out there. Worse, we miss many of the stimuli we could perceive, either because we don’t expose ourselves to them (selective attention) or because we don’t notice them when we’re not looking for them (selective perception). And our cognition screens, inflects, and otherwise adulterates mental representations in myriad other ways, each one reducing their fidelity to objective reality.
The fact that each of us is exposed to a different set of stimuli from an event in the objective world, from which we shape a mental representation using the filters and distorters of our own unique set of cognitive biases, makes our mental models of the world subjective by nature. Embedded in each one is the implicit qualifier: seen from this position, constructed in this manner within these constraints.
Before we completely write off our ability to know objective reality, though, we need to note that our information processing deficiencies are mitigated by two important factors, the first of which is the refinement of our representations through error correction and data accumulation. The construction of mental representations is a continuous and incremental process involving many “reality checks” to detect processing errors and test representations’ accuracy. Some of those checks are built into our perceptual and other low level cognitive systems in the form of correction subroutines that catch many errors early in the construction process, before they can propagate.
Our mental representations continue to be refined, and their correspondence to objective reality improved, when we pool our knowledge with others’. This is the purpose of the scientific method, which weeds out many of the errors and biases that can taint our individual representations by requiring detailed disclosure of the steps taken to construct them so that others can verify them. The scientific method is the formal and systematic expression of a process we routinely conduct more casually whenever we compare notes and look at the world through others’ eyes.
The cognitive errors and biases that taint our mental representations of the objective world are a significant source of what is conventionally called subjectivity. (I include in this category our use of interpretive schemas or frames to make sense of the world, which selectively filter and inflect environmental data in the act of model construction, though this is a topic worthy of a more detailed discussion than can be given here.) But pooling information also helps us to overcome positional subjectivity, which is not so much about error correction as it is filling in missing data about the objective world. An example is the triangulation process by which an object’s geographical position can be accurately estimated with information from several observation points.
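The triangulation idea can be sketched in a few lines of Python. The observer positions and angles below are invented for illustration, and the ray-intersection math is a standard geometric calculation, not a surveying method from the text:

```python
import math

# Toy triangulation: two observers at known positions each measure only
# the bearing (angle) to an object; intersecting the two sight lines
# recovers the object's position, which neither observer could fix alone.

def triangulate(p1, angle1, p2, angle2):
    """Intersect two rays, each given as (origin point, angle in radians)."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(angle1), math.sin(angle1))  # direction of sight line 1
    d2 = (math.cos(angle2), math.sin(angle2))  # direction of sight line 2
    # Solve p1 + t*d1 = p2 + s*d2 for t (Cramer's rule on the 2x2 system).
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# Observer A at (0, 0) and observer B at (8, 0) both sight an object
# that, unknown to either of them, sits at (4, 3):
target = triangulate((0, 0), math.atan2(3, 4),
                     (8, 0), math.atan2(3, -4))
```

Each bearing on its own fixes only a direction, not a distance; pooling the two views fills in the missing data, which is exactly the point about positional subjectivity.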
By constantly refining our models of the world and filling in missing data, both within our individual information processing systems and collaboratively by pooling our information with others’, we move in the direction of “de-subjectifying” them and making them represent objective reality more faithfully.
The key word in the preceding sentence is “more.” The factor that most mitigates our flawed information processes is their considerable fault tolerance. Our models of the objective world virtually never mirror it perfectly. But too much is made of this, because they don’t have to. The fact that we have become the planet’s dominant species and have walked on the moon is proof that our fuzzy accuracy is good enough to achieve substantial success navigating objective reality.