To revere the truth is to romanticize a tool


Why do we value the truth? We take the answer to this question for granted, but if asked, our first reaction might be that knowing what’s true is… well, useful. Being able to discriminate nutrients from poisons, good stocks from bad, and authentic folks from charlatans has obvious practical payoffs.

But characterizing the truth as a mere tool seems to cheapen it. We make a point of saying that we value the truth not just for what it can do for us, but for its own sake. The truth is more than just a means to an end; it’s an end in itself. In his book The Righteous Mind, social psychologist Jonathan Haidt describes how cultures and social groups unite around values they designate as sacred. In our culture, “the truth for its own sake” is such a sacred arch-value – witness the homage we pay to veritas in our universities’ mottos. We don’t just value the truth; we revere it.

I want to argue that the veneration of truth is misguided, and even harmful. I think we overrate the truth, and that sanctifying it takes our eyes off the outcome to which we should be dedicating our efforts: maximizing desirable effects.

Recognizing this requires understanding what truth is, and the function that truth-seeking evolved to serve. The contents of the world, and particularly its cause-and-effect relationships, are not of idle interest to living things. The #1 question facing an organism at any given moment, and the question upon which the survival and replication of its ancestors depended, is: what should I do? The better it can read the environment, the better it can choose actions that leverage the natural laws of cause and effect to its advantage.

The behaviorists were the first to systematically study how we learn to maximize rewards through our behavior, but the actual process by which we map environmental inputs to efficacious actions was to them a black box, and most were content to leave it that way. Not until the cognitive revolution did psychology start to fill in the box and reveal how we navigate the world: we build mental representations of it that we use to try out potential actions and estimate their likely effects.
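In modern computational terms, that picture of cognition amounts to model-based action selection: consult an internal map of cause and effect, simulate the candidate actions, and pick the one with the best predicted outcome. Here is a minimal sketch of the idea; the world model, actions, and payoff numbers are invented stand-ins, not anyone’s actual theory:

```python
# A minimal sketch of model-based action selection: an agent consults an
# internal "map" of cause and effect to simulate candidate actions and
# picks the one whose predicted effect carries the highest payoff.
# All entries below are hypothetical illustrations.

# The agent's mental model: believed effect of each candidate action.
world_model = {
    "eat red berries": "nourished",
    "eat pale berries": "poisoned",
    "ignore berries": "hungry",
}

# Affective indicators: how good or bad each predicted effect feels.
payoff = {"nourished": 10, "poisoned": -100, "hungry": -5}

def choose_action(model, payoffs):
    """Simulate each candidate action against the internal model and
    return the one whose predicted effect has the highest payoff."""
    return max(model, key=lambda action: payoffs[model[action]])

print(choose_action(world_model, payoff))  # -> eat red berries
```

Even in this toy version, the essential point is visible: the agent’s choice is only as good as its model’s correspondence with the world, and that correspondence is all that “truth,” in the sense defined next, amounts to.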

“Truth” in its most common meaning is simply the correspondence of our mental representations with the environmental phenomena they model. It’s an accurate map of the physical world. Evolution didn’t give us maps for the sake of having maps; it gave them to us to help us navigate the world, seeking out things that benefit us and avoiding things that harm us. Our information processing system is designed to answer the “what should I do?” question. Like the rest of our biological systems, it was shaped by natural selection for the purely utilitarian purpose of promoting the replication of our genes. Knowledge evolved for the purpose of informing behavior.

Viewed from the perspective of its biological origin, truth is thus not an end, but a means to a single, prosaic, end: the replication of our genes. Living organisms are by design ethical consequentialists: the expected effects of their actions are both necessary and sufficient reason for choosing them. To guide us, evolution designed in us indicators – primarily positive and negative emotions – of environmental patterns that are more or less likely to contribute to the health and safety upon which the replication of our genes depends. The feedback loops that guide our behavior train to these indicators: we seek the good and avoid the bad, all in the service of spreading our genes.

A funny thing happened on the way to our perfection as replicating machines, though. We developed higher consciousness and an awareness of self. The positive affective states that nature crafted in us as proximal objectives intended to serve the distal goal of genetic replication – from the carnal (sexual excitement, enjoying a tasty meal) to the contemplative (fulfillment, peace, happiness) – have become ends in themselves for us as individual organisms. To thrill, to savor, to feel joy, contentment and satisfaction – those are the goals to which most people dedicate their actions, for the experiences themselves.

What does this have to do with striving for truth? Sometimes mis-modeling the world – believing things that aren’t true – serves our individual-level goals better than veridical knowledge. Believing a better life awaits us in the hereafter makes it easier to bear the slings and arrows of this one. Attributing effects to false causes reduces uncertainty, the bane of a conscious organism’s existence. Distorting our personal attributes – overestimating our abilities and downplaying our faults – makes us feel better about ourselves and can give us the confidence to succeed at tasks that an accurate self-assessment would have kept us from attempting. (For a catalog of beneficial self-deceptions, see Shelley Taylor’s Positive Illusions.) The only truths we ignore at our peril are those that come back to bite us… and many don’t. There are plenty we can flout and still come out ahead on a cost/benefit analysis.

It’s probably an understatement to say that the position I’m taking here is likely to be unpopular. Knocking truth off its pedestal is bound to offend many of those sworn to honor and uphold it. It will strike them as something akin to heresy, but my point is that there’s nothing inherently holy about truth. Sanctity can only be conferred on it, and that act, like all human acts, is calculated to create an effect – in this case, to serve as a unifying value that gives a culture identity and purpose. It is ironic that many of the truth’s most fervent worshippers are members of the scientific and skeptic communities who treat it as a replacement for God, even though revering it is no less religious than worshipping spirits or deities.

Objections

What about the many times that people seek the truth with no visible payoff except the knowledge itself? Aren’t these examples of non-utilitarian value? No immediate payoff, perhaps. But just as a squirrel stashes nuts when they’re in abundance, evolution likely trained us to squirrel away facts of no current use to us on the chance that they’ll be useful in the future. In this way, the truth has inherent value (all knowledge of the physical world is potentially useful) but not intrinsic value (value in and of itself, unconnected to its practical utility). Evolution made knowledge-seeking pleasurable to encourage us to do it. That pleasure is for us an end in itself.

What about aesthetics: the elegance, beauty and wonder of truth? That’s an affective response – a pleasurable effect – and one that doesn’t even require veridical knowledge, as countless takers of LSD trips can attest.

But aren’t you essentially espousing hedonism? Hedonism’s goal is to maximize pleasure and minimize pain. If “pleasure” is defined broadly enough to cover the whole spectrum of gratifications, from the sensory and immediate to the contemplative and measured, and if it is recognized that the interdependencies among different kinds of pleasurable experience make optimizing them more practical than maximizing them, then yes, I’m making a case for hedonism.

In common use, hedonism refers more often to short-term physical pleasures, whose habitual overindulgence can preclude experiencing the other pleasures that typically mark a balanced, fulfilling life. The greatest danger of making pleasure our goal is in defining it too narrowly. Thus the positive psychology movement arose to study how we can achieve a broader happiness, and even that movement is sometimes criticized for neglecting eudaimonic aspects of well-being like meaning and purpose.

But what about other, larger entities that have a stake in our actions? It’s naïve to think that the decisions we make based on our understanding of what’s true don’t also serve higher-level purposes for which we as individuals are just instruments. Just as the invisible hand of our genes influences many of our choices to serve their ends, we’re also part of social and other large systems that pursue their own ends through our actions. These superordinate goals are not our concern as individual organisms, unless we choose to make them our concern, in which case they become just another personal goal we pursue for its gratifications.

Asterisks

We need to stop personifying the truth as if it were a Greek muse, and abandon the mantra “truth for truth’s sake.” The truth doesn’t have a sake. The physical world doesn’t care whether our mental representations accurately model it or not.[1] Strip away the utilitarian functions that knowing what’s true serves for living things, and there’s nothing left to value. In other words, knowledge is power – period.


[1] Or as the title of a Nada Surf album nicely puts it: The Stars Are Indifferent to Astronomy.


Think how differently you would have to go about filling your needs if you lived on a desert island rather than in a society of men and women.

“[The] thirst for objective knowledge is one of the most neglected aspects of the thought of people we call ‘primitive,’” observed anthropologist Claude Lévi-Strauss.  Living in a state of nature with your survival under constant threat is a powerful incentive to learn the cause-and-effect relationships of the physical world.  Primitive people have to be part meteorologist, part horticulturalist, and part engineer, as did our early ancestors.

Thanks to civilization, each of us personally no longer needs to acquire that kind of knowledge.  We still live in the physical world, and we still have the same basic needs for nourishment, shelter and health maintenance.  Somebody in our social system needs to have the technical knowledge to grow food, fabricate things and cure what ails us, but we don’t.

That doesn’t relieve us of the need to manipulate the environment to fill our needs, though; it just adds links to the causal chain that concludes with their fulfillment.  If we want to make a hole in the ground, we can pick up a shovel and dig one.  Or we can use a different kind of tool: another human.

You need to know how to use a tool in order to get it to do what you want.  Whoever actually digs the hole will need to have the physical ability and knowledge of basic mechanics to use the shovel, or some other implement capable of doing the job.  We’ll call that implement the first-order tool.

If you want to have a hole dug and are unable or unwilling to do it yourself, you could use a second-order tool, another person, to operate the first-order tool.  Just as the eventual shovel-wielder needs to know how to make that tool do its job, you need to know how to make the second-order tool do its job – pick up the shovel and dig the hole.

The significance of being able to use people as tools – of human agency – to bring about the physical effects that support and enhance our lives is hard to overstate.  It’s almost like traveling to a new universe that replaces our laws of physics with its own cause-and-effect rules.

The instructions for our second-order tool might look like this:


Congratulations on leasing the
Homo sapiens 3000 all-purpose tool!

Used properly, your HS-3000 can build you a comfortable house and keep your pantry perpetually stocked, transport you across continents in a few hours, or fill almost any material or emotional need you may have.

To operate your HS-3000, offer it some commodity or service it values – some physical effect for which you are its HS-3000.  Depending on the task, most units will also accept a tradable proxy for that value (money).

Note: Although all HS-3000s are equipped with the same NeuroCog™ operating system and leave the factory with the same default settings, their associative networks can develop significant differences in the field.  As a result, no two units respond to their controls in precisely the same way.  This is a feature, not a bug, and allows you to select the HS-3000 that gives you the best results.

Disclaimer: Each HS-3000 is an independent agent wholly responsible for ensuring that it is operated in an ethical manner within recommended parameters for approved purposes only.  Some units have more robust self-protection circuits than others.  The defeating of those circuits, and other forms of abuse of the HS-3000 for personal gain, are a matter between the user and his or her conscience.



We have to work with people’s heads to get them to perform actions that benefit us.  It’s impossible to physically force someone to pick up a shovel and dig a hole the way you would wield a hammer or lever.  In order to use humans as tools, we have to get their brains to send a signal to their muscles to move in such a way as to cause the physical effect we seek.

To do that, we need to understand cognition.  We build mental representations of the world that encode our assumptions about its cause-and-effect relationships, and allow us to “sandbox” actions we’re considering – try them out and see what’s likely to happen.

The informal word for mental representation is “belief.”  Whether you believe that Bigfoot is real, that cell phones cause cancer, or that Google is making us stupid, those propositions are slices of the mental models you build to represent the world and the rules by which it operates.  When someone asks (or orders) you to do something, you consult your mental models and calculate what’s likely to happen if you do it, and also if you don’t.

We steer cars by turning a wheel.  We steer people by appealing to and manipulating their beliefs.  To make our human tools work for us, we leverage the entries in their mental lookup tables about what causes lead to what effects.

We could direct their attention to the consequences of failing to do what we want them to do.  That can include undesirable effects that we personally promise to cause (I’ll punch you; I’ll sue you; I’ll withhold sex; I won’t recommend you for promotion).  Or we might invoke other supposed cause-and-effect relationships to get them to do our bidding (Congolese orphans will starve if you don’t; God will hold it against you).

For the purpose of making our second-order tool do the work we’ve assigned it, what matters is not what really will happen as a result of their choice (our threat to punish them could be a bluff; God may not exist, or may not care), but what they believe will happen – what causal rules are coded into their mental models.  If the rule we seek to leverage is already there, so much the better.  If not, we can try to program it in – to convince them that A leads to B.
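That logic is simple enough to write down.  Below is a toy sketch of it, with invented beliefs and stakes; the point it illustrates is that the tool’s decision runs on believed consequences rather than real ones, and that “programming” a belief is just inserting a new cause-and-effect entry.

```python
# A toy sketch of operating a "second-order tool" through its beliefs.
# The belief entries and utilities are invented for illustration.

# The tool's mental lookup table: (request, comply?) -> (believed effect, utility).
beliefs = {
    ("dig the hole", True):  ("get paid", 20),
    ("dig the hole", False): ("nothing happens", 0),
}

def will_comply(beliefs, request):
    """The unit consults its model: does complying beat refusing,
    according to what it believes will happen?"""
    _, utility_if_yes = beliefs[(request, True)]
    _, utility_if_no = beliefs[(request, False)]
    return utility_if_yes > utility_if_no

# If the causal rule we want to leverage isn't already there, we try to
# program it in: convince the unit that refusal leads to something it
# wants to avoid. Whether the threat is a bluff never enters the
# computation; only the believed consequence does.
beliefs[("dig the hole", False)] = ("get sued", -50)

print(will_comply(beliefs, "dig the hole"))  # -> True
```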

Most of the time, we operate our human tools using a carrot rather than a stick.  In return for manipulating the environment to cause an effect that fills a need of ours, we offer to do the same for an equivalent need of theirs, either directly or via that tradable proxy for value, money.  You scratch my back, I’ll scratch yours.

But the equivalence of that exchange depends on how accurately our information processing systems link causes with effects.  For example, we’re fairly confident that putting a liquid made from the fossilized bodies of ancient plants and animals into our cars makes them move, so we generously reward those with the knowledge and ability to extract it.  But we also spend a lot on additives that claim to improve gas mileage, despite repeated scientific tests that find virtually no benefit to them.

Then there’s the herbal and dietary supplement industry, to which we trade tens of billions of dollars’ worth of our work annually for health benefits that are either non-existent or so small that they’re overwhelmed by other effects.  The fact is, misattribution and the placebo effect account for much of our evaluation of the effectiveness of the goods for which we swap our labor and skills.  There’s nothing in Coke’s formula that makes us “open happiness” when we snap off the cap; “scrubbing bubbles” is just a marketer’s metaphor to boost sales of a bathroom cleaner whose actual performance is no better than its competitors’.  The word “organic” on product labels may account for the largest amount of work in history traded for perceived benefits above and beyond those objectively delivered.

It’s not just to make products that we use people as tools, but also to cause other effects we desire.  We rent CEO tools to turn struggling companies around and politician tools to create jobs and defuse world tensions.  Here, causality can get really complicated.  For one thing, we’re using them as n-order tools in a causal chain: they act on the people who report to them, who in turn manipulate their staffs, and so on, until, if all goes as planned, the effect we seek (solvency, peace) pops out at the end.

And this kind of tool often operates inside a large system in which a variety of forces (some controllable, some not) interact in complex ways to determine the outcome, such that it can be impossible to reliably calculate one person’s contribution.  When causation is complicated, it’s easier for plausible but false explanations to “pass.”

All this doesn’t mean that we can rip each other off at will.  The saying “The truth will out” is false as an absolute, but we can still jigger belief only within certain limits.  Our fondness for citing “The Emperor’s New Clothes,” though, shows that the wiggle zone can be pretty wide, and the fact remains that all we have to do to make our human tools work for us is get them to believe we can create an effect they desire, even if all we’re really providing is a placebo.

Sum all of these causal attributions across a population, embed them into their institutions and practices, and you’ve got a new kind of reality: social reality.  Status, power, the distribution of social rewards and sanctions – it can be argued that all are ultimately based on collective inferences about causality that are only loosely correlated with what really causes what.

That disconnect isn’t going away anytime soon, because it’s a consequence of the cognitive biases, errors and capacity limitations that are our evolutionary legacy.  What’s a second-order tool who doesn’t want to be taken advantage of – or a user of second-order tools who wants to get the most from them – to do?

How well an organism thrives depends on how well it masters its environment’s causal rules.  We live simultaneously in two different worlds, one physical, one social, each with its own rulebook.  Many, if not most, of us are better at knowing and manipulating one of those causal systems than the other.

So we go with our strengths.  If you’re good at knowing the objective (physical) world, you can get the best value for your work in occupations with low tolerances for error in working with its causal rules, such as astronaut, farmer, or computer programmer.  People with good social and persuasive skills can maximize the return on their labor in careers that depend more on – and allow more wiggle room in – the construction and manipulation of beliefs.  Examples include salesperson, coach and elected official.

We’ve come a long way from the African savanna.  Civilization didn’t overturn the laws of physics, but it made their direct effects less important to the quality of most of our lives than the effects that flow from each other’s interpretation of them.  Human agency is a game-changer.


Did Reason Evolve to Persuade?

June 24, 2011

A new theory says reason developed in order to help us win arguments, not to know what’s true. Could it be right?


Behaviorism Redux

December 31, 2010

Its proponents were right all along – behavior drives psychology.


How Much Does Being Right Matter?

July 15, 2010

A Review of Wrong: Why Experts Keep Failing Us – And How to Know When Not to Trust Them, by David H. Freedman, New York: Little, Brown and Company, 2010. 304 pp. $25.99. ISBN-13: 978-0316023788. The market for books about how ordinary people make thinking mistakes being fairly saturated (Predictably Irrational, Sway, Nudge), it makes […]
