📚 Book Notes: Stumbling on Happiness
Daniel Gilbert does a great job of explaining why we are so bad at predicting what will make us happy.
Here are my notes from Stumbling on Happiness:
- Adults love to ask children idiotic questions so that we can chuckle when they give us idiotic answers. One particularly idiotic question we like to ask children is this: “What do you want to be when you grow up?” Small children look appropriately puzzled, worried perhaps that our question implies they are at some risk of growing down. If they answer at all, they generally come up with things like “the candy guy” or “a tree climber.” We chuckle because the odds that the child will ever become the candy guy or a tree climber are vanishingly small, and they are vanishingly small because these are not the sorts of things that most children will want to be once they are old enough to ask idiotic questions themselves. But notice that while these are the wrong answers to our question, they are the right answers to another question, namely, “What do you want to be now?” Small children cannot say what they want to be later because they don’t really understand what later means. So, like shrewd politicians, they ignore the question they are asked and answer the question they can. Adults do much better, of course. When a thirtyish Manhattanite is asked where she thinks she might retire, she mentions Miami, Phoenix, or some other hotbed of social rest. She may love her gritty urban existence right now, but she can imagine that in a few decades she will value bingo and prompt medical attention more than art museums and squeegee men. Unlike the child who can only think about how things are, the adult is able to think about how things will be. At some point between our high chairs and our rocking chairs, we learn about later.
Later! What an astonishing idea. What a powerful concept. What a fabulous discovery. How did human beings ever learn to preview in their imaginations chains of events that had not yet come to pass? What prehistoric genius first realized that he could escape today by closing his eyes and silently transporting himself into tomorrow? Unfortunately, even big ideas leave no fossils for carbon dating, and thus the natural history of later is lost to us forever. But paleontologists and neuroanatomists assure us that this pivotal moment in the drama of human evolution happened sometime within the last 3 million years, and that it happened quite suddenly. The first brains appeared on earth about 500 million years ago, spent a leisurely 430 million years or so evolving into the brains of the earliest primates, and another 70 million years or so evolving into the brains of the first protohumans. Then something happened — no one knows quite what, but speculation runs from the weather turning chilly to the invention of cooking — and the soon-to-be-human brain experienced an unprecedented growth spurt that more than doubled its mass in a little over two million years, transforming it from the one-and-a-quarter-pound brain of Homo habilis to the nearly three-pound brain of Homo sapiens.
Now, if you were put on a hot-fudge diet and managed to double your mass in a very short time, we would not expect all of your various body parts to share equally in the gain. Your belly and buttocks would probably be the major recipients of newly acquired flab, while your tongue and toes would remain relatively svelte and unaffected. Similarly, the dramatic increase in the size of the human brain did not democratically double the mass of every part so that modern people ended up with new brains that were structurally identical to the old ones, only bigger. Rather, a disproportionate share of the growth centered on a particular part of the brain known as the frontal lobe, which, as its name implies, sits at the front of the head, squarely above the eyes (see figure 2). The low, sloping brows of our earliest ancestors were pushed forward to become the sharp, vertical brows that keep our hats on, and the change in the structure of our heads occurred primarily to accommodate this sudden change in the size of our brains. What did this new bit of cerebral apparatus do to justify an architectural overhaul of the human skull? What is it about this particular part that made nature so anxious for each of us to have a big one? Just what good is a frontal lobe? - Thinking about the future can be pleasurable. We daydream about slamming the game-winning homer at the company picnic, posing with the lottery commissioner and the door-sized check, or making snappy patter with the attractive teller at the bank — not because we expect or even want these things to happen, but because merely imagining these possibilities is itself a source of joy. Studies confirm what you probably suspect: When people daydream about the future, they tend to imagine themselves achieving and succeeding rather than fumbling or failing.
Indeed, thinking about the future can be so pleasurable that sometimes we’d rather think about it than get there. In one study, volunteers were told that they had won a free dinner at a fabulous French restaurant and were then asked when they would like to eat it. Now? Tonight? Tomorrow? Although the delights of the meal were obvious and tempting, most of the volunteers chose to put their restaurant visit off a bit, generally until the following week. Why the self-imposed delay? Because by waiting a week, these people not only got to spend several hours slurping oysters and sipping Château Cheval Blanc ’47, but they also got to look forward to all that slurping and sipping for a full seven days beforehand. Forestalling pleasure is an inventive technique for getting double the juice from half the fruit. Indeed, some events are more pleasurable to imagine than to experience (most of us can recall an instance in which we made love with a desirable partner or ate a wickedly rich dessert, only to find that the act was better contemplated than consummated), and in these cases people may decide to delay the event forever. For instance, volunteers in one study were asked to imagine themselves requesting a date with a person on whom they had a major crush, and those who had had the most elaborate and delicious fantasies about approaching their heartthrob were least likely to do so over the next few months. - Our desire to control is so powerful, and the feeling of being in control so rewarding, that people often act as though they can control the uncontrollable. For instance, people bet more money on games of chance when their opponents seem incompetent than competent — as though they believed they could control the random drawing of cards from a deck and thus take advantage of a weak opponent. People feel more certain that they will win a lottery if they can control the number on their ticket, and they feel more confident that they will win a dice toss if they can throw the dice themselves. People will wager more money on dice that have not yet been tossed than on dice that have already been tossed but whose outcome is not yet known, and they will bet more if they, rather than someone else, are allowed to decide which number will count as a win. In each of these instances, people behave in a way that would be utterly absurd if they believed that they had no control over an uncontrollable event. But if somewhere deep down inside they believed that they could exert control — even one smidgen of an iota of control — then their behavior would be perfectly reasonable. And deep down inside, that’s precisely what most of us seem to believe. Why isn’t it fun to watch a videotape of last night’s football game even when we don’t know who won? Because the fact that the game has already been played precludes the possibility that our cheering will somehow penetrate the television, travel through the cable system, find its way to the stadium, and influence the trajectory of the ball as it hurtles toward the goalposts! Perhaps the strangest thing about this illusion of control is not that it happens but that it seems to confer many of the psychological benefits of genuine control. In fact, the one group of people who seem generally immune to this illusion are the clinically depressed, who tend to estimate accurately the degree to which they can control events in most situations. 
These and other findings have led some researchers to conclude that the feeling of control — whether real or illusory — is one of the wellsprings of mental health. So if the question is “Why should we want to control our futures?” then the surprisingly right answer is that it feels good to do so — period. Impact is rewarding. Mattering makes us happy. The act of steering one’s boat down the river of time is a source of pleasure, regardless of one’s port of call.
- Consider a study in which volunteers were shown some quiz-show questions and asked to estimate the likelihood that they could answer them correctly. Some volunteers were shown only the questions (the question-only group), while others were shown both the questions and the answers (the question-and-answer group). Volunteers in the question-only group thought the questions were quite difficult, while those in the question-and-answer group — who saw both the questions (“What did Philo T. Farnsworth invent?”) and the answers (“The television set”) — believed that they could have answered the questions easily had they never seen the answers at all. Apparently, once volunteers knew the answers, the questions seemed simple (“Of course it was the television — everyone knows that!”), and the volunteers were no longer able to judge how difficult the questions would seem to someone who did not share their knowledge of the answers.
Studies such as these demonstrate that once we have an experience, we cannot simply set it aside and see the world as we would have seen it had the experience never happened. To the judge’s dismay, the jury cannot disregard the prosecutor’s snide remarks. Our experiences instantly become part of the lens through which we view our entire past, present, and future, and like any lens, they shape and distort what we see. This lens is not like a pair of spectacles that we can set on the nightstand when we find it convenient to do so but like a pair of contacts that are forever affixed to our eyeballs with superglue. Once we learn to read, we can never again see letters as mere inky squiggles. Once we learn about free jazz, we can never again hear Ornette Coleman’s saxophone as a source of noise. Once we learn that van Gogh was a mental patient, or that Ezra Pound was an anti-Semite, we can never again view their art in the same way. If Lori and Reba were separated for a few weeks, and if they told us that they were happier now than they used to be, they might be right. But they might not. They might just be telling us that the singletons they had become now viewed being conjoined with as much distress as those of us who have always been singletons do. Even if they could remember what they thought, said, and did as conjoined twins, we would expect their more recent experience as singletons to color their evaluation of the conjoined experience, leaving them unable to say with certainty how conjoined twins who had never been singletons actually feel. In a sense, the experience of separation would make them us, and thus they would be in the same difficult position that we are in when we try to imagine the experience of being conjoined. Becoming singletons would affect their views of the past in ways that they could not simply set aside. All of this means that when people have new experiences that lead them to claim that their language was squished — that they were not really happy even though they said so and thought so at the time — they can be mistaken. In other words, people can be wrong in the present when they say they were wrong in the past. - Experience stretching is a bizarre phrase but not a bizarre idea. We often say of others who claim to be happy despite circumstances that we believe should preclude it that “they only think they’re happy because they don’t know what they’re missing.” Okay, sure, but that’s the point. Not knowing what we’re missing can mean that we are truly happy under circumstances that would not allow us to be happy once we have experienced the missing thing. It does not mean that those who don’t know what they’re missing are less happy than those who have it. Examples abound in my life and yours, so let’s talk about mine. I occasionally smoke a cigar because it makes me happy, and my wife occasionally fails to understand why I must have a cigar to be happy when she can apparently be just as happy without one (and even happier without me having one). But the experience-stretching hypothesis suggests that I too could have been happy without cigars if only I had not experienced their pharmacological mysteries in my wayward youth. 
But I did, and because I did I now know what I am missing when I don’t, hence that glorious moment during my spring vacation when I am reclining in a lawn chair on the golden sands of Kauai, sipping Talisker and watching the sun slip slowly into a taffeta sea, is just not quite perfect if I don’t also have something stinky and Cuban in my mouth. I could press both my luck and my marriage by advancing the language-squishing hypothesis, carefully explaining to my wife that because she has never experienced the pungent earthiness of a Montecristo no. 4, she has an impoverished experiential background and therefore does not know what happiness really is. I would lose, of course, because I always do, but in this case I would deserve it. Doesn’t it make better sense to say that by learning to enjoy cigars I changed my experiential background and inadvertently ruined all future experiences that do not include them? The Hawaiian sunset was an eight until the Hawaiian sunset à la stogie took its place and reduced the cigarless sunset to a mere seven.
But we’ve talked enough about me and my vacation. Let’s talk about me and my guitar. I’ve played the guitar for years, and I get very little pleasure from executing an endless repetition of three-chord blues. But when I first learned to play as a teenager, I would sit upstairs in my bedroom happily strumming those three chords until my parents banged on the ceiling and invoked their rights under the Geneva Convention. I suppose we could try the language-squishing hypothesis here and say that my eyes have been opened by my improved musical abilities and that I now realize I was not really happy in those teenage days. But doesn’t it seem more reasonable to invoke the experience-stretching hypothesis and say that an experience that once brought me pleasure no longer does? A man who is given a drink of water after being lost in the Mojave Desert for a week may at that moment rate his happiness as eight. A year later, the same drink might induce him to feel no better than two. Are we to believe that he was wrong about how happy he was when he took that life-giving sip from a rusty canteen, or is it more reasonable to say that a sip of water can be a source of ecstasy or a source of moisture depending on one’s experiential background? If impoverished experiential backgrounds squish our language rather than stretch our experience, then children who say they are delighted by peanut butter and jelly are just plain wrong, and they will admit it later in life when they get their first bite of goose liver, at which time they will be right, until they get older and begin to get heartburn from fatty foods, at which time they will realize that they were wrong then too. Every day would be a repudiation of the day before, as we experienced greater and greater happiness and realized how thoroughly deluded we were until, conveniently enough, now. - The human brain creates a similar illusion. If you’ve ever tried to store a full season of your favorite television show on your computer’s hard drive, then you already know that faithful representations of things in the world require gobs of space. And yet, our brains take millions of snapshots, record millions of sounds, add smells, tastes, textures, a third spatial dimension, a temporal sequence, a continuous running commentary — and they do this all day, every day, year after year, storing these representations of the world in a memory bank that seems never to overflow and yet allows us to recall at a moment’s notice that awful day in the sixth grade when we teased Phil Meyers about his braces and he promised to beat us up after school. How do we cram the vast universe of our experience into the relatively small storage compartment between our ears? We do what Harpo did: We cheat. As you learned in the previous chapters, the elaborate tapestry of our experience is not stored in memory — at least not in its entirety. Rather, it is compressed for storage by first being reduced to a few critical threads, such as a summary phrase (“Dinner was disappointing”) or a small set of key features (tough steak, corked wine, snotty waiter). Later, when we want to remember our experience, our brains quickly reweave the tapestry by fabricating — not by actually retrieving — the bulk of the information that we experience as a memory. This fabrication happens so quickly and effortlessly that we have the illusion (as a good magician’s audience always does) that the entire thing was in our heads the entire time.
But it wasn’t, and that fact can be easily demonstrated. For example, volunteers in one study were shown a series of slides depicting a red car as it cruises toward a yield sign, turns right, and then knocks over a pedestrian. After seeing the slides, some of the volunteers (the no-question group) were not asked any questions, and the remaining volunteers (the question group) were. The question these volunteers were asked was this: “Did another car pass the red car while it was stopped at the stop sign?” Next, all the volunteers were shown two pictures — one in which the red car was approaching a yield sign and one in which the red car was approaching a stop sign — and were asked to point to the picture they had actually seen. Now, if the volunteers had stored their experience in memory, then they should have pointed to the picture of the car approaching the yield sign, and indeed, more than 90 percent of the volunteers in the no-question group did just that. But 80 percent of the volunteers in the question group pointed to the picture of the car approaching a stop sign. Clearly, the question changed the volunteers’ memories of their earlier experience, which is precisely what one would expect if their brains were reweaving their experiences — and precisely what one would not expect if their brains were retrieving their experiences.
This general finding — that information acquired after an event alters memory of the event — has been replicated so many times in so many different laboratory and field settings that it has left most scientists convinced of two things. First, the act of remembering involves “filling in” details that were not actually stored; and second, we generally cannot tell when we are doing this because filling in happens quickly and unconsciously. Indeed, this phenomenon is so powerful that it happens even when we know someone is trying to trick us. - In other words, brains believe, but they don’t make believe. When people see giant floating heads, it is because giant heads are actually floating in their purview, and the only question for a psychologically minded philosopher was how brains accomplish this amazing act of faithful reflection. But in 1781 a reclusive German professor named Immanuel Kant broke loose, knocked over the screen in the corner of the room, and exposed the brain as a humbug of the highest order. Kant’s new theory of idealism claimed that our perceptions are not the result of a physiological process by which our eyes somehow transmit an image of the world into our brains, but rather, they are the result of a psychological process that combines what our eyes see with what we already think, feel, know, want, and believe, and then uses this combination of sensory information and preexisting knowledge to construct our perception of reality. “The understanding can intuit nothing, the senses can think nothing,” Kant wrote. “Only through their union can knowledge arise.” The historian Will Durant performed the remarkable feat of summarizing Kant’s point in a single sentence: “The world as we know it is a construction, a finished product, almost — one might say — a manufactured article, to which the mind contributes as much by its moulding forms as the thing contributes by its stimuli.” Kant argued that a person’s perception of a floating head is constructed from the person’s knowledge of floating heads, memory of floating heads, belief in floating heads, need for floating heads, and sometimes — but not always — from the actual presence of a floating head itself. Perceptions are portraits, not photographs, and their form reveals the artist’s hand every bit as much as it reflects the things portrayed.
This theory was a revelation, and in the centuries that followed, psychologists extended it by suggesting that each individual makes roughly the same journey of discovery that philosophy did. In the 1920s, the psychologist Jean Piaget noticed that the young child often fails to distinguish between her perception of an object and the object’s actual properties, hence she tends to believe that things really are as they appear to be — and that others must therefore see them as she does. When a two-year-old child sees her playmate leave the room, and then sees an adult remove a cookie from a cookie jar and hide it in a drawer, she expects that her playmate will later look for the cookie in the drawer — despite the fact that her playmate was not in the room when the adult moved the cookie to the drawer from the jar. Why? Because the two-year-old child knows the cookie is in the drawer and thus expects that everyone else knows this as well. Without a distinction between things in the world and things in the mind, the child cannot understand how different minds can contain different things. Of course, with increasing maturity, children shift from realism to idealism, coming to realize that perceptions are merely points of view, that what they see is not necessarily what there is, and that two people may thus have different perceptions of or beliefs about the same thing. Piaget concluded that “the child is a realist in its thought” and that “its progress consists in ridding itself of this initial realism.” In other words, like philosophers, ordinary people start out as realists but get over it soon enough. - If you live in a city with tall buildings, then you already know that pigeons have an uncanny ability to defecate at precisely the moment, speed, and position required to score a direct hit on your most expensive sweater. Given their talent as bombardiers, it seems odd that pigeons can’t learn to do much simpler things. For example, if a pigeon is put in a cage with two levers that can be briefly illuminated, it can easily learn to press the illuminated lever to get a reward of bird seed — but it can never learn to press the unilluminated lever to receive the same reward. Pigeons have no trouble figuring out that the presence of a light signals an opportunity for eating, but they cannot learn the same thing about the absence of a light. Research suggests that human beings are a bit like pigeons in this regard. For example, volunteers in one study played a deduction game in which they were shown a set of trigrams (i.e., three-letter combinations such as SXY, GTR, BCG, and EVX). The experimenter then pointed to one of the trigrams and told the volunteers that this trigram was special. The volunteers’ job was to figure out what made the trigram special — that is, to figure out which feature of the special trigram distinguished it from the others. Volunteers saw set after set, and each time the experimenter pointed out the special one. How many sets did volunteers have to see before they deduced the distinctive feature of the special trigram? For half the volunteers, the special trigram was distinguished by the fact that it and only it contained the letter T, and these volunteers needed to see about thirty-four sets of trigrams before they figured out that the presence of T is what made a trigram special. For the other half of the volunteers, the special trigram was always distinguished by the fact that it and only it lacked the letter T. The results were astounding. 
No matter how many sets of trigrams they saw, none of the volunteers ever figured this out. It was easy to notice the presence of a letter but, like the barking of a dog, it was impossible to notice its absence.
- Just as objects that are near to us in space appear to be more detailed than those that are far away, so do events that are near to us in time. Whereas the near future is finely detailed, the far future is blurry and smooth. For example, when young couples are asked to say what they think of when they envision “getting married,” those couples who are a month away from the event (either because they are getting married a month later or because they got married a month earlier) envision marriage in a fairly abstract and blurry way, and they offer high-level descriptions such as “making a serious commitment” or “making a mistake.” But couples who are getting married the next day envision marriage’s concrete details, offering descriptions such as “having pictures made” or “wearing a special outfit.” Similarly, when volunteers are asked to imagine themselves locking a door the next day, they describe their mental images with detailed phrases such as “putting a key in the lock,” but when volunteers are asked to imagine themselves locking a door next year, they describe their mental images with vague phrases such as “securing the house.” When we think of events in the distant past or distant future we tend to think abstractly about why they happened or will happen, but when we think of events in the near past or near future we tend to think concretely about how they happened or will happen.
Seeing in time is like seeing in space. But there is one important difference between spatial and temporal horizons. When we perceive a distant buffalo, our brains are aware of the fact that the buffalo looks smooth, vague, and lacking in detail because it is far away, and they do not mistakenly conclude that the buffalo itself is smooth and vague. But when we remember or imagine a temporally distant event, our brains seem to overlook the fact that details vanish with temporal distance, and they conclude instead that the distant events actually are as smooth and vague as we are imagining and remembering them. For example, have you ever wondered why you often make commitments that you deeply regret when the moment to fulfill them arrives? We all do this, of course. We agree to babysit the nephews and nieces next month, and we look forward to that obligation even as we jot it in our diary. Then, when it actually comes time to buy the Happy Meals, set up the Barbie playset, hide the bong, and ignore the fact that the NBA playoffs are on at one o’clock, we wonder what we were thinking when we said yes. Well, here’s what we were thinking: When we said yes we were thinking about babysitting in terms of why instead of how, in terms of causes and consequences instead of execution, and we failed to consider the fact that the detail-free babysitting we were imagining would not be the detail-laden babysitting we would ultimately experience. Babysitting next month is “an act of love,” whereas babysitting right now is “an act of lunch,” and expressing affection is spiritually rewarding in a way that buying French fries simply isn’t. - The fact that we imagine the near and far futures with such different textures causes us to value them differently as well. Most of us would pay more to see a Broadway show tonight or to eat an apple pie this afternoon than we would if the same ticket and the same pie were to be delivered to us next month. There is nothing irrational about this. Delays are painful, and it makes sense to demand a discount if one must endure them. But studies show that when people imagine the pain of waiting, they imagine that it will be worse if it happens in the near future than in the far future, and this leads to some rather odd behavior. For example, most people would rather receive $20 in a year than $19 in 364 days because a one-day delay that takes place in the far future looks (from here) to be a minor inconvenience. On the other hand, most people would rather receive $19 today than $20 tomorrow because a one-day delay that takes place in the near future looks (from here) to be an unbearable torment. Whatever amount of pain a one-day wait entails, that pain is surely the same whenever it is experienced; and yet, people imagine a near-future pain as so severe that they will gladly pay a dollar to avoid it, but a far-future pain as so mild that they will gladly accept a dollar to endure it.
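(A quick illustration, not from the book: behavioral economists model exactly this reversal with hyperbolic discounting, where a one-day wait hurts less the further away it already sits. The discount rate below is a made-up value, chosen only to show the pattern.)

```python
# Minimal sketch of hyperbolic discounting: subjective value = amount / (1 + k * delay).
# k is a hypothetical discount rate picked purely to illustrate the preference reversal.

def discounted_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Subjective value of a reward received after `delay_days`."""
    return amount / (1 + k * delay_days)

# Near future: the one-day wait looms large, so $19 today beats $20 tomorrow.
print(discounted_value(19, 0), discounted_value(20, 1))      # 19.0 vs ~18.2

# Far future: the same one-day wait barely registers, so $20 in 365 days beats $19 in 364.
print(discounted_value(19, 364), discounted_value(20, 365))  # ~0.51 vs ~0.53
```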
Why does this happen? The vivid detail of the near future makes it much more palpable than the far future, thus we feel more anxious and excited when we imagine events that will take place soon than when we imagine events that will take place later. Indeed, studies show that the parts of the brain that are primarily responsible for generating feelings of pleasurable excitement become active when people imagine receiving a reward such as money in the near future but not when they imagine receiving the same reward in the far future. If you’ve ever bought too many boxes of Thin Mints from the Girl Scout who hawks her wares in front of the local library but too few boxes from the Girl Scout who rings your doorbell and takes your order for future delivery, then you’ve experienced this anomaly yourself. When we spy the future through our prospectiscopes, the clarity of the next hour and the fuzziness of the next year can lead us to make a variety of mistakes. - When college students hear persuasive speeches that demonstrably change their political opinions, they tend to remember that they always felt as they currently feel. When dating couples try to recall what they thought about their romantic partners two months earlier, they tend to remember that they felt then as they feel now. When students receive their grades on an exam, they tend to remember being as concerned about the exam before they took it as they currently are. When patients are asked about their headaches, the amount of pain they are feeling at the moment determines how much pain they remember feeling the previous day. When middle-aged people are asked to remember what they thought about premarital sex, how they felt about political issues, or how much alcohol they drank when they were in college, their memories are influenced by how they think, feel, and drink now. When widows and widowers are asked how much grief they felt when their spouse died five years earlier, their memories are influenced by the amount of grief they currently feel.
- If the past is a wall with some holes, the future is a hole with no walls. Memory uses the filling-in trick, but imagination is the filling-in trick, and if the present lightly colors our remembered pasts, it thoroughly infuses our imagined futures. More simply said, most of us have a tough time imagining a tomorrow that is terribly different from today, and we find it particularly difficult to imagine that we will ever think, want, or feel differently than we do now. Teenagers get tattoos because they are confident that DEATH ROCKS will always be an appealing motto, new mothers abandon promising law careers because they are confident that being home with their children will always be a rewarding job, and smokers who have just finished a cigarette are confident for at least five minutes that they can easily quit and that their resolve will not diminish with the nicotine in their bloodstreams. Psychologists have nothing on teenagers, smokers, and moms. I can recall a Thanksgiving (well, actually, most Thanksgivings) when I ate so much that I realized only as I swallowed my last bite of pumpkin pie that my breathing had become shallow and labored because my lungs no longer had room to expand. I staggered to the living room, fell flat on the couch, and, as I descended mercifully into a tryptophan coma, was heard to utter these words: “I’ll never eat again.” But, of course, I did eat again — possibly that night, surely within twenty-four hours, and probably turkey. I suppose I knew that my vow was absurd even as I made it, and yet, some part of me seemed sincerely to believe that chewing and swallowing were nasty habits that I could easily renounce, if only because the torpid mass that was winding its way through my digestive tract at the approximate speed of continental drift would supply all my nutritional, intellectual, and spiritual needs forevermore.
- Alas, metaphors can mislead as well as illuminate, and our tendency to imagine time as a spatial dimension does both of these things. For example, imagine that you and a friend have managed to get a table at a chic new restaurant with a three-month waiting list, and that after browsing the menus you have discovered that you both want the wasabi-encrusted partridge. Now, each of you has sufficient social grace to recognize that placing identical orders at a fine restaurant is roughly equivalent to wearing matching mouse ears in the main dining room, so you decide instead that one of you will order the partridge, the other will order the venison gumbo, and that you will then share them oh so fashionably. You do this not only to avoid being mistaken for tourists but also because you believe that variety is the spice of life. There are very few homilies involving spices, and this one is as good as they get. Indeed, if we were to measure your pleasure after the meal, we would probably find that you and your friend are happier with the sharing arrangement than either of you would have been had you each had a full order of partridge to yourselves.
But something strange happens when we extend this problem in time. Imagine that the maître d’ is so impressed by your sophisticated ensemble that he invites you (but alas, not your friend, who really could use a new look) to return on the first Monday of every month for the next year to enjoy a free meal at his best table. Because the kitchen occasionally runs short of ingredients, he asks you to decide right now what you would like to eat on each of your return visits so that he can be fully prepared to pamper you in the style to which you are quickly becoming accustomed. You flip back through the menu. You hate rabbit, veal is politically incorrect, you are appropriately apathetic about vegetable lasagna, and as you scan the list you decide that there are just four dishes that strike your rapidly swelling fancy: the partridge, the venison gumbo, the blackened mahimahi, and the saffron seafood risotto. The partridge is clearly your favorite, and even without a pear tree you are tempted to order twelve of them. But that would be so gauche, so déclassé, and what’s more, you would miss the spice of life. So you ask the maître d’ to prepare the partridge every other month, and to fill in the remaining six meals with equal episodes of gumbo, mahimahi, and risotto.
You may be one snappy dresser, mon ami, but when it comes to food, you have just cooked your own goose. Researchers studied this experience by inviting volunteers to come to the laboratory for a snack once a week for several weeks. They asked some of the volunteers (choosers) to choose all their snacks in advance, and — just as you did — the choosers usually opted for a healthy dose of variety. Next, the researchers asked a new group of volunteers to come to the lab once a week for several weeks. They fed some of these volunteers their favorite snack every time (no-variety group), and they fed other volunteers their favorite snack on most occasions and their second-favorite snack on others (variety group). When they measured the volunteers’ satisfaction over the course of the study, they found that volunteers in the no-variety group were more satisfied than were volunteers in the variety group. In other words, variety made people less happy, not more. Now wait a second — there’s something fishy here, and it isn’t the mahimahi. How can variety be the spice of life when one sits down with a friend at a fancy restaurant but the bane of one’s existence when one orders snacks to be consumed in successive weeks?
Among life’s cruelest truths is this one: Wonderful things are especially wonderful the first time they happen, but their wonderfulness wanes with repetition. Just compare the first and last time your child said “Mama” or your partner said “I love you” and you’ll know exactly what I mean. When we have an experience — hearing a particular sonata, making love with a particular person, watching the sun set from a particular window of a particular room — on successive occasions, we quickly begin to adapt to it, and the experience yields less pleasure each time. Psychologists call this habituation, economists call it declining marginal utility, and the rest of us call it marriage. But human beings have discovered two devices that allow them to combat this tendency: variety and time. One way to beat habituation is to increase the variety of one’s experiences (“Hey, honey, I have a kinky idea — let’s watch the sun set from the kitchen this time”). Another way to beat habituation is to increase the amount of time that separates repetitions of the experience. Clinking champagne glasses and kissing one’s spouse at the stroke of midnight would be a relatively dull exercise were it to happen every evening, but if one does it on New Year’s Eve and then allows a full year to pass before doing it again, the experience will offer an endless bouquet of delights because a year is plenty long enough for the effects of habituation to disappear. The point here is that time and variety are two ways to avoid habituation, and if you have one, then you don’t need the other. In fact (and this is the really critical point, so please put down your fork and listen), when episodes are sufficiently separated in time, variety is not only unnecessary — it can actually be costly. - When people predict future feelings by imagining a future event as though it were happening in the present and then correcting for the event’s actual location in time, they make the same error. For example, volunteers in one study were asked to predict how much they would enjoy eating a bite of spaghetti and meat sauce the next morning or the next afternoon. Some of the volunteers were hungry when they made this prediction, and some were not. When volunteers made these predictions under ideal conditions, they predicted that they would enjoy spaghetti more in the afternoon than in the morning, and their current hunger had little impact on their predictions. But some of the volunteers made these predictions under less-than-ideal conditions. Specifically, they were asked to make these predictions while simultaneously performing a second task in which they had to identify musical tones. Research has shown that performing a simultaneous task such as this one causes people to stay very close to their starting points. And indeed, when volunteers made predictions while identifying musical tones, they predicted that they would like spaghetti just as much in the morning as in the afternoon. What’s more, their current hunger had a strong impact on their predictions, so that hungry volunteers expected to like spaghetti the next day (no matter when they ate it) and sated volunteers expected to dislike spaghetti the next day (no matter when they ate it). 
This pattern of results suggests that all volunteers made their predictions by the flip-then-flop method: They first imagined how much they would enjoy eating the spaghetti in the present (“Yum!” if they were hungry and “Yuck!” if they were full) and used this prefeeling as a starting point for their prediction of tomorrow’s pleasures. Then, just as the hypothetical teenager corrected his judgment when he considered the fact that his current appreciation of a curvaceous coquette would probably be different fifty years later, the volunteers corrected their judgments by considering the time of day at which the spaghetti would be eaten (“Spaghetti for dinner is terrific, but spaghetti for breakfast? Yuck!”). However, volunteers who had made their predictions while identifying musical tones were unable to correct their judgments, and as such, their ending point was quite close to their starting point. Because we naturally use our present feelings as a starting point when we attempt to predict our future feelings, we expect our future to feel a bit more like our present than it actually will.
- If you’ve ever fallen asleep one night with the television blaring and been awakened another night by a single footstep, then you already know the answer. The human brain is not particularly sensitive to the absolute magnitude of stimulation, but it is extraordinarily sensitive to differences and changes — that is, to the relative magnitude of stimulation. For example, if I blindfolded you and asked you to hold a wooden block in your hand, would you be able to tell if I then placed a pack of gum on top of it? The right answer is “It depends,” and what it depends on is the weight of the block. If the block weighed only an ounce, then you’d immediately notice the 500 percent increase in weight when I added a five-ounce pack of gum. But if the block weighed ten pounds, then you’d never notice the 3 percent increase in weight. There is no answer to the question “Can people detect five ounces?” because brains do not detect ounces, they detect changes in ounces and differences in ounces, and the same is true for just about every physical property of an object. Our sensitivity to relative rather than absolute magnitudes is not limited to physical properties such as weight, brightness, or volume. It extends to subjective properties, such as value, goodness, and worth as well. For instance, most of us would be willing to drive across town to save $50 on the purchase of a $100 radio but not on the purchase of a $100,000 automobile because $50 seems like a fortune when we’re buying radios (“Wow, Target has the same radio for half off!”) but a pittance when we’re buying cars (“Like I’m going to schlep across the city just to get this car for one twentieth of a percent less?”).
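(Spelling out the wooden-block arithmetic from the passage, since the same five-ounce change reads as enormous or invisible depending on the baseline:)

```python
# Same absolute change, very different relative change, and relative change is
# what the brain registers. Weights are the ones used in the passage above.
OUNCES_PER_POUND = 16

light_block_oz = 1.0                       # one-ounce block
heavy_block_oz = 10 * OUNCES_PER_POUND     # ten-pound block = 160 ounces
gum_oz = 5.0                               # five-ounce pack of gum

print(f"{gum_oz / light_block_oz:.0%} heavier")  # 500% heavier: impossible to miss
print(f"{gum_oz / heavy_block_oz:.1%} heavier")  # 3.1% heavier: easy to miss
```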
Economists shake their heads at this kind of behavior and will correctly tell you that your bank account contains absolute dollars and not “percentages off.” If it is worth driving across town to save $50, then it doesn’t matter which item you’re saving it on because when you spend these dollars on gas and groceries, the dollars won’t know where they came from. But these economic arguments fall on deaf ears because human beings don’t think in absolute dollars. They think in relative dollars, and fifty is or isn’t a lot of dollars depending on what it is relative to (which is why people who don’t worry about whether their mutual-fund manager is keeping 0.5 or 0.6 percent of their investment will nonetheless spend hours scouring the Sunday paper for a coupon that gives them 40 percent off a tube of toothpaste). Marketers, politicians, and other agents of influence know about our obsession with relative magnitudes and routinely turn it to their own advantage. For instance, one ancient ploy involves asking someone to pay an unrealistically large cost (“Would you come to our Save the Bears meeting next Friday and then join us Saturday for a protest march at the zoo?”) before asking them to pay a smaller cost (“Okay then, could you at least contribute five dollars to our organization?”). Studies show that people are much more likely to agree to pay the small cost after having first contemplated the large one, in part because doing so makes the small cost seem so . . . er, bearable.
Because the subjective value of a commodity is relative, it shifts and changes depending on what we compare the commodity to. For instance, every morning on my walk to work I stop at my neighborhood Starbucks and hand $1.89 to the barista, who then hands me twenty ounces of better-than-average coffee. I have no idea what it costs Starbucks to make this coffee, and I have no idea why they have chosen to charge me this particular amount, but I do know that if I stopped in one morning and found that the price had suddenly jumped to $2.89, I would immediately do one of two things: I would compare the new price to the price I used to pay, conclude that coffee at Starbucks had gotten too damned expensive, and invest in one of those vacuum-sealed travel mugs and start brewing my coffee at home; or I would compare the new price to the price of other things I could buy with the same amount of cash (e.g., two felt-tip markers, a thirty-two-inch branch of artificial bamboo, or 1/100th of the twenty-CD boxed set The Complete Miles Davis at Montreux) and conclude that the coffee at Starbucks was a bargain. In theory I could make either of these comparisons, so which one would I actually make?
We both know the answer to that: I’d make the easy one. When I encounter a $2.89 cup of coffee, it’s all too easy for me to recall what I paid for coffee the day before and not so easy for me to imagine all the other things I might buy with my money. Because it is so much easier for me to remember the past than to generate new possibilities, I will tend to compare the present with the past even when I ought to be comparing it with the possible. And that is indeed what I ought to be doing because it really doesn’t matter what coffee cost the day before, the week before, or at any time during the Hoover administration. Right now I have absolute dollars to spend and the only question I need to answer is how to spend them in order to maximize my satisfaction. If an international bean embargo suddenly caused the price of coffee to skyrocket to $10,000 per cup, then the only question I would need to ask myself is: “What else can I do with ten thousand dollars, and will it bring me more or less satisfaction than a cup of coffee?” If the answer is “more,” then I should walk away. If the answer is “less,” then I should get a cup of coffee. And an accountant with a whip.
The fact that it is so much easier to remember the past than to generate the possible causes us to make plenty of weird decisions. For instance, people are more likely to purchase a vacation package that has been marked down from $600 to $500 than an identical package that costs $400 but that was on sale the previous day for $300. Because it is easier to compare a vacation package’s price with its former price than with the price of other things one might buy, we end up preferring bad deals that have become decent deals to great deals that were once amazing deals. The same tendency leads us to treat commodities that have a “memorable past” differently from those that don’t. For example, imagine that you have a $20 bill and a $20 concert ticket in your wallet, but when you arrive at the concert you realize that you’ve lost the ticket en route. Would you buy a new one? Most people say no. Now imagine that instead of a $20 bill and a $20 ticket, you have two $20 bills in your wallet, and when you arrive at the concert you realize that you’ve lost one of the bills en route. Would you buy a concert ticket? Most people say yes. It doesn’t take a logician to see that the two examples are identical in all the ways that matter: In both cases you’ve lost a piece of paper that was valued at $20 (a ticket or a bill), and in both cases you must now decide whether to spend the money that remains in your wallet on a concert. Nonetheless, our stubborn insistence on comparing the present to the past leads us to reason differently about these functionally equivalent cases. When we lose a $20 bill and then contemplate buying a concert ticket for the first time, the concert has no past, hence we correctly compare the cost of seeing the concert with other possibilities (“Should I spend twenty dollars to see the concert, or should I buy some new sharkskin mittens?”). But when we lose a ticket we’ve previously purchased and contemplate “replacing it,” the concert has a past, and hence we compare the current cost of seeing the concert ($40) with its previous cost ($20) and feel disinclined to see a performance whose price has suddenly doubled. - We make mistakes when we compare with the past instead of the possible. When we do compare with the possible, we still make mistakes. For example, if you’re like me, your living room is a mini-warehouse of durable goods ranging from chairs and lamps to stereos and television sets. You probably shopped around a bit before buying these items, and you probably compared the one you ultimately bought with a few alternatives — other lamps in the same catalog, other chairs on the showroom floor, other stereos on the same shelf, other televisions at the same mall. Rather than deciding whether to spend money, you were deciding how to spend money, and all the possible ways of spending your money were laid out for you by the nice folks who wanted it. These nice folks helped you overcome your natural tendency to compare with the past (“Is this television really that much better than my old one?”) by making it extremely easy for you to compare with the possible (“When you see them side by side here in the store, the Panasonic has a much sharper picture than the Sony”). Alas, we are all too easily fooled by such side-by-side comparisons, which is why retailers work so hard to ensure that we make them.
For example, people generally don’t like to buy the most expensive item in a category, hence retailers can improve their sales by stocking a few very expensive items that no one actually buys (“Oh my God, the 1982 Château Haut-Brion Pessac-Léognan sells for five hundred dollars a bottle!”) but that make less expensive items seem like a bargain by comparison (“I’ll just stick with the sixty-dollar zinfandel”). Unscrupulous real estate agents bring buyers to dilapidated dumps that are conveniently located between a massage parlor and a crack house before bringing them to the ordinary homes that they actually hope to sell, because the dumps make the ordinary homes seem extraordinary (“Oh, look, honey, no needles on the lawn!”). Our side-by-side comparisons can be influenced by extreme possibilities such as extravagant wines and dilapidated houses, but they can also be influenced by the addition of extra possibilities that are identical to those we are already considering. For example, in one study, physicians read about Medication X and were then asked whether they would prescribe the medication for a patient with osteoarthritis. The physicians clearly considered the medication worthwhile, because only 28 percent chose not to prescribe it. But when another group of physicians was asked whether they would prescribe Medication X or an equally effective Medication Y for a patient with the same disease, 48 percent chose to prescribe nothing. Apparently, adding another equally effective medication to the list of possibilities made it difficult for the physicians to decide between the two medications, thus leading many of them to recommend neither. If you’ve ever caught yourself saying, “I’m having such a hard time deciding between these two movies that I think I’ll just stay home and watch reruns instead,” then you know why physicians made the mistake they did.
One of the most insidious things about side-by-side comparison is that it leads us to pay attention to any attribute that distinguishes the possibilities we are comparing. I’ve probably spent some of the unhappiest hours of my life in stores that I meant to visit for fifteen minutes. I stop at the mall on the way to the picnic, park the car, dash in, and expect to reemerge a few minutes later with a nifty little digital camera in my pocket. But when I get to Wacky Bob’s Giant Mega Super Really Big World of Cameras, I am confronted by a bewildering panoply of nifty little digital cameras that differ on many attributes. Some of these are attributes that I would have considered even if there had been only one camera in the display case (“This is light enough to fit in my shirt pocket so I can take it anywhere”), and some are attributes I would never have thought about had the differences between cameras not been called to my attention (“The Olympus has flash output compensation, but the Nikon doesn’t. By the way, what is flash output compensation?”). Because side-by-side comparisons cause me to consider all the attributes on which the cameras differ, I end up considering attributes that I don’t really care about but that just so happen to distinguish one camera from another. For example, what attributes would you care about if you were shopping for a new dictionary? In one study, people were given the opportunity to bid on a dictionary that was in perfect condition and that listed ten thousand words, and on average they bid $24. Other people were given the opportunity to bid on a dictionary with a torn cover that listed twenty thousand words, and on average they bid $20. But when a third group of people was allowed to compare the two dictionaries side by side, they bid $19 for the small intact dictionary and $27 for the large torn dictionary. Apparently, people care about the condition of a dictionary’s cover, but they care about the number of words it contains only when that attribute is brought to their attention by side-by-side comparison. - Most of us have had similar experiences. We compare the small, elegant speakers with the huge, boxy speakers, notice the acoustical difference, and buy the hulking leviathans. Alas, the acoustical difference is a difference we never notice again, because when we get the monster speakers home we do not compare their sound to the sound of some speaker we listened to a week earlier at the store, but we do compare their awful boxiness to the rest of our sleek, elegant, and now-spoiled décor. Or we travel to France, meet a couple from our hometown, and instantly become touring buddies because compared with all those French people who hate us when we don’t try to speak their language and hate us more when we do, the hometown couple seems exceptionally warm and interesting. We are delighted to have found these new friends, and we expect to like them just as much in the future as we do today. But when we have them over for dinner a month after returning home, we are surprised to find that our new friends are rather boring and remote compared with our regular friends, and that we actually dislike them enough to qualify for French citizenship. 
Our mistake was not in touring Paris with a couple of dull homies but in failing to realize that the comparison we were making in the present (“Lisa and Walter are so much nicer than the waiter at Le Grand Colbert”) is not the comparison we would be making in the future (“Lisa and Walter aren’t nearly as nice as Toni and Dan”). The same principle explains why we love new things when we buy them and then stop loving them shortly thereafter. When we start shopping for a new pair of sunglasses, we naturally contrast the hip, stylish ones in the store with the old, outdated ones that are sitting on our noses. So we buy the new ones and stick the old ones in a drawer. But after just a few days of wearing our new sunglasses we stop comparing them with the old pair, and — well, what do you know? The delight that the comparison produced evaporates.
- When facts challenge our favored conclusion, we scrutinize them more carefully and subject them to more rigorous analysis. We also require a lot more of them. For example, how much information would you require before you were willing to conclude that someone was intelligent? Would their high school transcripts be enough? Would an IQ test suffice? Would you need to know what their teachers and employers thought of them? Volunteers in one study were asked to evaluate the intelligence of another person, and they required considerable evidence before they were willing to conclude that the person was truly smart. But interestingly, they required much more evidence when the person was an unbearable pain in the ass than when the person was funny, kind, and friendly. When we want to believe that someone is smart, then a single letter of recommendation may suffice; but when we don’t want to believe that person is smart, we may demand a thick manila folder full of transcripts, tests, and testimony.
Precisely the same thing happens when we want or don’t want to believe something about ourselves. For instance, volunteers in one study were invited to take a medical test that would supposedly tell them whether they did or did not have a dangerous enzyme deficiency that would predispose them to pancreatic disorders. The volunteers placed a drop of their saliva on a strip of ordinary paper that the researchers falsely claimed was a medical test strip. Some volunteers (positive-testers) were told that if the strip turned green in ten to sixty seconds, then they had the enzyme deficiency. Other volunteers (negative-testers) were told that if the strip turned green in ten to sixty seconds, then they didn’t have the enzyme deficiency. Although the strip was an ordinary piece of paper and hence never turned green, the negative-testers waited much longer than the positive-testers before deciding that the test was complete. In other words, the volunteers gave the test strip plenty of time to prove that they were well but much less time to prove that they were ill. Apparently it doesn’t take much to convince us that we are smart and healthy, but it takes a whole lotta facts to convince us of the opposite. We ask whether facts allow us to believe our favored conclusions and whether they compel us to believe our disfavored conclusions. Not surprisingly, disfavored conclusions have a much tougher time meeting this more rigorous standard of proof.
- Intense suffering is one factor that can trigger our defenses and thus influence our experiences in ways we don’t anticipate. But there are others. For example, why do we forgive our siblings for behavior we would never tolerate in a friend? Why aren’t we disturbed when the president does something that would have kept us from voting for him had he done it before the election? Why do we overlook an employee’s chronic tardiness but refuse to hire a job seeker who is two minutes late for the interview? One possibility is that blood is thicker than water, flags were made to be rallied around, and first impressions matter most. But another possibility is that we are more likely to look for and find a positive view of the things we’re stuck with than of the things we’re not. Friends come and go, and changing candidates is as easy as changing socks. But siblings and presidents are ours, for better or for worse, and there’s not much we can do about it once they’ve been born or elected. When the experience we are having is not the experience we want to be having, our first reaction is to go out and have a different one, which is why we return unsatisfactory rental cars, check out of bad hotels, and stop hanging around with people who pick their noses in public. It is only when we cannot change the experience that we look for ways to change our view of the experience, which is why we love the clunker in the driveway, the shabby cabin that’s been in the family for years, and Uncle Sheldon despite his predilection for nasal spelunking. We find silver linings only when we must, which is why people experience an increase in happiness when genetic tests reveal that they don’t have a dangerous genetic defect, or when the tests reveal that they do have a dangerous genetic defect, but not when the tests are inconclusive. We just can’t make the best of a fate until it is inescapably, inevitably, and irrevocably ours.
- Most of us will pay a premium today for the opportunity to change our minds tomorrow, and sometimes it makes sense to do so. A few days spent test-driving a little red roadster tells us a lot about what it might be like to own one, so it is sometimes wise to pay a modest premium for a contract that includes a short refund period. But if keeping our options open has benefits, it also has costs. Little red roadsters are naturally cramped, and while the committed owner will find positive ways to view that fact (“Wow! It feels like a fighter jet!”), the buyer whose contract includes an escape clause may not (“This car is so tiny. Maybe I should return it”). Committed owners attend to a car’s virtues and overlook its flaws, thus cooking the facts to produce a banquet of satisfaction, but the buyer for whom escape is still possible (and whose defenses have not yet been triggered) is likely to evaluate the new car more critically, paying special attention to its imperfections as she tries to decide whether to keep it. The costs and benefits of freedom are clear — but alas, they are not equally clear: We have no trouble anticipating the advantages that freedom may provide, but we seem blind to the joys it can undermine.
- Unexplained events have two qualities that amplify and extend their emotional impact. First, they strike us as rare and unusual. If I told you that my brother, my sister, and I were all born on the same day, you’d probably consider that a rare and unusual occurrence. Once I explained that we were triplets, you’d find it considerably less so. In fact, just about any explanation I offered (“By same day I meant we were all born on a Thursday” or “We were all delivered by cesarean section, so Mom and Dad timed our births for maximum tax benefits”) would tend to reduce the amazingness of the coincidence and make the event seem more probable. Explanations allow us to understand how and why an event happened, which immediately allows us to see how and why it might happen again. Indeed, whenever we say that something can’t happen — for example, mind reading or levitation or a law that limits the power of incumbents — we usually just mean that we’d have no way to explain it if it did. Unexplained events seem rare, and rare events naturally have a greater emotional impact than common events do. We are awed by a solar eclipse but merely impressed by a sunset despite the fact that the latter is by far the more spectacular visual treat.
The second reason why unexplained events have a disproportionate emotional impact is that we are especially likely to keep thinking about them. People spontaneously try to explain events, and studies show that when people do not complete the things they set out to do, they are especially likely to think about and remember their unfinished business. Once we explain an event, we can fold it up like freshly washed laundry, put it away in memory’s drawer, and move on to the next one; but if an event defies explanation, it becomes a mystery or a conundrum — and if there’s one thing we all know about mysterious conundrums, it is that they generally refuse to stay in the back of our minds. Filmmakers and novelists often capitalize on this fact by fitting their narratives with mysterious endings, and research shows that people are, in fact, more likely to keep thinking about a movie when they can’t explain what happened to the main character. And if they liked the movie, this morsel of mystery causes them to remain happy longer.
- We remember feeling as we believe we must have felt. The problem with this error of retrospection is that it can keep us from discovering our errors of prospection. Consider the case of the 2000 U.S. presidential election. Voters went to the polls on November 7, 2000, to decide whether George Bush or Al Gore would become the forty-third president of the United States, but it quickly became clear that the election was too close to call and that its outcome would take weeks to decide. The next day, November 8, researchers asked some voters to predict how happy they would be on the day the election was ultimately decided for or against their favored candidate. On December 13 Al Gore conceded to George Bush, and the next day, December 14, the researchers measured the actual happiness of the voters. Four months later, in April 2001, the researchers contacted the voters again and asked them to recall how they had felt on December 14. As figure 22 shows, the study revealed three things. First, on the day after the election, pro-Gore voters expected to be devastated and pro-Bush voters expected to be elated if George Bush was ultimately declared the winner. Second, when George Bush was ultimately declared the winner, pro-Gore voters were less devastated and pro-Bush voters were less elated than they had expected to be (a tendency you’ve seen before in other chapters). But third and most important, a few months after the election was decided, both groups of voters remembered feeling as they had expected to feel, and not as they had actually felt. Apparently, prospections and retrospections can be in perfect agreement despite the fact that neither accurately describes our actual experience. The theories that lead us to predict that an event will make us happy (“If Bush wins, I’ll be elated”) also lead us to remember that it did (“When Bush won, I was elated”), thereby eliminating evidence of their own inaccuracy. This makes it unusually difficult for us to discover that our predictions were wrong. We overestimate how happy we will be on our birthdays, we underestimate how happy we will be on Monday mornings, and we make these mundane but erroneous predictions again and again, despite their regular disconfirmation. Our inability to recall how we really felt is one of the reasons why our wealth of experience so often turns out to be a poverty of riches.
- Because if you are like most people, then like most people, you don’t know you’re like most people. Science has given us a lot of facts about the average person, and one of the most reliable of these facts is that the average person doesn’t see herself as average. Most students see themselves as more intelligent than the average student, most business managers see themselves as more competent than the average business manager, and most football players see themselves as having better “football sense” than their teammates. Ninety percent of motorists consider themselves to be safer-than-average drivers, and 94 percent of college professors consider themselves to be better-than-average teachers. Ironically, the bias toward seeing ourselves as better than average causes us to see ourselves as less biased than average too. As one research team concluded, “Most of us appear to believe that we are more athletic, intelligent, organized, ethical, logical, interesting, fair-minded, and healthy — not to mention more attractive — than the average person.” This tendency to think of ourselves as better than others is not necessarily a manifestation of our unfettered narcissism but may instead be an instance of a more general tendency to think of ourselves as different from others — often for better but sometimes for worse. When people are asked about generosity, they claim to perform a greater number of generous acts than others do; but when they are asked about selfishness, they claim to perform a greater number of selfish acts than others do. When people are asked about their ability to perform an easy task, such as driving a car or riding a bike, they rate themselves as better than others; but when they are asked about their ability to perform a difficult task, such as juggling or playing chess, they rate themselves as worse than others. We don’t always see ourselves as superior, but we almost always see ourselves as unique. Even when we do precisely what others do, we tend to think that we’re doing it for unique reasons. For instance, we tend to attribute other people’s choices to features of the chooser (“Phil picked this class because he’s one of those literary types”), but we tend to attribute our own choices to features of the options (“But I picked it because it was easier than economics”). We recognize that our decisions are influenced by social norms (“I was too embarrassed to raise my hand in class even though I was terribly confused”), but fail to recognize that others’ decisions were similarly influenced (“No one else raised a hand because no one else was as confused as I was”). We know that our choices sometimes reflect our aversions (“I voted for Kerry because I couldn’t stand Bush”), but we assume that other people’s choices reflect their appetites (“If Rebecca voted for Kerry, then she must have liked him”). The list of differences is long but the conclusion to be drawn from it is short: The self considers itself to be a very special person.
If you liked the above content, I'd definitely recommend reading the whole book. 💯
Until We Meet Again...
🖖 swap