There are lots of ways to change behavior (again, read Dr. Susan Weinschenk’s book “How To Get People To Do Stuff”). But let’s talk specifically about a war of ideas. Information, such as a news report, can change your opinion on a topic, or at least in theory it’s supposed to.
As long as humans have been around there have been wars of ideas between us. Think of Communist propaganda, or wars of religion. Turning people to your side using words alone has been one of humanity’s earliest mind weapons.
I want to focus on saving electricity to reduce greenhouse gas emissions.
Behavior change is hard. You can tell people all day long that climate change is real, give them facts, but it’s very, very hard to turn that into using less electricity. Is it even possible to achieve behavior change with pure information? What information could you give that would turn into behavior change?
There’s a nice study by Nolan, Schultz, Cialdini, Goldstein, and Griskevicius that set out to study exactly that question. They attempted to answer two questions. First, are people self-aware enough to know what sort of information can change their behavior? And second, what information actually works? The paper is called (spoiler alert) “Normative Social Influence is Underdetected”. I kinda wish they would have buried the lede, or at least made the title so convoluted that you, the reader, wouldn’t understand it.
Nolan et al. created five experimental messages (check out Study 2 for more information). The first was environmental protection, the second social responsibility, the third self-interest, the fourth social norms, and the fifth an information-only control.
Each had a different way of trying to convince people to save electricity. The researchers had research assistants go door to door handing out doorhangers with information, conducting interviews, reading electrical meters, and more; it was a well-funded study.
After providing the information, the researchers then measured the subjects’ electrical consumption over the short term and over the long term.
The results were interesting.
Take a look at this table. In the left column are the different conditions. The middle two columns show the short-term effect in daily kilowatt-hours (kWh, a measure of electricity usage). M stands for the mean, which is what we are interested in, and SE stands for standard error, which you can ignore. The right two columns show the long-term effect.
Notice that the ONLY condition that seems to have any major effect is social norms.
I’ll quote from the paper:
“Despite the fact that participants believed that the behavior of their neighbors – the [social] norm – had the least impact on their own energy conservation, results showed that the [social] norm actually had the strongest effect on participants’ energy conservation behaviors.”
All the arguments for social responsibility, saving the planet, self-interest (saving money), and so on? None of them really worked. Maybe a tiny little sliver of something. But telling people how much electricity their neighbors used, and showing them that they themselves were using more? That had a large and lasting impact.
The best, and perhaps only, way to change behavior when information is the only tool available to you is to show people how “bad” a job they are doing relative to other people.
People just want to be cool. They want to fit in. Social pressure is a much, much stronger motivator than any sort of love for the environment.
If you’re designing a campaign to change minds, use social pressure. Turn it into a “club”, a “group”, a tribe. Defining what the tribe is and what it takes to be “in” it is the best way to get people to do stuff. We humans will do almost anything (*ahem cough* Nazis *cough*) to be part of the group, or a group society thinks is important.
Give it a try. It’s also cheaper than rewards so that’s a plus 😊.
Nolan, J. M., Schultz, P. W., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2008). Normative Social Influence is Underdetected. Personality and Social Psychology Bulletin, 34(7), 913-923. doi:10.1177/0146167208316691
Quick post to talk about affect. As a verb, affect means to influence or make a difference; this is different from effect, which as a noun essentially means result (the effect of the great pitching was a win). Psychologists also use affect as a noun, meaning the experience of emotion or feeling, and that’s the sense that matters here.
The affective primacy hypothesis asserts that positive and negative affective reactions can be evoked with minimal stimulus input and virtually no cognitive processing.
It’s mind control, okay? It’s evil subliminal messaging, flashing bursts at your brain so fast it doesn’t have time to adequately process them. Let’s get into the details with an oldie but a goodie, a famous paper by Murphy and Zajonc called “Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures.”
They set out to see if they could change people’s mood without them really being aware of what was happening.
Study 1 had two conditions. Half of the subjects were exposed to the optimal exposure condition, and half to the suboptimal exposure condition.
The “cover” of the study was that the subjects would be presented with an assortment of Chinese characters to rate. But there was a secret the subjects didn’t know. Right before they were to rate the Chinese characters, a slide of a face would be flashed. The faces were either male or female faces expressing happiness or anger.
In the optimal condition the faces were shown for 1000 ms (1 second). People were able to clearly see the primes but were told to ignore them. In the super-secret suboptimal condition, the faces were flashed for only 4 ms.
The theory is that because the faces were flashed so fast in the suboptimal condition, only 4 ms, the processing happens entirely unconsciously. Your brain has synapses that need to fire before you consciously recognize something, and by the time you could react the image is already gone, so it stays in your unconscious processing.
Here’s the results:
The bars in the chart show how highly the Chinese characters were rated. The black bar is when a negative (mad) face was flashed right before the rating; the white bar is when a positive (smiling) face was flashed right before.
In the optimal condition (1000 ms) we have fairly close parity. However, in the suboptimal condition, where the faces were flashed for only 4 ms, we have a noticeable reaction. There’s quite a large difference: the mean rating for characters shown after a negative face was only 2.75, whereas after a positive face the rating was almost 3.50.
What an interesting result. The researchers dove deeper.
Maybe it’s a fluke? Maybe people just didn’t like faces lording over them.
So in Study 2 they re-did Study 1, but this time people rated only if they thought the object was “good” or “bad”. Now these are random Chinese characters. There should be no association one way or the other. Results? Same as Study 1.
We see a little movement in the optimal (1000 ms), but a huge difference in the suboptimal (4 ms). When people are flashed with a little something and are not able to process it fully, it makes a difference somehow.
Does this work with other primes besides happy or sad?
In Study 3 subjects were asked to rate the size of an object, where 1 was small like a mouse, and 5 was big like a tree.
However, instead of being primed with a face, subjects were primed with either a large shape or a small shape. Again, the optimal condition got the picture for 1000 ms, and the suboptimal only got the shape for 4ms.
The results were highly significant (with an F value of over 20, for you econ kids out there), but different from what the researchers were expecting:
Small primes led to smaller overall ratings, BUT ONLY IN THE OPTIMAL CASE (1000 ms). There was no change in the suboptimal (4 ms).
It is this author’s opinion that this is because of the fusiform face area, or FFA. What we now know is that there’s a small little part of your brain whose job is to identify faces incredibly fast, and to “feel” what that face is feeling (it sits in the fusiform gyrus of the temporal lobe, near other emotional processing areas).
When you see a face you instantly process if the expression shows that the person is happy, sad, worried, etc… From an evolutionary perspective the FFA may be incredibly important to our social skills. There’s a theory that the reason people with autism have trouble identifying the moods of others is that their FFA is not connected in the same way to the amygdala where emotions are processed.
People with autism can “see” the face, but not “feel” the emotion. Our FFA is so good it can instantly see a face even when the image is not actually a face. Cloud in the sky? Stain on some bricks? Smiley face? Frowny face hand drawing? Emoticon? Front of a car? Dogs. Cats. Cartoons. Our FFA lights up and fires insanely fast.
Sorry for the tangent but that’s the important part here. The FFA is designed to basically be a fast pre-processing area. It fires very quickly.
Processing attributes of a picture other than an emotional face takes more time. The image has to be rolled around through the visual cortex and then to somewhere else, then something else, etc. It takes more than 4 ms. So it “doesn’t register” with the brain.
One quick note to point out is that I am not saying that the FFA is done firing in 4 ms, rather that the FFA only needs 4 ms exposure time to fire and process, whereas other areas of the brain may require more exposure time. It is also possible that it’s the amygdala that is working super quickly with short exposure time as well.
When an image sticks around longer, as in the 1000 ms optimal condition, it mucks around with the framing parts of our brain. We are susceptible to framing. If you show me something large and then ask me whether this squiggle is large, I’ll lean toward yes. So you see quite a large jump in the optimal case, from about a 3.1 size rating when primed with the small image up to a 4.0 size rating when primed with the large one.
Study 4 followed Study 3, but this time subjects were asked to judge the symmetry of objects, where 1 would be not at all symmetrical and 5 would be a perfect circle.
Again, there was no difference in the suboptimal condition (4 ms), but a significant difference in the optimal (1000 ms).
From this the researchers surmised that geometric shapes also require a longer exposure time to work as primes.
Study 5 tested masculine vs. feminine, and again, there was no change in the suboptimal which was primed for only 4ms, but a shift in rating something masculine vs. feminine in the optimal.
So, general conclusions then. I quote from the paper:
“Primes shown as briefly as 4ms can allow subjects to discriminate between faces that differ in emotional polarity. Distinct faces that do not differ in affective polarity, even if they differ in such obvious ways as gender, cannot be accurately discriminated from one another if they are exposed for only 4 ms.”
What can this tell us? If you want to use evil subliminal messaging, use faces with expressions. This is why faces are so powerful: they are processed so quickly. They are “lower down the brainstem” (I didn’t invent that phrase, I borrowed it). Faces are more primal, and we have less control over our reaction to them.
If you want to use other pictures to help frame emotions you need to have their attention for at least 1 second. And I know that sounds short but in a world of advertising, a “one-mississippi” can be quite a while.
We are social creatures and our brains are evolved to put an emphasis on social cues, food, danger, and sex. Don’t underestimate how strongly they get our attention.
Murphy, S. T., & Zajonc, R. B. (1993). Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures. Journal of Personality and Social Psychology, 64(5), 723-739. doi:10.1037//0022-3514.64.5.723
Have you ever seen a chameleon? They instantly change color to adapt to whatever their background is.
There’s a theory that humans can do this too. Of course, we can’t change our skin color to match our environment, but the theory is that if you see someone behave in a certain way, you’ll follow that behavior. Think of popular beliefs like yawning or sneezing being contagious, or crowds mirroring each other’s indifference to suffering. There are lots of these ideas floating around.
Well, for each myth there has been a study, and since we’re romping around in behavioral economics land, I wanted to look at a paper by Chartrand and Bargh, aptly entitled “The chameleon effect: The perception-behavior link and social interaction.”
Their experiments require a “primer”. Usually they used what psychologists call a confederate: someone who appears to be a participant like everyone else… but is actually an inside imposter, planted by the researchers to get interesting results.
In Experiment 1 subjects participated in two consecutive sessions. Both had a 10-minute interaction with a confederate. They were told to describe photos in the sessions but of course the photos were simply a distraction from the real study.
The confederates, who were trained actors, varied their mannerisms throughout the interactions. During Session 1, the confederate either rubbed his or her face, or shook his or her foot. During Session 2, the confederate did the inverse of Session 1; so if Session 1 was a foot shake, Session 2 would be a face rub.
Afterwards they did a post-experiment interview and only 1 person guessed the other person was a confederate, and no one guessed what the confederate was up to.
Here are the results:
Even though no one noticed the confederates doing face rubs or foot shakes, when the confederate rubbed their face, participant face rubbing increased about 25%. And when the confederate instead shook their foot, participant foot shaking more than doubled (108%).
So clearly there is some sort of monkey-see-monkey-do unconscious processing going on. I’ll discuss why after we go through the other experiments. One more interesting data point from Experiment 1 involved smiling. Participants smiled more times per minute with a smiling confederate (mean smiles per minute of 1.03) than with a neutral confederate (mean smiles per minute of .36). I should also note that the confederates were instructed not to make friends; only to smile.
Further, participants performed the intended action (the face rub or foot shake) more times per minute with the nonsmiling confederate than with the smiling confederate (mean .56 vs. mean .40). Very interesting indeed… Put a pin in this and let’s move on to Experiment 2.
Experiment 2 (electric boogaloo) was all about the “need to belong”. Now Dr. Susan Weinschenk has written extensively about this in her book “How To Get People To Do Stuff” as it is one of the 7 drivers of motivation.
The goal of this experiment was to see if they could unconsciously manipulate people into enjoying their interaction with a confederate. After a 15-minute session with a confederate people were asked to report how much they liked the confederate and how smoothly the session went on a 9-point scale, with 1 being extremely awkward or unlikeable, and 9 being extremely smooth or likeable.
The confederates either engaged in neutral nondescript mannerisms, which acted as the control, or the confederate mirrored the mannerisms of the participant.
I think this is a brilliant evil theory that people like people who are like them. If a participant folded their hands, the confederate would fold theirs, etc…
The confederates, being talented actors, played their part beautifully. Only 1 person “figured out” that they were being mimicked, and an outside panel of judges rated the confederates’ openness and friendliness toward the participants.
This is very important to point out: there was NO difference in those ratings between the neutral control and the mimicking condition. It is not the case that the mimickers were more friendly, made more eye contact, smiled more, or were judged to like the participants more. This was controlled for. So the results are not simply friendly vs. not-friendly.
The results are fun. Participants in the experimental condition reported liking the confederates more (M=6.62) than the control (M=5.91), and also reported a smoother interaction (M=6.76) than the control (M=6.02).
Now there are certainly potential large implications in this study; from politics to sales to friendship. Let’s again put a pin in this and talk about the last Experiment before we sum everything up. The thing to take away from Experiment 2 is that human interactions go smoother and are more positive if you just mimic the movements and actions of the other person.
Experiment 3 was the same as Experiments 1 and 2, except that subjects were also given an empathy questionnaire (a perspective-taking subscale), with items along the lines of “when I’m upset with someone I try to put myself in their perspective.”
What they found was that high perspective-takers (the highly empathetic) joined in with the face rubbing and foot shaking more times per minute than low perspective-takers (M=1.30 vs. M=.85, and M=.40 vs. M=.29).
This makes sense. People who are highly empathetic find it easier to feel what you’re feeling. When you feel empathetic the same parts of your brain light up that are lighting up in the brain of the person you’re observing.
If you see someone hurt their leg, a small ghost reflection of mirror neurons in the leg area of your brain will also light up. It’s why stories are so powerful. It happens completely unconsciously. And if you’re the type of person who can more easily slip into that state, then you are more prone to have an unconscious reaction to the external stimuli of others.
Okay, so our first pin was that people in Experiment 1 performed the action more when the confederate was NOT smiling. My theory is that we are always looking for a way to bond unconsciously. We as humans want to relate; we want to connect on whatever level we can. Obviously smiling and laughing together is our natural go-to. But when that’s not available, our brains may slide into other ways to bond, such as face rubbing and foot shaking. BECAUSE, as we find out in Experiment 2, our second pin, people like interactions more with people who are mimicking them.
Maybe we know and understand this innately so our brains are one step ahead of the research. Perhaps Experiment 1’s outcome occurred exactly BECAUSE of the results in Experiment 2. We like people more who mimic us. We crave people to like us (unconsciously), and so we (unconsciously) mimic others to the extent we can to get them to like us. It’s an empathy circle.
And those who are the most empathetic also reach out the most to connect, so they mimic unconsciously the most.
There is a long list of practical applications. Sorority bonding, concerts or sporting events in unison, people in fields with lots of human interaction being more animated and reactive (HR, sales, customer service). If you want people to like you and try to bond with you, try mimicking their energy, behaviors, and mannerisms.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception-behavior link and social interaction. Journal of Personality and Social Psychology, 76(6), 893-910. doi:10.1037//0022-3514.76.6.893
What is economics good for? I think there’s a lot of confusion as to what it can do and what its limitations are.
The problem we economists face is that we’re expected to always have answers, and they must always be accurate. Anything short of that and people declare the entire science bogus.
But economics is only as good as the data it relies on, and data is always imperfect in some way.
The way I like to talk about economics to the general public is that it helps tell you where to look, and also if you’ve found what you’re searching for.
The first part is like advanced geology mapping equipment. Let’s pretend that you’re looking for gold in your back yard. Now, you could stumble around blindly and just dig here or there, and depending on how much gold is actually in the yard, you might find some. But economics can point you to the best spot to dig.
Economics achieves this by figuring out which way the data are pointing. That is to say: to maximize profit, should we decrease prices? If you cut prices, you calculate that you’ll sell more products but make less money per sale. Is it worth it? Economics can give you your answer.
But it’s not perfect because your data isn’t perfect. Maybe your sales estimation model is off. Maybe reducing your prices doesn’t lead to as many new sales as you thought. Just like how you can miss the gold vein, sometimes you can end up with the wrong result. But it helps you get close. And with more refinement you can often strike something.
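To make that concrete, here’s a toy sketch in Python. Nothing in it comes from a real study; the demand curve, the cost, and the candidate prices are all made up, purely to show how the volume-versus-margin trade-off gets computed:

```python
def units_sold(price):
    # Hypothetical demand estimate: 1,000 units sell at $10,
    # and every $1 of discount wins roughly 200 more buyers.
    return 1000 + 200 * (10 - price)

def profit(price, unit_cost=4.0):
    # Margin per unit, times units sold at that price.
    return (price - unit_cost) * units_sold(price)

# Scan candidate prices to see whether cutting the price pays for itself.
for p in [8, 9, 9.5, 10, 11, 12]:
    print(f"price ${p}: profit ${profit(p):,.0f}")
```

Under these made-up numbers the sweet spot is $9.50, so a small price cut pays for itself; with a flatter demand curve the same scan would tell you to raise prices instead. Either way, the answer is only as good as the estimated demand curve, which is exactly the imperfection I just described.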
The second way it can help is to verify what you’ve found. So you pull a strange rock out of the ground. Does it have gold in it? Economics can help you test if the strategy you’ve discovered is indeed a winning one.
The way economics can help tell you what it is that you have is with the magic of p-values, which you’ll see in most econ literature. The p stands for probability: roughly, the probability that an effect at least as big as the one you’re seeing would show up by random chance alone. The lower the p-value, the more confident you can be that an effect is “real”. The higher the p-value, the more likely the result is just the randomness of data.
A quick example is the fastest way to illustrate the point. You’re flipping a coin, heads or tails. My hypothesis is that the coin is rigged to always land on heads.
You flip the coin and it lands on heads twice in a row. Well, that’s data in the direction of my hypothesis, but you could easily get the same result with a normal coin. So the p-value would be maybe .5, or a 50% chance that plain random flips would produce this (yes economists, I know this isn’t how a p-value is directly calculated, but I’m trying to keep things simple to illustrate the point).
The next two flips are tails. Wow. The chance that a rigged coin would “misfire” twice in a row is pretty unlikely. Our p-value jumps to maybe .99, or 99%. We’re almost certain it’s not a rigged coin based on the data that we have.
Then the next 10 flips are heads. Every single one, right in a row. That is statistically fairly unlikely, but not impossible (probability of about .1%). So our p-value drops to maybe .07. Then the next 10 are heads. 20 head flips in a row? That’s really very unlikely to be a “localized streak” (probability of about .0001%). There’s almost certainly some connection between the coin and these flips; it almost certainly can’t be random chance!
For our example, let’s assume our p-value falls to .04, or 4%. It is generally accepted in the scientific community that a p-value under 5% is “statistically significant”. That is to say, we’ve crossed a magical threshold. I can tell you with reasonable certainty that something about these flips is indeed rigged. It’s still possible that I’m wrong, but it’s unlikely enough that I can say with reasonable certainty that the rigging of the toss is real.
Then we flip heads another 10 times in a row. Well, now we’ve flipped 2 heads, 2 tails, and 30 heads in a row. The chances of a fair coin landing heads 30 times in a row are astronomically small. I mean like .0000001%. Another way to think about it: you’d expect a run like this about once in a series of a billion coin flips. We’re talking rare.
So our p-value now drops below .01, maybe to .009, or a .9% chance that the effect is due to chance (in reality it would be much lower with 30 straight flips, but stay with my analogy). We can be almost positive that our results are in fact real. There is something rigged about the tosses. The chance that they are not connected is practically, but not entirely, zero. There is truth to the famous Mac (from IASIP) quote that “with God, all things are possible”. And that’s certainly true. But really our data suggests we have a fact. Under 1%, or a p-value <.01, is the next generally accepted threshold for scientists. Usually, where 5% gets a * (to mark significance), 1% gets ** (two stars)!
Okay so let’s flip the coins some more, and let’s in fact say that you flip heads another 70 times.
That’s a run of 100 head flips in a row. The odds become… impossible on a near universal scale. Like. A .00000000000000000000000000007% chance. I mean it’s such a small number it’s insane. In the scientific community we’d just say your p-value is now <.001. This is generally regarded as the last stop and is given *** (3 stars) to denote its statistical significance. There’s generally no point in going smaller because at just a .1% chance of being due to a random streak of data we can say with confidence that this effect is real.
Certain applications of statistics will push for an even lower p-value, but it’s really just to make a point. At <.001 whatever you are trying to prove is a fact.
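If you want to check my streak arithmetic, here’s a quick sketch (mine, not from any paper). A streak’s probability is just 0.5 raised to its length, and the last line computes a proper binomial tail probability for the full record of 32 heads in 34 flips:

```python
import math

def p_at_least(k, n):
    """Exact probability of k or more heads in n fair coin flips:
    the chance that luck alone produces a result this lopsided."""
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n

# Probability of an unbroken streak of heads of a given length.
for streak in (10, 20, 30):
    print(f"{streak} heads in a row: {0.5 ** streak:.2e}")

# Our running example: 2 heads, 2 tails, then 30 heads = 32 heads in 34 flips.
print(f"32+ heads out of 34 flips: p = {p_at_least(32, 34):.2e}")
```

That last number comes out around 3.5e-08, far below the .001 three-star threshold, which is why by that point in the story we could call the rigging a fact.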
Here’s another way to think about it. Lavar Ball declares, at Lonzo’s birth, that his son Lonzo Ball is going to play for the Lakers. The chances of someone playing in the NBA are amazingly tiny, and playing for one specific team is tinier still. If Lavar Ball’s claim were tested and came back with a p-value of <.001, it would mean that if Lonzo’s birth were replayed in 1,000 different universes where only chance was at work, he’d land on the Lakers in at most one of them. If he makes it anyway, chance is almost certainly not the explanation.
At that point you can just say it’s destiny that Lonzo is in fact going to play for the Los Angeles Lakers. There is a cosmic connection, or a rigged system. It’s not up to chance.
And the same can be said for our coin.
Economics and p-values are powerful tools. And I’m only scratching the surface. There are R-squared values and t-values and regression analysis to tell you all sorts of fun stuff.
But for the general layperson out there, this is the basis of the power of economics: to give you a general map of what is really going on, and then to test whether the gold you found is really gold, and not fool’s gold.
I want to take you back to maybe 6th grade math? Ratios! A ratio, just so you remember, is, for example, the number of shots made by a player in a basketball game, say 9 for 14, or 9/14 (a nice efficient game).
But ratios really mess people up. Quick: what’s better, 71/331 or 42/199? It’s not easy to solve in your head.
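(The answer: 71/331, but barely; about 21.5% versus 21.1%.) One old trick for comparing fractions without long division is to cross-multiply, shown here in a throwaway snippet of my own, nothing from the paper:

```python
from fractions import Fraction

# To compare a/b with c/d without dividing, cross-multiply:
# a/b > c/d exactly when a*d > c*b (for positive denominators).
print(71 * 199, "vs", 42 * 331)               # 14129 vs 13902
print(Fraction(71, 331) > Fraction(42, 199))  # True
print(f"{71/331:.4f} vs {42/199:.4f}")        # 0.2145 vs 0.2111
```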
A nominee for “best paper name”, “Six of One, Half Dozen of the Other” by Burson and Larrick, set out to examine the weird human behavior that arises when people are confronted with ratios.
The biggest part of a ratio that messes people up is when comparing two equal ratios as they change. It’s a variation of the Subjective Probability issue we’ve talked about previously. People misjudge the value of proportional increases.
Here’s a simple example to illustrate the point Burson and Larrick were trying to make. Would you rather increase your score from 80 points to 100? Or from 4 points to 5? Their hypothesis is that people like the 80-to-100 increase more because, again, gaining 20 points feels better than gaining 1. Of course, the ratios are the same (both are 25% improvements), so it’s an equivalent relative increase in both situations.
In Study 1 Burson and Larrick had subjects evaluate cell phone plans in the first scenario, and a movie plan in the second. Here’s the original tables so you can see how it was all set up:
Let me explain this to you. Start with Condition 1 in Table 1. As (I hope) you can see, Plan A and Plan B are slightly different; one is cheaper, but the other has more value. In Condition 2, the plans are EXACTLY THE SAME. This is very important. The only thing that has changed is the scale of the numbers: one attribute is scaled up by a factor of 10, and the price is given per year instead of per month. Again: they are exactly the same.
The same happens in Table 2 with the movies. Plan A is cheaper, but Plan B has more movies. It’s a reasonable tradeoff. In Condition 2, the ONLY thing that is different is that the number of movies is expressed per year instead of per week.
There should be no preference for one plan over another. Preference for Plan A should be the same whether it’s the price per month or the price per year, right? It’s all the same.
Well, framing is everything. For cell phones, Plan B (the cheaper plan) was preferred 53% to 31% when it had a lower price per year, most likely because the price difference looks bigger ($60 per year instead of $5 per month).
Meanwhile, Plan A (the higher quality plan) was preferred 69% to 23% when the dropped-call numbers were given per 1,000 calls instead of per 100, which makes its advantage look much bigger.
For the movie plan there was the same result. The only variable that changed was number of movies per week vs. per year (the price stayed monthly). People preferred Plan A (the cheaper plan) 57% to 33% when the number of movies was given per week, because the difference between 7 and 9 is small.
But people preferred Plan B (the higher quality plan) 56% to 38% when the number of new movies was given yearly.
The bottom line from Study 1: framing is important, and people will think that bigger numerical differences represent relatively bigger movements, even when the ratio is exactly the same. This is a tried-and-true marketing technique: “For only $3 a day you could have premium life insurance” is used instead of “For only $1,095 a year you could have premium life insurance”.
The other classic example: “Only 4 easy payments of $22.95”.
To sum up in a slightly different way: expanding the scale on which an attribute is expressed magnifies the perceived difference on that attribute, and preference shifts accordingly.
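Here’s the mechanic in miniature, using the study’s own movie numbers (7 versus 9 new movies per week). Expanding the scale never changes the ratio, only the absolute gap, which is what our brains latch onto:

```python
# 7 vs. 9 new movies per week, re-expressed on a yearly scale.
plan_a, plan_b = 7, 9  # new movies per week

for label, scale in [("per week", 1), ("per year", 52)]:
    a, b = plan_a * scale, plan_b * scale
    print(f"{label}: {a} vs {b}  (gap = {b - a}, ratio = {b / a:.2f})")
```

Per week the gap is 2 movies; per year it’s 104. Same ratio (1.29) both times.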
Burson and Larrick didn’t stop there. Study 2 re-examined the issue by asking participants what they would be willing to pay.
Participants were again exposed to different movie plans. They were given what an “average plan” costs and how many new movies it delivers per week, as well as a “target plan” (the plan the researchers actually cared about) that only gave the number of movies. The price was left blank, and subjects were asked to fill in what they would be willing to pay.
For example, in Condition 1, the average plan gave you 9 new movies per week for a price of $12/month. If you were to only get 7 movies per week what would you be willing to pay? The average by the way was $9.20 which feels fairly reasonable.
A quick note. This technique is pretty standard for behavioral economists. What we call the “willingness to pay” is a great way to measure how attractive an option is. If the willingness to pay goes up, then the offer must have become more attractive.
There were four Conditions in Study 2.
Two gave the number of new movies per week (per my example in Condition 1). One had a target plan with fewer new movies per week than the average plan, and one had a target plan with more new movies per week than the average plan.
Condition 3 and 4 were identical to Conditions 1 and 2, except the numbers of new movies were given in years not weeks.
Again. The plans and their costs are identical. The ONLY thing that has changed in Condition 3 and 4 is that the number of new movies is now being expressed on a yearly basis.
The goal was to see if there is a difference in what people are willing to pay. The plans are the same. People should pay the same for the same number of movies, whether they’re given per week or per year. It’s the same number of movies! The price is even the same for goodness sake.
Results?
This graph is a little hard to read. The two dots on the left are the plans given with movies per week (Conditions 1 and 2). The dot at the bottom left, $9.20, is the plan we alluded to before with fewer movies than the average plan (Condition 1). The dot at the top left, $11.55, is the plan with more movies than the average. Obviously people should pay more for the plan with more movies than average, and that’s what they do.
What gets very interesting is when you take the EXACT same plans, and just expand them to the number of movies per year, which is what the dots on the right are. They should be the same price! It’s silly to expect people to pay less, or more, for the same number of movies but that’s exactly what happens.
The average willingness to pay for the lower movie plan drops to $8.83 when expressed in movies per year, and the willingness to pay for the higher movie plan bumps up to $13.82.
These are considerable movements. While I would not assume you can achieve this level of change in your application or organization, the researchers here got about a 4% drop in willingness to pay for the below-average plan when it was expressed annually, and about a 20% increase for the above-average plan when it was expressed annually.
I will quote the paper’s final conclusions:
“Attribute expansion inflated the perceived difference between two alternatives on that attribute, and thereby increased its weight relative to the other attributes.”
So big takeaways:
When you’re comparing your product to the “average” competitor and your product is better than average in a category, make that interval of time as big as possible to maximize the number, and therefore the benefit.
A great example is what student loan companies do. I get letters in the mail from the SoFis of the world that say you could save $40,000 today! That number is huge! Of course, they get it by comparing an early payoff in 10 years against the minimum average payments you’d make on federal student loans over 30. They’ve stretched the window for savings as far as possible to maximize the benefit, and it certainly makes a huge impression.
If you or your customer is comparing your product to the average and your product is worse than average in a category, make that interval of time as small as possible to minimize the number, and therefore the difference of the negative attribute.
If your product is $2880 per year, and your competitor’s product is $2520, don’t use annual prices. Instead say “they” are $7 per day but have no features. Your product is only $8 per day, only one extra dollar, but has this whole list of expanded features!
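Run the numbers from that example and you can see the same trick in reverse: contracting the interval shrinks the scary gap from $360 a year to about a dollar a day (a quick sketch of my own, using the hypothetical prices above):

```python
# Annual prices from the example above, re-expressed per day.
ours, theirs = 2880, 2520

print(f"annual: ${ours} vs ${theirs}  (gap = ${ours - theirs})")
print(f"daily:  ${ours / 365:.2f} vs ${theirs / 365:.2f}  (gap = ${(ours - theirs) / 365:.2f})")
```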
We’ll talk a lot more about segmentation later. But this is another great example of how framing and segmenting work. Give it a try. It’s all about the numbers.
Burson, K. A., Larrick, R. P., & Lynch, J. G. (2009). Six of One, Half Dozen of the Other. Psychological Science, 20(9), 1074-1078. doi:10.1111/j.1467-9280.2009.02394.x
I hope you’ve heard of System 1 and System 2 thinking. It’s an idea popularized by Daniel Kahneman. System 1 is our normal state of brain activity: watching TV, driving, looking at a picture of a sad face. It’s simple, effortless, and our favorite mode to be in. System 2 is heavy thinking, such as solving a tough math problem, or taking the bar exam to be a lawyer (which this author did and passed, so there). It’s hard, uncomfortable, and actually uses up more calories. It’s literally more work.
The idea that there are two different processing systems in the brain is not new. And it’s probably a much better analogy of how the brain works rather than the traditional “the brain is a computer” metaphor that isn’t accurate.
Much like System 1 and System 2, in 1992 Kirkpatrick and Epstein proposed another way of thinking about these networks in their paper “Cognitive-experiential self-theory and subjective probability: Further evidence for two conceptual systems.”
They propose that there are two modes of processing information: an experiential conceptual system and a rational conceptual system. Let me try to simplify this.
The first mode is an experiential conceptual system. Note, this is not experimental, it’s experiential which means observed or perceived. Our experiential system encodes information as “concrete representations” (thanks BEGUIDE 2016). Take this mind journey with me:
Think of a door alone in a long hallway. A single closed door in an empty space.
Through the magic of the brain, you have conjured up an image of a door. You can see its color, how it opens. The space around it. It’s a physical object.
In your mind journey keep thinking about the door, but walk closer. Get so close to the door you can almost smell it. Lean up close to it right before you touch it, and blow softly on it.
I’ll bet your brain made a solid door. Your breath didn’t go through. It’s a real object in your mind.
In the cognitive-experiential self-theory you’ve used your experiential conceptual system to create something observable; it’s an object.
Now instead let’s put you in front of a tricky math problem you have to solve by hand. Say (47*16)/19.
I want you to visualize the answer. What is it? Well, unless you’re an autistic savant, you can’t visualize the answer right away. You can’t “see” the answer the way you could see the door, because you’re using a different system. You have to use the rational conceptual system. You have to remember your math and the strategies for multiplying and doing long division (it works out to 752/19, or about 39.6). It’s a different system. It feels different.
Kirkpatrick and Epstein wanted to see if any weird human brain stuff went on when humans had to switch between the two systems. So here’s the experiment they set up (for you purists, I’m skipping to Experiment 3 in their study):
There were two bowls with red and white jelly beans. One was the Big Bowl that had 100 jelly beans, and one was the Small Bowl with only 10 jelly beans.
They set up a game where if you randomly pick a jelly bean and it’s red, you win some money (like $4); but if it’s white you win nothing.
They then put their subjects into one of four conditions. Condition 1 had (and told subjects) there was a 10% win rate. So that means 10 red jelly beans and 90 white jelly beans in the Big Bowl, and 1 red jelly bean and 9 white jelly beans in the Small Bowl.
The odds are the same; either 10/90 or 1/9.
Condition 2 had (and told subjects) there was a 90% win rate. With 9/1 jelly beans in the Small Bowl, and again 90/10 jelly beans in the Big Bowl.
Again, the odds are the same; either 90/10 or 9/1.
Conditions 3 and 4 were the same as Conditions 1 and 2, except the odds were framed as losing. Condition 3 had a 10% lose rate (so the odds and bowls were the same as Condition 2, 9/1 and 90/10), and Condition 4 had a 90% lose rate (so the odds and bowls were the same as Condition 1, 1/9 and 10/90).
Subjects were then put in front of the Big Bowl and the Small Bowl and could decide which bowl they wanted to bet on. Here’s the important thing to remember: THE ODDS IN THE BOWLS ARE EXACTLY THE SAME. In every condition the odds for the Big Bowl and Small Bowl are identical. It’s just that the Big Bowl has 10x the number of jelly beans.
Statistically it makes NO DIFFERENCE which bowl you bet on. If you gave this problem to a computer (and perhaps this is a great question for my Turing Test, to see if you’re AI or a human), it would bet randomly, or 50/50 on the Big or Small bowls. The odds are the same. You make no more or less money betting on one over the other.
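Don’t trust me? Here’s a quick simulation (my own, not from the paper) of the 10%-win condition. Both bowls pay out at the same rate, just as the rational system insists:

```python
import random

def win_rate(red, white, trials=100_000):
    """Draw one bean at random from the bowl, many times over;
    return the fraction of draws that come up red (a win)."""
    bowl = ["red"] * red + ["white"] * white
    return sum(random.choice(bowl) == "red" for _ in range(trials)) / trials

print("Small bowl (1 red, 9 white): ", win_rate(1, 9))    # ~0.10
print("Big bowl (10 red, 90 white):", win_rate(10, 90))   # ~0.10
```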
So that’s what people did right? Of course not!
When presented with low odds of winning (the 10% win, or 90% lose conditions), about 75% of people chose to bet in the Big Bowl (73.1% for 90% lose and 76.9% for 10% win).
Conversely, when presented with high odds of winning (the 90% win or 10% lose conditions), only about a third chose to bet on the Big Bowl (30.8% for the 10% lose condition, and 36.5% for the 90% win condition).
When presented with low odds of winning, most people wanted to gamble on a Big Bowl with lots of jelly beans, but when presented with high odds of winning, most people wanted to gamble on a Small Bowl with very few jelly beans.
This provides very strong support for the theory that there are two different systems. Rationally we know the odds are the same, but then our experiential system kicks in. I quote from the BEGUIDE 2016: “our experiential system – unlike the rational system – encodes information as concrete representations, and absolute numbers are more concrete than ratios or percentages.”
When we’re faced with a simple ratio-based math problem we use our rational system. But when we are standing in front of bowls of jelly beans it’s not 90%; it’s 9 out of 10. That kicks us into experiential mode.
9 out of 10 is almost a sure win; it’s really concrete. Our brains tell us we want the Small Bowl because there are “fewer” chances to lose: there’s only one loser jelly bean! We only have to avoid one bad bean, while in the Big Bowl we have to avoid 10! Your brain says, “oh, 1 is smaller than 10, that feels better, bet on that”. And this happens even while the rational system tells you they’re the same.
We walk around in non-rational, experiential mode, so people bet the small bowl.
Conversely, when it is only a 1 out of 10 chance of winning, oh man, there’s only one winner jelly bean in the whole Small Bowl. I’d rather have 10 chances of winning, and the big bowl has 10 winner jelly beans, so 10 is more than 1, so let’s bet in the Big Bowl.
Even while the rational system says they’re the same.
People go with their feelings.
Takeaways then. Welp. It’s another nail in the coffin of human rational decision making. If you want people to feel better about making a choice that has small odds of success, they’ll feel better if there are lots of possible winners, even if there are also proportionally just as many chances to lose.
Conversely, if you want people to feel better about making a choice that has high odds of success, minimize the number of losing tickets, even if that means reducing the number of winning tickets. People feel much better when they see numerically only one losing ticket.
Kirkpatrick, L. A., & Epstein, S. (1992). Cognitive-experiential self-theory and subjective probability: Further evidence for two conceptual systems. Journal of Personality and Social Psychology, 63(4), 534-544. doi:10.1037//0022-3514.63.4.534
Let’s assume I’m evil. What I want to do is INDUCE COMPLIANCE. I want people to do what I want.
Well, that might be hard to do. But what if I could get people to comply with a request? That may be simple and effective. Dr. Susan Weinschenk wrote a whole book on how to get people to do stuff, but in this case I just want people to comply with a request I make.
There’s a paper (of course there is) that’s an oldie but a goodie. It’s entitled “Reciprocal concessions procedure for inducing compliance: The door-in-the-face technique,” written by Cialdini et al. in 1975.
Through a series of experiments the researchers tried to induce people to take a specific action. What was the best way to do that?
In the first experiment they asked people if they would work as a voluntary non-paid counselor at the jail, or if they’d volunteer at the zoo. Their goal was to get people to volunteer at the zoo.
Working at the jail was the “extreme request”. If you just walk up to someone and say “heyyy come on down to the local jail and work for free”, you’re going to get a lot of no’s. But hanging out at the zoo? That was the small request.
They had three conditions. The first was called the rejection-moderation condition. After hearing the experimenter make the first extreme request (jail), which was almost always rejected, the experimenter would then say “oh, no worries, there’s this other program” and make the smaller request (zoo).
The second was the exposure control: the experimenter first described both the extreme request (jail) and the small request (zoo), and then asked the subject to do either one.
The third was a small-request-only control, in which, straightforwardly enough, they’d only ask about the zoo.
Results? First, no subject agreed to volunteer at the jail. However, compliance with the smaller request varied dramatically.
As you can see, they DOUBLED their compliance numbers simply by requesting the jail first.
They essentially tricked the participants into being more likely to comply with their request to visit the zoo by using the tactic of rejection-moderation. I quote from the paper:
“Starting with an extreme initial request which is sure to be rejected and then moving to a smaller request significantly increases the probability of a target person’s agreement to the second request.”
Sounds like a simple framing effect right? The jail feels like a large request, so the zoo feels small. But it’s much more than just framing. The authors of the paper argue that it is only when the second favor can be considered to be a concession that compliance is increased.
Next the researchers ran Experiment 2 to test for framing. This time the participant was approached by two experimenters. Sneakily a third then came up talking about an upcoming exam (the research was done on a college campus).
Again, there were three conditions. The first was the rejection-moderation condition. In this condition participants heard the first experimenter ask for the extreme favor, and then ask for the second smaller one; the same as in Experiment 1.
The second condition was the two-requester control. This was the same as the first condition (rejection-moderation) but this time upon refusal of the extreme request, the first experimenter thanked the participant and walked away. The sneaky third experimenter that had come up later then would make the smaller request.
If it really were framing, if just being exposed to the more extreme request framed the participants in a way that made the zoo feel better, then this should have worked as well as the first condition.
The third was the smaller request-only control; the same as in Experiment 1.
Results?
Fascinatingly, when the request was made by a different person, compliance rates were very poor. In order for the “magic” to work, the smaller request must be made by the same person who made the larger (rejected) request.
Again, I quote from the paper:
“Only when the extreme and smaller favors were asked by the same requester was compliance enhanced.”
It wasn’t framing. Exposing the participants to the two different requests had no effect, or even backfired. It is much more about feeling bad about turning someone down, and wanting to give them a concession.
On to the last experiment, Experiment 3. The researchers were looking to disprove that it’s simply persistence that is the cause of the persuasion. In theory, maybe the reason people are breaking down is just the constant asking.
Experiment 3 was set up the same as Experiment 1. The participants were put in three conditions. The first was rejection-moderation, again. After hearing and rejecting an extreme request, the participant then heard the same person make a smaller request. This worked well in Experiment 1.
The second was an equivalent-request control. The participant heard a requester initially ask them to be a chaperone (small request), then ask about the zoo (small request).
If the higher compliance rates were due to pure persistence, aka wearing people down by bugging them, then a high percentage of people would agree to the second small zoo request after being asked to chaperone.
The last condition was the smaller request only control that was the same as before (only asking if people would go to the zoo).
Results?
Asking for a smaller favor first, and then coming back with another small favor, had no effect over the control. It made no difference. It was NOT simple persistence.
It was the rejection followed by a concession that made people feel indebted. Rejection, then concession, is the magic secret. If you want people to comply with your request, you need to have them reject you first and feel a little bad about it. You can then exploit those guilty feelings to ask a smaller favor that they are more likely to accept.
That’s why the researchers call it the reciprocal concession model. Both parties make a concession in reciprocity to each other.
So again. If you’re evil and you want people to COMPLY WITH YOUR REQUEST, follow these steps.
Step one: make a big request. Step two: when the big request is turned down, make the small request you actually want people to take. Importantly, the person who is asking must be the same. I quote from the paper:
“Only when the proposal of the second favor can be considered a concession on the part of the requester is compliance increased.”
That’s how you drive behavior and compliance. You use norms and feelings of “owing” something to another person. Ironically compliance is driven best through empathy and compassion.
Of course, things get interesting when your compliance request is to harm others, or not prevent harm to others. When people think of compliance I think it is inevitable to think towards dystopian futures and the Nazis and standing up for what you believe in. That compassion can drive compliance behavior is interesting. But remember it’s not compassion towards a third party that gets results. It has to be compassion towards whomever is making the request.
Try the steps! See if you get better results and let me know.
A quick caveat about this study. It was done a while ago, probably with only white college students. It is possible that results may vary between societies. Otherwise I bet it works! Now give me $10,000 of work. No? How about $1? You owe me. Paypal guthrie@theteamw.com 😊 thanks.
Cialdini, R. B., et al. (1975). Reciprocal concessions procedure for inducing compliance: The door-in-the-face technique. Journal of Personality and Social Psychology, 31(2), 206-215. doi:10.1037/h0076284
Let’s pretend I’m an evil version of Google that cares nothing about privacy (is this an allegory about the real Alphabet… you be the judge). Anti-Google. And my slogan is “Always Be Evil”.
What I want to do is get customers to disclose all of their private information to me. I want to have access to all of their social media accounts, emails, basically I want them to tell me, or disclose, all sorts of information.
But I also have to do so legally, and there are (pesky) laws that require me to get consent; laws that require the user to authorize me to use their information. So, what can I do? I can use behavioral science!
One behavioral science trick is to limit the number of disclosure events. You’ll get more compliance if you only ask the user once. Multiple decision points are more opportunities for the user to restrict their data.
I want to focus on another strategy, using a paper on this exact topic. In “Sleights of Privacy,” Adjerid, Acquisti, Brandimarte, and Loewenstein try to figure out the effect of privacy notices.
In the first experiment they manipulated changes in privacy notices by increasing or decreasing protections. The idea is that you can change behavior by changing the notices.
People were asked to give up (disclose) various information about themselves.
In the low protection condition people were informed that their responses would be actively linked to their university email accounts. This is more “big brothery” because personal information could be more easily gathered.
In the high protection condition people were told the accounts would not be actively linked to their university email addresses. Not being linked to an email address gives the user more privacy by protecting from the aggregation of personal data.
What they found was a 10% increase in the propensity to disclose other information when participants were given increasing (high) protections. And I quote from the paper:
“Similarly for decreasing protections conditions, we found that participants were, on average, 14% less likely to disclose, with some questions having as high as a 40% reduction in the propensity to disclose.”
This is not a surprising result. People are more likely to speak up if they feel a certain level of anonymity. If you’re trying to get specific information out of someone, make really strong protections to not use or attach that info to other information you don’t care about. That’s a great takeaway. Further, people care about privacy, and people don’t want to disclose all of their personal information.
That’s why, in Experiment 2, the researchers tried to get people to disclose lots of personal information.
Today the game is often that companies are trying to get people to disclose personal information, and people try to resist doing so.
Participants were told they were participating in a research study to create an online social network and were asked to create a profile in a college setting. They would have to disclose lots of personal information about themselves (exactly what Anti-Google would want). All the juicy details.
In the control case, people were taken (online) straight to the disclosure decisions after reading the privacy notice in a regular way.
In the other conditions, people were played with. Instead of going straight to the disclosure decisions, they were presented with one of four different misdirections between the privacy notice and the same profile fields.
For example, the first misdirection was a simple 15 second delay between the privacy notice and the disclosures (author note – 15 seconds is forever when browsing the internet).
What were the results? In the control, the disclosure rate was significantly less when presented with a riskier privacy notice (disclosure rate of about 0.5 for more risky vs. 0.7 for less risky). This was the same result that occurred in Experiment 1.
However, that difference almost completely went away with a slight misdirection, I quote from the study:
“In our second experiment, we found that the downward impact of riskier privacy notices on disclosure can be muted or significantly reduced by a slight misdirection which does not alter the objective risk of disclosure.”
With a little bit of misdirection, the entire effect of people wanting to disclose less disappears! People didn’t care. For the vast majority, privacy disclosures are simply not important enough to be worth the inconvenience of kicking up into System 2 mode to actually think through a decision and follow through on it.
After waiting 15 seconds, they got bored, and just went ahead and filled out the stupid profile to be done with it. The ideas about “oh privacy and what does this mean for my future”… it’s too hard to make a calculated decision on, and it certainly doesn’t affect people in the present, so they don’t make the calculation and they just do what the form asks.
The author’s hunch is that this strategy works well in all sorts of situations. When people complain about, or worry about, an action that affects them in the far future, all that is needed for most of them to put down the pitchfork and become docile sheep is a simple 15-second misdirection. It is so uncomfortable to stay in System 2 thinking mode for 15 seconds that the majority of people would rather jump back into System 1 and face the consequences than sit in System 2 and continue to care strongly.
The other misdirections all worked just as well, like having people make some other decision that was perhaps important but not related to their disclosure risk at all. Think of waving a dog toy in front of a puppy to distract it and you get the idea.
Evil Anti-Googles of the world rejoice! It’s easy to get people to waive their principles. All it takes is a little bit of behavioral science and you’ll be on your way.
Adjerid, I., Acquisti, A., Brandimarte, L., & Loewenstein, G. (2013). Sleights of privacy. Proceedings of the Ninth Symposium on Usable Privacy and Security – SOUPS ’13. doi:10.1145/2501604.2501613
I want to walk you through a rather complicated paper that I think is pretty important; it’s called “Bringing the Frame Into Focus: The Influence of Regulatory Fit on Processing Fluency and Persuasion”. It’s by Lee and Aaker from 2004.
The focus of the paper was the importance of what they call “regulatory fit”. Now this is not a term I would have invented, I personally think it’s clunky and doesn’t actually explain the concept, but I didn’t invent it, so I don’t get to name it.
The person who did invent it was researcher E. Tory Higgins in the late 1990s. Regulatory fit theory examines people’s motivation (what do they want?) and how they go about achieving that goal (how do I get what I want?).
And just like there are liberal and conservative solutions to the same problems, regulatory fit theory says that people “orient” themselves when they solve a problem to either prevention, or promotion.
Unlike politics, people don’t always go with prevention, or always go promotion; it depends on the situation and the problem.
Promotion strategies, also known as “promotion focus” emphasize the pursuit of gains, or at least avoiding non-gains. Promotion focus is based on “aspirations towards ideals.”
Prevention strategies, also known as “prevention focus” try to accomplish the same goal, but from a different mindset. Prevention tries to reduce losses, or pursue non-losses. It often is invoked during the fulfillment of obligations.
Let me give you an example. Let's analyze a road trip from Washington D.C. to Chicago as taken by two different groups. The goal for both is the same: drive from the nation's capital to Chicago.
In one group is a newlywed couple from Sweden taking a holiday in the United States for the summer. In the second group are two people who work for a heating, ventilation, and air conditioning (HVAC) company. They have to make a series of repairs for their commercial clients, and have therefore been sent on this driving route from Washington D.C. to Chicago.
Both groups have the same trip, same stops, same time. So, in theory, their approach might be the same, but if you look at the situation from a regulatory fit theory analysis, you get different results.
The fun Swedish couple are probably using a promotion strategy. They want to have fun! They want to maximize their time on the trip and see as many cool things as possible. They want to take risks and climb mountains and drive the Blue Ridge Parkway (which, as this author can attest, is very cool). They want to see Gettysburg and stay at weird hotels along the way. They have aspirations. They want to maximize gains.
The HVAC repair folks are probably using a prevention strategy. They just want the trip to go smoothly, and their clients to be happy. They don’t want hiccups, they don’t want flat tires, and they don’t want anything bad to happen. They want to minimize losses.
In both cases it’s the same trip, and both times people want the trip to go as well as possible; but they are oriented in different directions.
The same can hold true in a variety of political contexts. Right now, as I type this, immigration in the US is a huge issue. It's a "hot-button issue," as they say. Generally, liberals in the US, in the form of the Democratic Party, orient themselves with a promotion strategy on immigration. They look to maximize gains and talk about the benefits immigration can bring: more small businesses, greater cultural diversity, and higher economic growth for most. (A personal note from this economist writer: immigration is a net positive economically for the United States, but is a negative for some groups, mainly non-college-educated white males.)
Conservatives, mainly in the form of the Republican Party, take a prevention strategy on immigration. They talk in terms of a prevention orientation to reduce loss: reducing drug imports, stopping terrorist threats, reducing job losses, and not overcrowding the social safety net.
The reason the study I mentioned earlier, "Bringing the Frame Into Focus", is important is that it dives into the effects orientation can have on how much a person likes a certain solution. The hypothesis the authors wanted to test was: do people treat solutions that are framed in the direction of their viewpoint more favorably? Does a better problem "fit" (either promotion or prevention) lead to a higher rating of the quality of that solution?
We’ve had a lot of talk recently about the “ideas bubble”. If you’re conservative you only follow conservative people on Twitter, and only get your news from conservative news sources. And if you’re liberal you are only friends with other liberals and only get your news from liberal sources. The effect being that both sides are shouting past each other because there is no sharing of ideas.
Many see this as a problem. I don't want to frame it as positive or negative, but it certainly is a "thing" that exists now. I feel confident in saying that the vast majority of Americans feel more polarized and split into factions, especially politically, than they have in the past.
I think this paper gives a big clue about why this is happening on an individual level. Why it is happening now is a much bigger conversation about trust in social institutions, technology, and a whole host of other topics I won't get into here. But having a good mechanism for why people are so tribal in their solutions is important.
To those who do see this polarization as a problem and want to try and fix it, let me give you this advice. A friend of mine specializes in racial inequality and gave me an interesting thought. We all have unconscious racial biases (check out https://implicit.harvard.edu/implicit/selectatest.html to take a test for yourself and see). She told me that having racial biases is okay on a personal level because we are all a little racist. What's important is that we recognize in what ways we have racial biases, and then work and act to negate those instincts.

The important work you can do to stop racism is not to stop the negative biases that exist, because those are often already imprinted into us by society at a young age. Our brains automatically make "us" and "them" categorizations. Only the passage of time can defeat that, by redefining the "us" as all humans, or at least by not drawing "us" and "them" on the basis of race. Rather, the work you can do in this moment is to understand the racist biases you have, be honest with yourself and with others, and then work to not make decisions based on those feelings. Understand, accept, and account for them. It's sharing that understanding that will actually work towards ending racism, not pretending the feelings don't exist.
In the same vein, if you want to help stop the polarization, it's important to understand, accept, and account for your self-regulatory orientation biases: to understand which way you are facing, and whether the message you inherently "feel" good or bad about is logical, or just a feeling. Only by spreading that understanding, acceptance, and accounting for your orientation bias can the polarization be stopped. The brain will always win…
And that's why framing is so important. We've talked about framing a lot, and this is another example of it at work. The bias in how ideas are presented is fascinating because it is so antithetical to how we perceive ourselves. Number framing, for example, is interesting and unconscious, but it's sort of a mind trick: look at this nifty magic trick I can do to make you act a certain way.
But we take our core beliefs very seriously. The idea that I could manipulate which strategy you think would best enact your core beliefs based solely on how I presented my ideas, how I "framed" them, is scary! And insanely useful to people who work in marketing. Again, it's because of this orientation and fit theory: ideas presented in the same orientation you are in will "feel" like a better fit, and therefore you'll be more receptive to them.
So what did Lee and Aaker find in their research? It’s time to walk through it now.
Their first experiment had small groups of 5-10 people look at ads for Welch's Grape Juice. After the ads, people answered a few questions on a 7-point scale, including their attitudes towards the brand, with 7 being the highest rating and 1 the lowest.
People were split across a 2×2 design. The first split was between a promotion condition and a prevention condition.
The promotion condition had language in the ad such as “growing evidence suggests that diets rich in Vitamin C and Iron lead to higher energy levels,” and other gain maximizing language.
The ad in the prevention condition had language such as “growing evidence suggests that diets rich in antioxidants may reduce the risk of some cancers and heart disease”, and other language to minimize loss (of life due to a heart attack or cancer).
The second split, on top of promotion vs. prevention, was the framing condition: a gain frame versus a loss frame. For example, one tagline read "prevent clogged arteries!" in the gain frame, and "don't miss out on preventing clogged arteries" in the loss frame.
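To keep the 2×2 design straight before we get to the results, here is a minimal sketch of the condition assignment in Python. The participant IDs and the two promotion-cell taglines are invented placeholders for illustration; only the two artery taglines are paraphrased from the paper.

```python
import itertools
import random

# Hypothetical sketch of the 2x2 assignment in Experiment 1.
FOCUS = ["promotion", "prevention"]  # first split
FRAME = ["gain", "loss"]             # second split

TAGLINES = {
    ("prevention", "gain"): "Prevent clogged arteries!",
    ("prevention", "loss"): "Don't miss out on preventing clogged arteries!",
    ("promotion", "gain"): "Get energized!",                        # invented
    ("promotion", "loss"): "Don't miss out on getting energized!",  # invented
}

def assign_conditions(participants):
    """Randomly place each participant into one of the four cells."""
    cells = list(itertools.product(FOCUS, FRAME))
    return {p: random.choice(cells) for p in participants}

for person, cell in assign_conditions(["p1", "p2", "p3", "p4"]).items():
    print(person, cell, "->", TAGLINES[cell])
```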
As you can see, there was a nice split. Those who were prompted with a promotional regulatory focus had a better response when presented with opportunity for gain, and those who were prompted with a prevention regulatory focus had a better response when presented with the reduction of loss.
Both frames could be effective, but which one worked best changed based on the orientation.
Interesting stuff, but there's lots more. Experiments 2 and 3 were similar to Experiment 1, but they added a perception of risk.
This time the ad was about mononucleosis (mono), a relatively common but not fun disease. Exposure risk was manipulated by conveying that one could get mono from either frequent or infrequent behaviors.
People in the “high risk” condition were told that they would be at high risk of getting mono from kissing, any kind of sexual activity, or sharing a toothbrush, razor, water, or soda, etc…
People in the "low risk" condition were told they would be at high risk of getting mono only if they got a tattoo, used needles, had a blood transfusion, or had multiple sexual partners at the same time, etc… (behaviors that are infrequent for most people).
The ads were then framed in either a gain condition or a loss condition. The gain frame ads said things like "enjoy life!" and "know that you are free from mononucleosis." The loss frame said things like "don't miss out on enjoying life", etc…
Results? Appeals about low-risk behaviors are more effective when presented in a gain or promotion frame. Appeals about high-risk behaviors are more effective when presented in a loss or prevention frame.
And this makes sense. When the risk of loss is low, like for the newlywed couple, whose worst outcome is a "meh" vacation, we humans look to maximize gain. It's a great biological adaptation strategy: go take risks and maximize your potential rewards now, while it's safe. We naturally turn to a promotion orientation.
When the risk of loss is high, like for the HVAC repair team, whose worst outcome is that they lose millions of dollars in business, get fired, and have their houses foreclosed on, the great biological adaptation strategy is "be safe". Minimize your losses; just get out alive. We naturally turn to a prevention orientation.
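If it helps, here is the Experiment 2 and 3 result boiled down to a toy decision rule. The function name and structure are this sketch's own; the example taglines are paraphrased from the ads above.

```python
def recommended_frame(perceived_risk):
    """Toy decision rule distilled from Experiments 2 and 3:
    low perceived risk -> gain/promotion frame,
    high perceived risk -> loss/prevention frame."""
    if perceived_risk == "low":
        return "gain frame: 'Enjoy life!'"
    if perceived_risk == "high":
        return "loss frame: 'Don't miss out on enjoying life!'"
    raise ValueError("perceived_risk must be 'low' or 'high'")

print(recommended_frame("low"))   # gain frame: 'Enjoy life!'
print(recommended_frame("high"))  # loss frame: 'Don't miss out on enjoying life!'
```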
This explains so many of our political framings as well. As I said earlier, immigration is a net positive for many groups of Americans. They adopt a promotion orientation.
But for those populations who experience immigration as a much larger threat to their livelihood, their community, and their career opportunities (again, strongest among non-college-educated white males), a prevention orientation takes over. They are worried about losing their jobs to outside competition. They have a much higher fear of loss.
Therefore, messages that are oriented in the same direction that they are already facing will be much stronger.
Donald Trump was so effective with his message because so much of the discourse his supporters were hearing from other candidates was not in the same orientation they were in. They didn’t want to hear all these messages about how great the US economy was doing after the recession, or all the great things other establishment Republicans were going to do once they were elected. They were, and are, in a prevention orientation. They were trying to minimize losses.
President Trump soared in with a prevention message, that he would “make America great again.” That he’d stop drugs and people coming over the borders. That Washington D.C. was a corrupt swamp that needed to stop hurting America. His message was really, really effective. Very few other politicians were aligned in the same regulatory orientation as Trump and it carried him to the White House.
It's the flip side of the wave President Obama rode to the White House in 2008: "Hope and Change." Here was a very upbeat message: that if elected, he could maximize the gains America already had. It was even stronger than his rivals' messages and did especially well with the young people in his base, who were in a promotion, gain-maximizing orientation. This author's bet is that he would not have done nearly as well had the election occurred in 2009, in the heart of the Great Recession, when more people had probably switched to a prevention orientation on many political topics.
There are countless more examples where this applies. But why is it so strong?
The theory is that people have an underlying perception about what message “feels right”. I quote the authors:
“When a message frame is consistent versus inconsistent with the way in which individuals naturally think about issues that involve positive versus negative outcomes, the message becomes easier to process. This ease of processing is subsequently transferred to more favorable attitudes”.
Connor Diemand-Yauman researched this idea: when people feel that information is easy to process, they process it differently (fluency) than when they feel the information is difficult to process (disfluency).
It's a brilliant idea, so I want to make sure you caught it. When a message is in the same orientation you are in, the message literally becomes easier to process. The brain doesn't have to spend time, energy, and resources figuring out why the information doesn't align with what it's already thinking. It all makes perfect sense in the world, and the brain speeds it along. It's familiar. And when things are familiar, they are processed faster, which makes them "feel" better, and more correct.
We've already covered a few studies in which recognition leads to a more positive reception. You process it fast, it feels good, and it fits with your self-story. The regulatory orientation bias is that your brain simply says: okay, cool, that sounds right, I agree with that. And you move on.
You like messages you don’t have to think about. You like messages that fit and make sense to your self-story.
The smart researchers decided to test this theory! Because here we don't simply spout ideas about why the world is the way we think it is… WE BACK IT UP WITH DATA! They wanted to test whether faster processing of a message (which they call "processing fluency") actually occurs when the message is aligned with the receiver's regulatory orientation.
The researchers used the same setup as Experiment 1, with the Welch's grape juice. This time, however, they ran it on a computer, with words flashing on the screen that participants had to write down. It's called a perception test and is pretty common. Because the words only flash briefly (we're talking 50 milliseconds), the idea is that if you process some words faster than others, you'll be able to perceive and write down more of those words. Simple enough, right?
There were lots of random words that flashed, and then 8 target words. Four were promotion focused (enjoy, active, energy, vitamins), and four were prevention focused (disease, arteries, cancer, and clogged).
Remember, the promotion group was told the juice would give them more energy, and the prevention group was told it would reduce the risk of disease.
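Scoring this kind of test is simple enough that a sketch makes it concrete. The two target word lists come from the paper; the example participant responses below are made up.

```python
# Sketch of scoring the perception test: count how many flashed
# target words a participant wrote down, split by focus category.
PROMOTION_TARGETS = {"enjoy", "active", "energy", "vitamins"}
PREVENTION_TARGETS = {"disease", "arteries", "cancer", "clogged"}

def score_responses(written_words):
    """Return per-category counts of correctly perceived target words."""
    words = {w.strip().lower() for w in written_words}
    return {
        "promotion": len(words & PROMOTION_TARGETS),
        "prevention": len(words & PREVENTION_TARGETS),
    }

# A participant primed with the promotion ad might catch more
# promotion words than prevention words:
print(score_responses(["energy", "enjoy", "vitamins", "table"]))
# -> {'promotion': 3, 'prevention': 0}
```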
Results?
You can see that in the promotion group far more promotion-associated words were perceived, and in the prevention group, far more prevention words were perceived. This is clear evidence for the hypothesis: faster processing of a message occurs when the framing is in the same orientation as the person.
The paper sums it up: "In sum, results from Experiments 4A and 4B provide evidence that participants experienced greater processing fluency when message frame was compatible with regulatory focus."
In Experiment 5, they asked how effective the message was. And I'll let the paper's authors sum this experiment up quickly for you (you've already been through so much):
“[I]n high regulatory fit conditions, more support reasons came to mind, and heightened effectiveness was perceived by participants. However, it was the perceived effectiveness that appeared to directly impact brand attitudes, thereby shedding light on the specific nature of the processing fluency mechanism.”
So to tie it all together then:
“Our results demonstrate that enhanced processing fluency associated with higher eagerness and higher vigilance under regulatory fit conditions leads to more favorable attitudes. Thus, the current research shows that processing fluency may contribute to the “feeling right” experience that is transferred to subsequent evaluations of the product.”
What they are saying here is what I've already explained. Processing fluency, a.k.a. the ease with which you process a message oriented the same way your regulatory orientation already is, contributes to the "feeling right" experience. Because it "feels right", you rate that product, or that message, as more favorable.
Obviously this has loads of marketing potential. But it's very important to know which orientation your audience is in, or your message won't land. That's why it's so easy to tell people what they want to hear. Selling Coke to people who already drink Coke is easy, because that population already likes Coke. It's a much harder task to get people who think soda is bad for you to drink Coke.
Okay, so obviously there are huge political implications, and important marketing implications. Let’s sum things up with some takeaways:
People have self-regulatory orientations. On different topics they can have either a promotion orientation, to maximize gains, or a prevention orientation, to minimize losses.
When messages are framed in the same orientation people are in, they are more effective and better received. This is because messages in the same orientation are processed faster, and therefore “feel” better.
If you want to be successful in any sort of voting contest between a few choices, it is best to use a message framed in the same orientation as your target audience. If everyone is facing in the same direction, including your competition, be the loudest voice: either be the most loss-preventing or the most gain-maximizing, to make yourself stand out to a "base".
If at all possible, do both! Prevent losses for one crowd and maximize gains for another.
Lee, A. Y., & Aaker, J. L. (2004). Bringing the Frame Into Focus: The Influence of Regulatory Fit on Processing Fluency and Persuasion. Journal of Personality and Social Psychology, 86(2), 205-218. doi:10.1037/0022-3514.86.2.205
I want to give credit to an old paper that was quite ahead of its time. In 1956, Herbert Simon, in his paper "Rational choice and the structure of the environment", laid out some of the ideas of behavioral economics before the field had really developed.
His name for some of these interesting human behaviors was "satisficing". It's a combination of sufficing (good enough) and satisfying. Behavioral scientists today often use different phrases, like cognitive biases and prospect utility, but you'll still hear economists mention satisficing now and then.
Satisficing is the idea that when people make decisions, they don’t optimize for maximum enjoyment the way an economist would expect. Rather, as the Behavioral Economics Guide of 2016 summarized, people “choose options that meet their most basic decision criteria.”
For example, if you really want fancy Mexican food, and eating fancy Mexican food would give you the most happiness and maximum utility, the traditional economist would predict that you get fancy Mexican food. But, of course, we often don't do that; we do what is satisfying and sufficing: satisficing. So instead of getting fancy Mexican, you go to Chipotle and get a burrito. It's enough.
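If it helps to see the contrast as code, here's a toy sketch. The options, utilities, and "good enough" threshold are all invented for illustration; the point is only the difference between the two decision rules.

```python
# Options in the order they come to mind; utilities are invented.
options = [
    ("Chipotle burrito", 7.0),          # close by, comes to mind first
    ("gas station taquito", 3.0),
    ("fancy Mexican restaurant", 9.5),  # highest utility, but more effort
]

def maximize(options):
    """Classical rational choice: scan everything, pick the highest utility."""
    return max(options, key=lambda pair: pair[1])[0]

def satisfice(options, threshold=6.0):
    """Simon's rule: take the first option that clears the 'good enough' bar."""
    for name, utility in options:
        if utility >= threshold:
            return name
    return None

print(maximize(options))   # fancy Mexican restaurant
print(satisfice(options))  # Chipotle burrito
```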
Later, Tversky and Kahneman would come along and invent prospect theory, a much more solid behavioral economic model and the base on which the modern version of behavioral science is founded. But Herbert Simon gets lots of credit for being really far ahead of his time and putting down a ton of ideas that influenced future thinkers.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129-138. doi:10.1037/h0042769