Episode 19: Subjective Probability (Part 2)

I want to take you back to maybe 6th-grade math. Ratios! A ratio, just so you remember, is something like the number of shots made by a player in a basketball game, say 9 for 14, or 9/14 (a nice, efficient game).

But ratios really mess people up. Quick: which is better, 71/331 or 42/199? It’s not easy to work out in your head.
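If you want to check the quiz, here’s a quick sketch in Python (mine, not from the paper) that just converts each ratio to a decimal so the two can be compared directly:

made_a, attempts_a = 71, 331
made_b, attempts_b = 42, 199

print(made_a / attempts_a)  # ~0.2145
print(made_b / attempts_b)  # ~0.2111, so 71/331 is the (slightly) better ratio

The point isn’t that the math is impossible; it’s that nobody does this in their head, which is exactly where the trouble starts.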

A nominee for “paper with the best name,” “Six of One, Half Dozen of the Other” by Burson and Larrick, set out to examine the weird human behavior that arises when people are confronted with ratios.

What messes people up most is comparing two equivalent ratios expressed at different scales. It’s a variation of the subjective probability issue we’ve talked about previously: people misjudge the value of proportional increases.

Here’s a simple example to illustrate the point Burson and Larrick were making. Would you rather increase your score from 80 points to 100? Or from 4 points to 5? Their hypothesis is that people like the 80-to-100 increase more because, again, gaining 20 points feels better than gaining 1. Of course, the ratios are the same (100/80 = 5/4 = 1.25), so it’s an equivalent 25% relative increase in both situations.

In Study 1, Burson and Larrick had subjects evaluate cell phone plans in the first scenario and movie plans in the second. Here are the original tables so you can see how it was all set up:

Let me explain this to you. Start with Condition 1 in Table 1. As (I hope) you can see, Plan A and Plan B are slightly different: one is cheaper, but the other has more value. In Condition 2, the plans are EXACTLY THE SAME. This is very important. The only thing that has changed is the scale of the ratios: one attribute is scaled up by a factor of 10, and the other is shown as price per year instead of price per month. Again. They are exactly the same.

The same happens in Table 2 with the movies. Plan A is cheaper, but Plan B has more movies. It’s a reasonable tradeoff. In Condition 2, the ONLY thing that is different is that the number of movies is expressed per year instead of per week.

Preferences should not shift between conditions. Preference for Plan A should be the same whether the price is shown per month or per year, right? It’s all the same.

Well, framing is everything. For cell phones, Plan B (the cheaper plan) was preferred 53% to 31% when the price was expressed per year, most likely because the difference in price looks bigger ($60 per year instead of $5 per month).

Meanwhile, Plan A (the higher-quality plan) was preferred 69% to 23% when dropped calls were counted per 1,000 calls instead of per 100, which makes the gap in dropped calls look much bigger.

For the movie plans there was the same result. The only variable that changed was the number of movies per week vs. per year (the price stayed monthly). People preferred Plan A (the cheaper plan) 57% to 33% when the number of movies was given per week, because the difference between 7 and 9 looks small.

But people preferred Plan B (the higher quality plan) 56% to 38% when the number of new movies was given yearly.

The bottom line from Study 1: framing is important, and people will think that bigger numerical differences represent bigger real differences, even when the ratio is exactly the same. This is a tried-and-true marketing technique: “For only $3 a day you could have premium life insurance” is used instead of “For only $1,080 a year you could have premium life insurance.”

The other classic example: “Only 4 easy payments of $22.95”.

To sum up in a slightly different way: when an attribute is expanded onto a bigger scale, its influence on preference increases too.

Burson and Larrick didn’t stop there. Study 2 re-examined the issue by asking participants what they would be willing to pay.

Participants were again exposed to different movie plans. They were told what an “average plan” costs and how many new movies it delivers per week, and were then shown a “target plan” (aka, the researchers’ target) that listed only its number of movies per week. The price field was left blank, and subjects were asked to fill in what they would be willing to pay.

For example, in Condition 1, the average plan gave you 9 new movies per week for a price of $12/month. If you were to get only 7 movies per week, what would you be willing to pay? The average answer, by the way, was $9.20, which feels fairly reasonable.

A quick note. This technique is pretty standard for behavioral economists. What we call the “willingness to pay” is a great way to measure how attractive an option is. If the willingness to pay goes up, then the offer must have become more attractive.

There were four Conditions in Study 2.

Two gave the number of new movies per week (per my example in Condition 1). One had a target plan with fewer new movies per week than the average plan, and one had a target plan with more new movies per week than the average plan.

Conditions 3 and 4 were identical to Conditions 1 and 2, except the numbers of new movies were given per year instead of per week.

Again. The plans and their costs are identical. The ONLY thing that has changed in Conditions 3 and 4 is that the number of new movies is now expressed on a yearly basis.

The goal was to see if there is a difference in what people are willing to pay. The plans are the same. People should pay the same for the same number of movies, whether they’re given per week or per year. It’s the same number of movies! The price is even the same for goodness sake.

Results?

This graph is a little hard to read. The two dots on the left are the plans with movies given per week (Conditions 1 and 2). The dot at the bottom left, $9.20, is the plan we alluded to before, with fewer movies than the average plan (Condition 1). The dot at the top left, $11.55, is the plan with more movies than the average plan. Obviously people should pay more for the plan with more movies than the average, and that’s what they do.

What gets very interesting is when you take the EXACT same plans, and just expand them to the number of movies per year, which is what the dots on the right are. They should be the same price! It’s silly to expect people to pay less, or more, for the same number of movies but that’s exactly what happens.

The average willingness to pay for the lower movie plan drops to $8.83 when expressed in movies per year, and the willingness to pay for the higher movie plan bumps up to $13.82.

These are considerable movements. While I would not assume you can achieve this level of change in your own application or organization, the researchers got roughly a 4% drop in willingness to pay for the lower-movie plan when the count was expressed annually, and roughly a 20% increase for the higher-movie plan.
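If you want to check those percentages yourself, here’s a quick sketch (mine, not the researchers’) that plugs in the average willingness-to-pay numbers reported above:

wtp = {
    "lower-movie plan":  {"per week": 9.20,  "per year": 8.83},
    "higher-movie plan": {"per week": 11.55, "per year": 13.82},
}

for plan, values in wtp.items():
    change = (values["per year"] - values["per week"]) / values["per week"]
    print(f"{plan}: {change:+.1%} when movies are expressed per year")

# lower-movie plan: -4.0%, higher-movie plan: +19.7%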

I will quote the paper’s final conclusions:

“Attribute expansion inflated the perceived difference between two alternatives on that attribute, and thereby increased its weight relative to the other attributes.”

So big takeaways:

When you’re comparing your product to the “average” competitor and your product is better than average in a category, make that interval of time as big as possible to maximize the number, and therefore the benefit.

A great example is what student loan companies do. I get letters in the mail from the SoFis of the world saying you could save $40,000 today! That number is huge! Of course, they get that by comparing an early payoff in 10 years against the minimum payments you’d make on federal student loans over 30 years. They’ve stretched the window for savings as far as possible to maximize the benefit, and it certainly makes a huge impression.

If you or your customer is comparing your product to the average and your product is worse than average in a category, make that interval of time as small as possible to minimize the number, and therefore the difference of the negative attribute.

If your product is $2,880 per year and your competitor’s product is $2,520, don’t use annual prices. Instead, say “they” are $7 per day but have no features, while your product is only $8 per day, just one extra dollar, but has this whole list of expanded features!
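Here, as a small sketch (my arithmetic, not from the paper), is how those same two prices look depending on the time window you quote; the per-day figures round to roughly the $8 and $7 above:

yours, theirs = 2880, 2520  # annual prices in dollars

for label, periods in [("per year", 1), ("per month", 12), ("per day", 365)]:
    gap = (yours - theirs) / periods
    print(f"{label}: you ${yours / periods:,.2f}, them ${theirs / periods:,.2f}, gap ${gap:,.2f}")

Same plans, same $360 annual gap; quoted per day, the difference shrinks to about a dollar.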

We’ll talk a lot more about segmentation later. But this is another great example of how framing and segmenting work. Give it a try. It’s all about the numbers.

Burson, K. A., Larrick, R. P., & Lynch, J. G. (2009). Six of One, Half Dozen of the Other. Psychological Science, 20(9), 1074-1078. doi:10.1111/j.1467-9280.2009.02394.x

Episode 16: How To Induce Compliance


Let’s assume I’m evil.  What I want to do is INDUCE COMPLIANCE. I want people to do what I want.

Well, that might be hard to do. But what if I could just get people to comply with a request? That may be simpler and more effective. Dr. Susan Weinschenk wrote a whole book on how to get people to do stuff, but in this case I just want people to comply with a request I make.

There’s a paper (of course there is) that’s an oldie but a goodie. It’s entitled “Reciprocal concessions procedure for inducing compliance: The door-in-the-face technique,” written by Cialdini et al. in 1975.

Through a series of experiments the researchers tried to induce people to take a specific action. What was the best way to do that?

In the first experiment they asked people if they would work as a voluntary non-paid counselor at the jail, or if they’d volunteer at the zoo. Their goal was to get people to volunteer at the zoo.

Working at the jail was the “extreme request”. If you just walk up to someone and say “heyyy come on down to the local jail and work for free”, you’re going to get a lot of no’s. But hanging out at the zoo? That was the small request.

They had three conditions. The first was called the rejection-moderation condition: the experimenter made the extreme request (jail), which was almost always rejected, and then said “oh, no worries, there’s this other program” and made the smaller request (zoo).

The second condition was the exposure control: the experimenter first described both the extreme request (jail) and the small request (zoo), and then asked the person to do either one.

The third was a small-request-only control in which, straightforwardly enough, they’d only ask about the zoo.

Results? First, no subject agreed to volunteer at the jail. However, compliance with the smaller request varied dramatically.

As you can see, they DOUBLED their compliance numbers simply by making the jail request first.

They essentially tricked the participants into being more likely to comply with their request to visit the zoo by using the tactic of rejection-moderation. I quote from the paper:

“Starting with an extreme initial request which is sure to be rejected and then moving to a smaller request significantly increases the probability of a target person’s agreement to the second request.”

Sounds like a simple framing effect right? The jail feels like a large request, so the zoo feels small. But it’s much more than just framing. The authors of the paper argue that it is only when the second favor can be considered to be a concession that compliance is increased.

Next, the researchers ran Experiment 2 to test for framing. This time the participant was approached by two experimenters; sneakily, a third then came up, talking about an upcoming exam (the research was done on a college campus).

Again, there were three conditions. The first was the rejection-moderation condition. In this condition participants heard the first experimenter ask for the extreme favor, and then ask for the second smaller one; the same as in Experiment 1.

The second condition was the two-requester control. It was the same as the first condition (rejection-moderation), but this time, upon refusal of the extreme request, the first experimenter thanked the participant and walked away. The sneaky third experimenter who had come up later would then make the smaller request.

If it really was framing, if just being exposed to the more extreme request framed the participants in a way that made the zoo feel better, then this should work as well as the first condition.

The third was the smaller request-only control; the same as in Experiment 1.

Results?

Fascinatingly, when the smaller request was made by a different person, compliance rates were very poor. In order for the “magic” to work, the smaller request must be made by the same person who made the larger (rejected) request.

Again, I quote from the paper:

“Only when the extreme and smaller favors were asked by the same requester was compliance enhanced.”

It wasn’t framing. Exposing the participants to the two different requests had no effect, or even backfired. It is much more about feeling bad about turning someone down, and wanting to give them a concession.

On to the last experiment, Experiment 3. The researchers wanted to rule out simple persistence as the cause of the persuasion. In theory, maybe the reason people break down is just the constant asking.

Experiment 3 was set up the same as Experiment 1. The participants were put in three conditions. The first was rejection-moderation, again. After hearing and rejecting an extreme request, the participant then heard the same person make a smaller request. This worked well in Experiment 1.

The second condition was the equivalent-request control: the participant heard a requester first ask them to be a chaperone (a small request), and then ask them to do the zoo trip (another small request).

If the higher compliance rates were due to pure persistence, aka wearing people down by bugging them, then a high percentage of people would agree to the second small zoo request after being asked to chaperone.

The last condition was the smaller request only control that was the same as before (only asking if people would go to the zoo).

Results?

Asking for a smaller favor first and then coming in again had no effect compared with the control. It made no difference. It was NOT simple persistence.

It was the rejection followed by concession that made people feel indebted. Rejection then concession is the magic secret. If you want people to comply with your request, you need to have them reject you and feel bad about it. You can then exploit those guilty feelings by asking a smaller favor that they are more likely to accept.

That’s why the researchers call it the reciprocal concession model. Both parties make a concession in reciprocity to each other.

So again. If you’re evil and you want people to COMPLY WITH YOUR REQUEST, follow these steps.

Step one. Make a big request. Step two, when the big request is turned down, make the small request you actually want people to take. Importantly, the person who is asking must be the same. I quote from the paper:

“Only when the proposal of the second favor can be considered a concession on the part of the requester is compliance increased.”

That’s how you drive behavior and compliance.  You use norms and feelings of “owing” something to another person. Ironically compliance is driven best through empathy and compassion.

Of course, things get interesting when your compliance request is to harm others, or to not prevent harm to others. When people think of compliance, it’s almost inevitable to think of dystopian futures, the Nazis, and standing up for what you believe in. That compassion can drive compliance behavior is interesting. But remember, it’s not compassion towards a third party that gets results. It has to be compassion towards whoever is making the request.

Try the steps! See if you get better results and let me know.

A quick caveat about this study. It was done a while ago, probably with only white college students. It is possible that results may vary between societies. Otherwise, I bet it works! Now give me $10,000 of work. No? How about $1? You owe me. PayPal guthrie@theteamw.com 😊 Thanks.

 

Cialdini, R. B., et al. (1975). Reciprocal concessions procedure for inducing compliance: The door-in-the-face technique. Journal of Personality and Social Psychology, 31(2), 206-215. doi:10.1037/h0076284

Episode 15: How to Make Users Ignore Privacy Warnings


Let’s pretend I’m an evil version of Google that cares nothing about privacy (is this an allegory about the real Alphabet… you be the judge). Anti-Google. And my slogan is “Always Be Evil”.

What I want to do is get customers to disclose all of their private information to me. I want access to all of their social media accounts and emails; basically, I want them to tell me, or disclose, all sorts of information.

But I also have to do so legally, and there are (pesky) laws that require me to get consent; laws that require the user to authorize me to use their information. So, what can I do? I can use behavioral science!

One behavioral science trick is to limit the number of disclosure events. You’ll get more compliance if you only ask the user once. Multiple decision points are more opportunities for the user to restrict their data.

I want to focus on another strategy, using a paper on this exact topic. In “Sleights of Privacy,” Adjerid, Acquisti, Brandimarte, and Loewenstein try to figure out the effect of privacy notices.

In the first experiment they manipulated changes in privacy notices by increasing or decreasing protections. The idea is that you can change behavior by changing the notices.

People were asked to give up (disclose) various information about themselves.

In the low protection condition people were informed that their responses would be actively linked to their university email accounts. This is more “big brothery” because personal information could be more easily gathered.

In the high protection condition, people were told their responses would not be actively linked to their university email addresses. Not being linked to an email address gives the user more privacy by preventing the aggregation of personal data.

What they found was a 10% increase in the propensity to disclose other information when participants were given increasing (high) protections. And I quote from the paper:

“Similarly for decreasing protections conditions, we found that participants were, on average, 14% less likely to disclose, with some questions having as high as a 40% reduction in the propensity to disclose.”

This is not a surprising result. People are more likely to speak up if they feel a certain level of anonymity. If you’re trying to get specific information out of someone, offer really strong protections against using that info or attaching it to the other information you don’t care about. That’s a great takeaway. Further, people care about privacy, and people don’t want to disclose all of their personal information.

That’s why, in Experiment 2, the researchers tried to get people to disclose lots of personal information.

Today the game is often that companies are trying to get people to disclose personal information, and people try to resist doing so.

Participants were told they were participating in a research study to create an online social network and were asked to create a profile in a college setting. They would have to disclose lots of personal information about themselves (exactly what Anti-Google would want). All the juicy details.

In the control case, people were taken (online) straight to the disclosure decisions after reading the privacy notice in a regular way.

In the other conditions, people were played with. Instead of going straight to the disclosure decisions, they were presented with one of four different misdirections after the privacy notice before filling out the same profile fields.

For example, the first misdirection was a simple 15 second delay between the privacy notice and the disclosures (author note – 15 seconds is forever when browsing the internet).

What were the results? In the control, the disclosure rate was significantly lower when people were presented with a riskier privacy notice (a disclosure rate of about 0.5 for more risky vs. 0.7 for less risky). This was the same result that occurred in Experiment 1.

However, that difference almost completely went away with a slight misdirection, I quote from the study:

“In our second experiment, we found that the downward impact of riskier privacy notices on disclosure can be muted or significantly reduced by a slight misdirection which does not alter the objective risk of disclosure.”

With a little bit of misdirection, the entire effect of people wanting to disclose less disappears! People didn’t care. For the vast majority, privacy disclosures are simply not important enough to justify the inconvenience of kicking up into System 2 mode to actually think through and follow through on a decision.

After waiting 15 seconds, they got bored, and just went ahead and filled out the stupid profile to be done with it. The ideas about “oh privacy and what does this mean for my future”… it’s too hard to make a calculated decision on, and it certainly doesn’t affect people in the present, so they don’t make the calculation and they just do what the form asks.

The author’s hunch is that this strategy works well in all sorts of situations. When people complain, or are worried about taking an action that affects them in the far future, all that is needed for most of them to put down the pitchfork and become docile sheep is a simple 15-second misdirection. It is so uncomfortable to stay in System 2 thinking mode for 15 seconds that the majority of people would rather stop caring and face the consequences, jumping back into System 1, than sit in System 2 and continue to care strongly.

The other misdirections all worked just as well, like having them make some other decision that was perhaps important but not related to their disclosure risk at all. Think of waving a dog toy in front of a puppy to distract it and you get the idea.

Evil Anti-Googles of the world rejoice! It’s easy to get people to waive their principles. All it takes is a little bit of behavioral science and you’ll be on your way.

 

Adjerid, I., Acquisti, A., Brandimarte, L., & Loewenstein, G. (2013). Sleights of privacy. Proceedings of the Ninth Symposium on Usable Privacy and Security – SOUPS ’13. doi:10.1145/2501604.2501613

Episode 14: The Big Reason People Are Only Open To Their Own “Group-Think” Ideas: Self-Regulatory Fit and Persuasion


I want to walk you through a rather complicated paper that I think is pretty important; it’s called “Bringing the Frame Into Focus: The Influence of Regulatory Fit on Processing Fluency and Persuasion”.  It’s by Lee and Aaker from 2004.

The focus of the paper was the importance of what they call “regulatory fit”. Now this is not a term I would have invented, I personally think it’s clunky and doesn’t actually explain the concept, but I didn’t invent it, so I don’t get to name it.

The person who did invent it was researcher E. Tory Higgins in the late 1990s. Regulatory fit theory examines people’s motivation (what they want) and how they go about achieving that goal (how do I get what I want?).

And just like there are liberal and conservative solutions to the same problems, regulatory fit theory says that people “orient” themselves toward either prevention or promotion when they solve a problem.

Unlike politics, people don’t always go with prevention or always go with promotion; it depends on the situation and the problem.

Promotion strategies, also known as “promotion focus” emphasize the pursuit of gains, or at least avoiding non-gains. Promotion focus is based on “aspirations towards ideals.”

Prevention strategies, also known as “prevention focus” try to accomplish the same goal, but from a different mindset. Prevention tries to reduce losses, or pursue non-losses. It often is invoked during the fulfillment of obligations.

Let me give you an example. Let’s analyze a road trip from Washington D.C. to Chicago from two different situations. The goal for both is the same, drive from the nation’s capital to Chicago.

In one group is a newlywed couple from Sweden taking a holiday in the United States for the summer. In the second group are two people who work for a Heating and Air Conditioning (HVAC) company. They have to make a series of repairs for their commercial clients, and therefore have been sent on this driving route from Washington D.C. to Chicago.

Both groups have the same trip, same stops, same time. So, in theory, their approach might be the same, but if you look at the situation from a regulatory fit theory analysis, you get different results.

The fun Swedish couple are probably using a promotion strategy. They want to have fun! They want to maximize their time on the trip and see as many cool things as possible. They want to take risks and climb mountains and drive the Blue Ridge Parkway (which, this author can attest, is very cool). They want to see Gettysburg and stay at weird hotels along the way. They have aspirations. They want to maximize gains.

The HVAC repair folks are probably using a prevention strategy. They just want the trip to go smoothly, and their clients to be happy. They don’t want hiccups, they don’t want flat tires, and they don’t want anything bad to happen. They want to minimize losses.

In both cases it’s the same trip, and both times people want the trip to go as well as possible; but they are oriented in different directions.

The same can hold true in a variety of political contexts. Right now, as I type this, immigration in the US is a huge issue. It’s a “hot-button issue” as they say. Generally, liberals in the US in the form of the Democratic party orient themselves in a promotion strategy on immigration. They are looking to maximize gains and talk about the benefits immigration can bring; more small business, greater cultural diversity, and higher economic growth for most (personal note from the economist writer, immigration is a net positive economically for the United States, but is a negative for some groups, mainly non-college educated white males).

Conservatives, mainly in the form of the Republican party, take a prevention strategy on immigration. They talk more in terms of a prevention orientation to reduce loss, such as reducing drug imports, stopping terrorist threats, reducing job losses, and not overcrowding the social safety net.

The reason the study I mentioned earlier, “Bringing the Frame Into Focus,” is important is that it dives into the effect orientation can have on how much a person likes a certain solution. The hypothesis they wanted to test was: do people treat solutions that are framed in the direction of their viewpoint more favorably? Does a better problem “fit” (either promotion or prevention) lead to a higher rating of the quality of that solution?

We’ve had a lot of talk recently about the “ideas bubble”. If you’re conservative you only follow conservative people on Twitter, and only get your news from conservative news sources. And if you’re liberal you are only friends with other liberals and only get your news from liberal sources. The effect being that both sides are shouting past each other because there is no sharing of ideas.

Many see this as a problem. I don’t want to frame it as positive or negative but it certainly is a “thing” that exists now. I feel confident in saying that the vast majority of Americans feel more polarized and split into factions, especially politically, than they have in the past.

I think this paper gives a big clue into why this is happening on an individual level. Why it is happening now is a much bigger conversation about trust in social institutions and technology, and a whole host of other topics I won’t get into now. But to have a good mechanism for why people like to be so tribal in their solutions is important.

To those who do see this polarization as a problem and want to try and fix it, let me give you this advice. A friend of mine specializes in racial inequality and gave me an interesting thought. We all have unconscious racial biases (check out https://implicit.harvard.edu/implicit/selectatest.html to take a test for yourself and see). She told me that having racial biases is okay on a personal level because we all are a little racist.  What’s important is that we recognize in what ways we have racial biases, and then work and act to negate those instincts. The important work that you can do to stop racism is not to stop the negative biases that exist because those are often already imprinted into us through society at a young age. Our brains automatically make “us” and “them” categorizations. Only the passage of time can defeat that by redefining the “us” as all humans, or at least not seeing “us” and “them” on the basis of race. Rather, the work you can do in this moment is understand the racist biases you have, be honest with yourself and with others, and then work to not make decisions based on those feelings. Understand, accept, and account for them. It’s sharing that understanding that will actually work towards ending racism, not pretending that the feelings don’t exist.

In the same vein, if you want to help stop the polarization, it’s important to understand, accept, and account for your self-regulatory orientation biases: to understand which way you are facing, and whether the message you inherently “feel” bad or good about is logical, or just a feeling. Only by spreading that understanding, acceptance, and accounting for your orientation bias can the polarization be stopped. The brain will always win…

And that’s why framing is so important. We’ve talked about framing a lot, and this is another example that works qualitatively. The bias in how ideas are presented is fascinating because it is so antithetical to how we perceive ourselves. When we talk about number framing for example, it’s very interesting, and unconscious, but it’s sort of a mind trick. Look at this nifty magic trick I can do to make you act a certain way.

But we take our core beliefs very seriously. The idea that I could manipulate what strategy you think would best enact your core beliefs based solely on how I presented my ideas, how I “framed” my ideas, is scary! And insanely useful to people out there who work in the marketing field. Again, it’s because of this orientation and fit theory. Ideas presented in the same orientation you are in will “feel” like a better fit, and therefore you’ll be more receptive to them.

So what did Lee and Aaker find in their research? It’s time to walk through it now.

Their first experiment had small groups of 5-10 people presented with ads for Welch’s Grape Juice. After the ads, people answered a few questions on a 7-point scale, including their attitudes towards the brand, with 7 being the highest and 1 the lowest.

People were split across a 2×2 design. The first split assigned either a promotion condition or a prevention condition.

The promotion condition had language in the ad such as “growing evidence suggests that diets rich in Vitamin C and Iron lead to higher energy levels,” and other gain maximizing language.

The ad in the prevention condition had language such as “growing evidence suggests that diets rich in antioxidants may reduce the risk of some cancers and heart disease”, and other language to minimize loss (of life due to a heart attack or cancer).

The second split, in addition to the promotion vs. prevention condition, was the framing condition. People were given a tagline, for example “prevent clogged arteries!” in the gain frame, and “don’t miss out on preventing clogged arteries” in the loss frame.

As you can see, there was a nice split. Those who were prompted with a promotion regulatory focus responded better when presented with an opportunity for gain, and those who were prompted with a prevention regulatory focus responded better when presented with the reduction of loss.

Both methods were effective, but how the message was framed changed based on the orientation.

Interesting stuff, but there’s lots more. Experiments 2 and 3 were similar to Experiment 1, but they added a perception of risk.

This time the ad was about mononucleosis (mono), a relatively common but not fun disease. Exposure risk was manipulated by conveying that one could get mono from either frequent or infrequent behaviors.

People in the “high risk” condition were told that they would be at high risk of getting mono from kissing, any kind of sexual activity, or sharing a toothbrush, razor, water, or soda, etc…

People in the “low risk” condition were told they were at high risk of getting mono only if they got a tattoo, used needles, had a blood transfusion, or had multiple sexual partners at the same time, etc. Because these behaviors are infrequent, the overall perceived risk was low.

The ads were then framed in either a gain condition or a loss condition. The gain frame ads said “enjoy life!” and “know that you are free from mononucleosis.” The loss frame said things like “don’t miss out on enjoying life,” etc.

Results? Appeals that are low in risk are more effective when presented in a gain or promotion frame. Appeals that are high in risk are more effective when presented in a prevention or loss frame.

And this makes sense. When the risk of loss is low, like for the newlywed couple, whose worst outcome is a “meh” vacation, we humans look to maximize gain. It’s a great biological adaptation strategy. Go take risks and maximize your potential rewards now while it’s safe. We naturally turn to a promotion orientation.

When the risk of loss is high, like for the HVAC repair team, whose worst outcome is that they destroy millions of dollars in business, get fired, and lose their house, the great biological adaptation strategy is “be safe.” Minimize your losses; just get out alive. We naturally turn to a prevention orientation.

This explains so many of our political framings as well. As I said earlier, immigration is a net positive for many groups of Americans. They adopt a promotion orientation.

But especially for those populations who experience immigration as a much larger threat to their livelihood, their community, and their career opportunities (again strongest among non-college educated white males), they take a prevention orientation. They are worried about losing their job to outside competition. They have a much higher fear of loss.

Therefore, messages that are oriented in the same direction that they are already facing will be much stronger.

Donald Trump was so effective with his message because so much of the discourse his supporters were hearing from other candidates was not in the same orientation they were in. They didn’t want to hear all these messages about how great the US economy was doing after the recession, or all the great things other establishment Republicans were going to do once they were elected. They were, and are, in a prevention orientation. They were trying to minimize losses.

President Trump soared in with a prevention message, that he would “make America great again.” That he’d stop drugs and people coming over the borders. That Washington D.C. was a corrupt swamp that needed to stop hurting America. His message was really, really effective. Very few other politicians were aligned in the same regulatory orientation as Trump and it carried him to the White House.

It’s the flip side of the wave President Obama rode to the White House in 2008: “Hope and Change.” Here was a very upbeat message, that if elected he could maximize the gains America already had. It was even stronger than his rivals’ messages and did especially well with the young people in his base who were in a promotion, gain-maximizing orientation. This author’s bet is that he would not have done nearly as well had the election occurred in 2009, in the heart of the Great Recession, when more people had probably switched to a prevention orientation on many political topics.

There are countless more examples where this applies. But why is it so strong?

The theory is that people have an underlying perception about what message “feels right”. I quote the authors:

“When a message frame is consistent versus inconsistent with the way in which individuals naturally think about issues that involve positive versus negative outcomes, the message becomes easier to process. This ease of processing is subsequently transferred to more favorable attitudes”.

Connor Diemand-Yauman researched this idea: when people feel that information is easy to process, they process it differently (fluency) than when they feel the information is difficult to process (disfluency).

It’s a brilliant idea so I want to make sure you caught it. When a message is in the same orientation you are in, the message literally becomes easier to process. The brain doesn’t have to spend time and energy and resources figuring out why this information doesn’t align with what I’m already thinking. It all makes perfect sense in the world, and the brain speeds it along. It’s familiar. And when things are familiar, they are processed faster, which makes them “feel” better, and more correct.

We’ve already covered a few studies in which recognition leads to more positive receptions. You process it fast, it feels good, and it fits with your self story. The orientation regulatory bias is that your brain simply says, okay, cool, that sounds right. I agree with that. And you move on.

You like messages you don’t have to think about. You like messages that fit and make sense to your self-story.

The smart researchers decided to test this theory! Because here we don’t simply spout ideas about why the world is the way we think it is… WE BACK IT UP WITH DATA! They wanted to test whether a message really is processed faster (which they call “processing fluency”) when it is aligned with the person’s regulatory orientation.

The researchers used the same setup as Experiment 1, with the Welch’s grape juice. However, this time they did so on a computer, with words that flashed on the screen that they had to write down. It’s called a perception test and is pretty common. Because the words only flash briefly (we’re talking 50 milliseconds), the idea is that if you process some words faster than others, you’ll be able to perceive and write down more of those words. Simple enough right?

There were lots of random words that flashed, and then 8 target words. Four were promotion focused (enjoy, active, energy, vitamins), and four were prevention focused (disease, arteries, cancer, and clogged).

Remember, the promotion group was told the juice would give them more energy, and the prevention group was told it would reduce the risk of disease.

Results?

You can see that in the promotion condition far more promotion-associated words were perceived, and in the prevention condition far more prevention words were perceived. This is clear evidence supporting the hypothesis that a message is processed faster when its framing is in the same orientation as the person.

To quote the paper: “In sum, results from Experiments 4A and 4B provide evidence that participants experienced greater processing fluency when message frame was compatible with regulatory focus.”

In Experiment 5, they asked how effective the message was. And I’ll let the paper’s authors sum this experiment up quickly for you (you’ve already been through so much):

“[I]n high regulatory fit conditions, more support reasons came to mind, and heightened effectiveness was perceived by participants. However, it was the perceived effectiveness that appeared to directly impact brand attitudes, thereby shedding light on the specific nature of the processing fluency mechanism.”

So to tie it all together then:

“Our results demonstrate that enhanced processing fluency associated with higher eagerness and higher vigilance under regulatory fit conditions leads to more favorable attitudes. Thus, the current research shows that processing fluency may contribute to the “feeling right” experience that is transferred to subsequent evaluations of the product.”

What they are saying here is what I’ve already explained: processing fluency, aka the ease of processing a message that is oriented the same way your regulatory orientation already is, contributes to the “feeling right” experience. Because it “feels right,” you rate that product or message as more favorable.

Obviously this has loads of marketing potential. But it’s very important to know which orientation your audience is in, or your message won’t land. That’s why it’s so easy to tell people what they want to hear. Selling Coke to people who already drink Coke is easy because that population already likes Coke. It’s a much harder task to get people who think soda is bad for you to drink Coke.

Okay, so obviously there are huge political implications, and important marketing implications. Let’s sum things up with some takeaways:

People have self-regulatory orientations. On different topics they can either have promotion orientation, to maximize gains, or prevention orientation, to minimize losses.

When messages are framed in the same orientation people are in, they are more effective and better received. This is because messages in the same orientation are processed faster, and therefore “feel” better.

If you want to be successful in any sort of voting contest between a few choices, it is best to use a message framed in the same orientation as your target audience. If everyone, including your competition, is facing the same direction, be the loudest voice: either be the most loss-preventing, or the most gain-maximizing, to make yourself stand out to a “base.”

If at all possible, do both! Be preventing losses to one crowd and maximizing gain to another.

 

Lee, A. Y., & Aaker, J. L. (2004). Bringing the Frame Into Focus: The Influence of Regulatory Fit on Processing Fluency and Persuasion. Journal of Personality and Social Psychology, 86(2), 205-218. doi:10.1037/0022-3514.86.2.205

Episode 11: Status Quo Bias


Hey, here’s a suggestion. Go workout. Right now. Go! Whatever you do, pound iron, run super hard, walk around the block; whatever it is, go get after it for half an hour. Okay bye!

Hey so that was great? Did you do the workout? Right now, did you actually get up and change whatever you were doing and work out?

I’m going to go out on a limb and guess that you didn’t do that! I’m going to guess you just stuck with the status quo. And that’s status quo bias.

In a paper entitled “Status quo bias in decision making” by Samuelson and Zeckhauser, they do an exhaustive summary of a ton of studies. A metric ton of studies. I shall quote the conclusion from their study of studies:

“In choosing among alternatives individuals display a bias toward sticking with the status quo.”

If you want to call this human laziness, if you want to call this human biology to conserve energy, call it what you will. But humans much prefer the current situation to stay as is. We don’t like when odds, circumstances, prospects, or anything else change.

There’s a theory that dopamine is not just our pleasure chemical but rather our seeking chemical: we go seek new adventures, experiences, and pleasure with it, rather than the other way around.

What are some ways you can take advantage of this? Assume change is going to be hard. Anyone who has tried to get a department to switch to a new version of software or a different program knows the pain. Assume that someone needs an impetus to take an action. If you want someone to switch from something to something else, give them a point of action or a trigger so they are forced to reevaluate their decision. On the flip side, if you don’t want people to make a change, don’t rock the boat. Don’t give them an opportunity to make a change. Just keep rolling same old same old.

People will still take action if the status quo becomes too much to handle… But when in doubt people will stick with the status quo.

 

Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1(1), 7-59. doi:10.1007/bf00055564

Episode 10: How Sunk Costs Work

One of the most important motivators in behavioral economics, if not the most important, is the fear of loss. We’ve talked about this extensively and it takes many different shapes. One of my personal favorites is what’s known as a “sunk cost.”

Compared to a lot of behavioral economics terms this is pretty popular and well known, but just in case let me mind-journey for you.

You’re really hungry. It’s a hot Chicago summer day and you’ve had a long day at work.

On your walk home on the corner is a friendly neighborhood torta foodtruck. Oh my god. You love tortas. The thought of savory Mexican jamon (ham), topped with avocado and all the fixings with amazing green chili hot sauce and crema between bread carries you away. The smell hits you and you’re again reminded that there is a god. The cost is $8 for a sandwich. Even though you have leftovers at home that you can eat, you decide to stop and order one. You’re hungry and it’s after work; time to treat yourself (#treatyoself).

You wait in line, pay your $8 cash, and get your perfect torta. Warm juices of salsa and jamon dribble down the sides. Your mouth waters. However, unbeknownst to you, years of freezing ground has pushed up a part of the sidewalk by 3 inches. Your foot catches the ledge and your perfect torta flies out of your hand into a dirty puddle; gone forever.

You turn around and there’s now a line around the block to get another one. What do you do? If you had not just already stood in line for forever you would have been overjoyed to stand in line and pay $8 for a torta. But dejected you droop your head and go home to your leftovers.

Okay! Mind-journey over. What is interesting about that story is that if you wrote a computer program to make decisions for you, its calculation would be different from what happened in real life.

You were hungry and willing to wait in line and pay money for a torta. You lost the torta, so you didn’t eat one and are still just as hungry. Your situation hasn’t changed, and a computer program would say you should get back in line and wait again.

But your lost torta is a sunk cost. You paid for it, and it’s gone. In theory, sunk costs should have no impact on your next decision. It’s sunk, it’s gone. But, of course it has an impact on your decision-making process. It shouldn’t but it does.

It’s partially because we humans have a fear of loss, and partially because we have trouble segmenting time and decisions. We lump things together. So even though the calculation should simply be “would I pay $8 for a torta?”, because we’ve already lost one we can’t help feeling like we’re paying $16 for one torta, even though that previous loss is immaterial to our next decision.
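Here’s a tiny sketch of that logic in code, with hypothetical numbers (the function and the $10 value are made up for illustration): a forward-looking buyer compares the value of the next torta only to its price, while a sunk-cost-prone buyer lumps in the money already lost.

def should_buy(value_to_you, price, money_already_lost=0, ignore_sunk_cost=True):
    # A forward-looking buyer ignores money already lost; a sunk-cost-prone buyer lumps it in.
    effective_price = price if ignore_sunk_cost else price + money_already_lost
    return value_to_you >= effective_price

# You valued the torta at, say, $10 before the spill, so a second one is still worth the $8...
print(should_buy(value_to_you=10, price=8, money_already_lost=8))                           # True
# ...but if the lost $8 gets lumped in, the replacement "feels" like a $16 torta.
print(should_buy(value_to_you=10, price=8, money_already_lost=8, ignore_sunk_cost=False))   # False

Same torta, same price; only the accounting changes.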

This effect creeps up all over the place. We’ll talk about it more later with, for example, the gambler’s fallacy, where gamblers feel that if they’re on a losing streak they should keep gambling because their luck is going to turn around…

Okay so how does this idea manifest itself? When humans make decisions, they can weigh the emotional experience of a sunk cost as value and make what on paper is an “irrational” choice. Or a choice that is against their own self-interest.

There are a lot of great papers about real life experiments demonstrating the effect. I’ll stick with one by Arkes and Blumer from 1985, The Psychology of Sunk Cost. They asked a series of questions. Here’s a slightly modified version of the first experiment:

“You have spent $100 on a ticket for a weekend ski trip to Michigan. Several weeks later you buy a $50 ticket for a weekend ski trip to Wisconsin.

You think you will enjoy the Wisconsin ski trip more than the Michigan ski trip. Suddenly you realized your just-purchased Wisconsin ski trip is the same weekend as the Michigan trip! It’s too late to sell either ticket, and you cannot return either one. You must use one ticket and not the other. Which ski trip will you go on?

$100 ski trip to Michigan OR $50 ski trip to Wisconsin?”

Think about it and write your answer down.

THE ANSWER SHOULD ALWAYS BE WISCONSIN!

It literally says, “you think you will enjoy the Wisconsin ski trip more.” Forget the price or what you purchased. The Wisconsin trip is the better trip! Go on the better vacation! Why would anyone pick a worse vacation?

Of course, 54% of respondents picked the Michigan vacation, and only 46% picked Wisconsin.

People who pick the Michigan trip say it’s because they don’t want to “waste the money.” But that money is already gone. It was spent and is a sunk cost.

Obviously, a fascinating result. And there are other questions that are variations on this theme.

The other interesting result was an experiment the researchers did at the Ohio University Theater. People who bought season tickets were put into one of three conditions. One group paid the normal price, one got a $2 discount (on each $15 ticket), and the third got a $7 discount on each ticket.

Results? The no-discount group used significantly more tickets (4.11) than both the $2 discount group (3.32) and the $7 discount group (3.29); but only for the first half of the season. There was no significant difference in the second half.

Now there are potentially a few factors at play. The first is sunk cost. After purchasing a season pass, those that paid full price felt compelled to go and not “waste money”. Even though it’s a sunk cost. Those who paid less had a smaller sunk cost effect.

The second main factor that may be operating is that people who pay more may give the theater tickets a higher value. The tickets feel “worth” more to those who bought at full price than to those who bought at a reduced price.

Either way, the fact that the effect was time limited and faded after a few months is another interesting twist. The study doesn’t present a causal link, but there are a number of behavioral economics effects that seem to fade over time.

The takeaway here is that the sadness of a sunk cost may fade over time, and eventually fades away to a point where it truly is sunk. Maybe it’s just people’s way of processing grief. I know this author would grieve a lost torta.

So again, your mileage may vary. The effect may be strongest at the moment of “sunk” (think torta on the ground), and then its magnitude fades over time as the person works through acceptance.

What are some real world applications?

Use the fear of sunk costs to stop customers from switching. Try breaking a payment in half or into smaller payments, and make each payment final. For example, tell the customer that they can’t complete the training without finishing the remaining payments. This triggers the fear of loss from the sunk cost: customers will want to keep paying to finish the training.

Conversely, if a customer feels like their payment has been wasted somehow, they will be much less likely to re-up just to get back to their starting position. This can generate a lot of negative emotions, even if it’s not your fault.

For example, let’s say someone gets $800 of car work done and then the next week finds out that they need another $800 of work done on a different part of the car. They’ll be very hesitant to do so because of the sunk cost, even though the two repairs are unrelated. They should pay for both, but the sunk cost and fear of loss will make them wary of spending more.

So that’s sunk costs. I hope you sunk some effort and energy into this topic! You’ll never get it back; it’s sunk.

On a personal note. This can apply to relationships and other personal feelings too. Letting go is hard. It’s a feeling of loss that we can’t get back. It really sucks. But this author’s personal advice is to take a deep breath and remind yourself that a sunk cost is gone. It’s just that; sunk and unsalvageable. You can’t get it back, and feeling bad about it doesn’t help anything. Use some mental scissors. Clip away the old circumstances, make your current situation the only factor that matters, and make your decision from there. It’s the best way to move forward, even if it’s hard to do.

 

Arkes, H. R., & Blumer, C. (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35(1), 124-140. doi:10.1016/0749-5978(85)90049-4

Episode 9: What Are Heuristics?

I have a confession to make. It took me years to understand the concept of heuristics. I don’t know why. I mean, I’m a smart guy, who obviously understands this economic mumbo-jumbo far better than the ordinary person. And heuristics is/are one of the foundational ideas of behavioral economics.

Maybe it’s the name. Too Greek? A lot of behavioral economists who have written books explaining some of these ideas to the masses have done a pretty good job at explaining heuristics. I like the summary behavioraleconomics.com uses. They define a heuristic as a cognitive shortcut, a process in which a person substitutes a difficult question with an easy one (they cite Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93, 1449-1475).

We humans do this intellectually, but also athletically. Let me give you an example by way of a mind-journey.

You’re back in 7th grade playing little league softball. It’s the summer tournament and it’s the first game of the summer season. Mid-afternoon, warm sunshine, pretty grass. Your parents are in the stands, though you will of course completely ignore them all game (you’re cool).

You’ve been at a few practices before the first game and the coach has enough sense to figure out, even at this early stage, that you’re not going to make it to the majors… So out to the outfield you go. That said, you’re not the worst person on the team, so at least they don’t put you in left field (left daydream more-like), they put you in right.

So far the game has gone pretty smoothly. It’s 2-1, your team is up in the middle of the third inning. You’ve already been up to bat, and actually managed to softly dribble a ground ball into the outfield and got on base! Made it over to second but then it was three outs, and you didn’t get to score.

Every little league team has that one kid that actually is good. Just far and away better at softball than other kids. Early puberty I suppose. Good hand-eye coordination. Parents are big into sports. Well they are now up at the plate and you’re a little nervous. So far no one has hit a ball to you. It’s little league and you’re in the outfield. Most runs are scored on errors throwing to first base. But this kid… Could launch one out to you and everyone will be watching. There are already two people on base, first and second, so it’s a big moment in the game.

Nervously you wait. Ball one. Strike one. Ball two. Next pitch is crushed. Right field. A high arcing sky-high hit. Now you’ve backed up a fair way, and no one hits home runs (it’s 7th grade), so it’s going to be up to you to catch it.

If a computer programmer, an engineer, or an economist were faced with this problem of getting your glove to the spot where the ball is projected to land (well, right before it lands), they would do the only thing that makes sense. The moment the ball is hit you can clearly see the flight trajectory. Based on the speed of the ball and the angle it comes off the bat, there is a clear concave flight path. You calculate the flight path, adjust slightly for wind, and determine the exact location where the ball will land. Run to that spot, wait for the ball, and catch it when it gets to your glove. Easy peasy.

But if a human attempts to do that calculation in real time, they always miss the ball. Human perception will misjudge the exact velocity. The arc and weight of the ball will change how it falls, so it won’t be perfectly uniform. Wind and air humidity will influence exactly where it will land. The precision required to calculate where it will land is immense! Further, an outfielder needs to be precise to maybe 5 square inches. Maybe even 5 square centimeters.

It’s a nearly impossible problem for the human brain to solve in the 5 seconds of flight time. So humans don’t solve it. We take a short cut. We use a heuristic.

Right now (do this), put your hand up in front of you as if you were going to catch a pop fly. As long as your glove is “blocking” the ball as it’s in the air, you’re in the right place.

Imagine you see the ball drop below your glove: you’re too far back, and it will fall in front of you. Conversely, imagine the ball climbing well above your glove: you’re too far forward, and it’ll land over your head. The same goes for left and right. So long as you keep the ball at the same “spot” in your field of vision, you’re going to catch it. If it’s not at the correct height, or drifts left or right, you need to run to get it back into position.

Humans don't calculate flight trajectories to figure out where the ball will land. They just make a thousand quick little micro-adjustments to keep the ball at the right angle in the air, and at the last second make a slight adjustment before it reaches the glove for the final placement. It's a much easier calculation.
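
Here's that loop of micro-adjustments as a deliberately crude, one-dimensional Python sketch. The fielder only runs in or out, and the angles and step size are invented for illustration; it's a cartoon of the idea, not a model of human vision:

```python
import math

def gaze_step(fielder_x, ball_x, ball_h, locked_angle, step=0.6):
    """One micro-adjustment of the 'keep the ball at the same spot in view' rule.
    Home plate sits at x = 0; the fielder stands at a larger x, looking back at the ball."""
    dist = fielder_x - ball_x            # horizontal gap between fielder and ball
    angle = math.atan2(ball_h, dist)     # how high the ball sits in the fielder's view
    if angle < locked_angle:
        fielder_x -= step                # ball sinking in view: it will drop short, run in
    elif angle > locked_angle:
        fielder_x += step                # ball climbing in view: it will carry, back up
    return fielder_x                     # repeat every fraction of a second until the catch
```

Run something like that a few times a second and you end up roughly under the ball without ever solving a trajectory.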

So this is what you do. Luckily you don't have to move too far, just run a little in and toward your left. Even with the sun out you can see the ball, and you track it, keeping it at a consistent angle. The "correct" angle, says your brain. With your glove out, you let your reflexes take over at the last instant, moving the glove over ever so slightly to correct the errors left over from your heuristic. Instead of a huge error of maybe 30 meters, you've narrowed it down to a fraction of a meter.

You catch the ball. Overjoyed and excited you can’t help but look over to your parents who both gasp and clap and smile. You try to pretend you don’t see them because, duh, you are still being cool. The crowd claps and the other team groans that you didn’t drop it. But no one really cares except your parents, I mean, you’re in right field, it’s your job to catch balls that come to you. What sort of right fielder would you be if you missed fly balls? But you did it, another day another dollar. Your unconscious brain is trying to get your attention. Something you’re forgetting?

Oh! That's right, we're playing softball, I need to throw it back into the infield! You do, and your throw is horribly off target and short by like 15 feet. But this is softball in 7th grade. No one is stealing bases. The second baseman trots out to grab your pathetic attempt at a fastball and relays it to the pitcher, who also drops the ball. Again. Softball, in 7th grade.

Play resumes and the parents continue to talk about this cool place they found out in the country that makes its own Chardonnay!

Ah yes, little league softball, those were the days…

Snap back to reality. Oh, there goes gravity (as an example). Oh, there goes Guthrie, he overwrote, you're so mad, but he won't give up that easy, no, just gotta lose yourself in the mind-journey, don't you ever let it go. You only got one chance, do not, drop the ball. Use a heuristic! (The author groaned when he saw what he had written, but decided to keep it in because it's so groan-worthy…)

The process of catching a softball is a simple illustration of what a heuristic is and how it works.

Heuristics can be cognitive as well as physical. In fact, perhaps the most important heuristics you will come across are cognitive. Educated guesses, intuitive judgments, guesstimates, profiling, stereotyping, and most other mental shortcuts are all examples of heuristics.

Here’s a quick example:

You are in charge of designing the new website for your small business. Your boss comes to you and asks, "Should the main menu be horizontal or vertical?"

To truly figure out the correct answer would take modeling, and user testing, and analytics and all sorts of tough thinking. But you can simply say horizontal because you’ve seen other websites with horizontal menus and you like them. You’ve used a heuristic to save a lot of time and decide.

There is often a perception that taking a heuristic shortcut is bad, or lazy. But there is research that suggests that you can get better results if you use a heuristic.

I want to talk about the "take-the-best" and the "recognition" heuristics as described in "Heuristic Decision Making" by Gigerenzer and Gaissmaier (2011) and "Models of ecological rationality: The recognition heuristic" by Goldstein and Gigerenzer (2002).

They very carefully outline the model of the take-the-best heuristic.

Here’s how the (very simple) take-the-best heuristic works:

You're forced to pick between two options. Go through the clues (the "cues") you know about, starting with the one that has steered you right most often. The first cue that favors one option over the other, go with that option and stop looking. That's all there is to it.

The reason this works is that the single best cue usually carries most of the useful information; piling on more cues mostly adds noise. Trust that first good signal and move on.

The recognition heuristic is in the same family as take-the-best, but even simpler. When faced with a choice, don't weigh the cues at all: if you recognize one option and not the other, pick the one you recognize.

Most of the time the recognition heuristic will land on the same answer as take-the-best. That's because recognition itself is usually one of the most reliable cues you have: the things you've heard of tend to be the bigger, more common, more important things.
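
If it helps to see the two rules side by side, here's a minimal Python sketch. The cues and cities are invented for illustration; they are not the cues from the actual studies:

```python
def recognition_heuristic(option_a, option_b, recognized):
    """If you recognize exactly one of the two options, pick it; otherwise guess."""
    a_known, b_known = option_a in recognized, option_b in recognized
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return option_a  # recognition doesn't discriminate here: fall back / guess

def take_the_best(option_a, option_b, cues):
    """Walk the cues from most to least reliable and pick the option favored
    by the first cue that tells the two apart. Ignore everything after that."""
    for cue in cues:
        a_val, b_val = cue(option_a), cue(option_b)
        if a_val != b_val:
            return option_a if a_val > b_val else option_b
    return option_a  # nothing discriminates: guess

recognized = {"Berlin", "Munich", "Hamburg"}
print(recognition_heuristic("Munich", "Herne", recognized))   # -> Munich

cues = [
    lambda city: city in recognized,   # made-up cue: "have I even heard of it?"
    lambda city: len(city),            # made-up, nearly useless tie-breaker
]
print(take_the_best("Hamburg", "Herne", cues))                 # first cue decides -> Hamburg
```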

It may seem weird that using these simple heuristics would actually lead you to a right answer more often than if you think about it. But let me tell you very briefly about the research.

In their studies the researchers asked people two kinds of questions: which of two German cities has the larger population, and which of two mammals has the longer lifespan.

[Figure 1: accuracy of different decision strategies on the German cities and mammal lifespans questions]

They then pitted all sorts of different decision tactics against each other. The take-the-best heuristic got the best results, as you can see on the graph in Figure 1 (each line on the graph is a different model; the higher the line, the better the accuracy).

The researchers later looked at questions about German city sizes and American city sizes, and had participants use the recognition heuristic (if you recognize it, choose it).

Here's the crazy part: German participants did better on the American cities than on their own German cities, and Americans did better on the German cities than on American cities!

Sometimes when you go with your gut, it really is the best choice. With more knowledge about the cities in their own country, people overthought the answers and got worse results.

From these results the researchers came up with the very short and simple “fast and frugal” rules you can use to come to better answers, quickly.

First, search for clues, or information that would be useful in making a decision.

Second, stop searching when the costs of further search exceed the benefits. That is to say, stop searching when simple searches fail to provide you with useful information. Excess information is bad; you only want the bare minimum.

And third, make an inference or decision when the search is stopped. Don’t think too hard about it; just make a decision and move on.

Even though that sounds silly and not well thought out, it can often lead to better results than a long-drawn-out process.

We’ll cover more studies later about why heuristics often can create more accurate answers even though they take less thought and effort.

The short answer is that using a heuristic stops your brain from consciously thinking too much. The more you consciously think, the more your biases get in the way. And the more you are misguided by your cognitive biases, the easier it is to come to the wrong result.

If you take the fast and easy solution you skip that whole process.

In conclusion, here are some real-world takeaways:

It’s important to know what a heuristic is and how people think. We use these all the time, but it’s okay! Shortcuts for humans sometimes work best.

Don’t overthink things, it can be less accurate and takes much longer.

Utilize “fast and frugal” heuristic rules when you need to be relatively accurate quickly and en masse.

Let me know if you have seen too much thinking get in the way of the best result.

 

 

Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109(1), 75–90. doi:10.1037//0033-295x.109.1.75

Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic Decision Making. Annual Review of Psychology, 62(1), 451–482. doi:10.1146/annurev-psych-120709-145346

Episode 8: Time Discounting and Time Preference


What is the nature of time? Oh how I love Carl Sagan. But we’re not talking about spacetime, rather, we’re talking about humantime. Or how humans value time.

How humans perceive time is a tricky question, and one that I am not going to answer today. Too much unknown and unexplored psychology and not enough behavioral economics. Maybe how humans perceive time is a more interesting question, I’m not sure.

But what is easier to measure is how much you are willing to be paid to wait. In that way you can put a value on how much it is worth to get something sooner. This will be my first post in a series on time and what economists call time discounting.

Time discounting as an economic concept is pretty simple. I can pay $6 to get an online order to me today, or $5 to receive it tomorrow. There’s a discount if I wait.

In later posts I’m going to dig into some of the specific research that gives some concrete numbers on time discounting. But today I am going to stick with the theoretical.

Just like you can make a choice between things, you can make a choice to give or get something at different times. The technical term for this is "intertemporal choice". In "Time Discounting and Time Preference: A Critical Review", Frederick, Loewenstein and O'Donoghue define intertemporal choices as decisions involving tradeoffs among costs and benefits occurring at different points in time. Choice means choice; intertemporal means between times.

How have economists dealt with this problem in the past? As with much of economics, the answer is: by oversimplifying the situation. Let's start with the idea of discounted utility, because it's very straightforward. Mind journey!

It’s a nice crisp fall day and you know what time of year it is… Pumpkin latte season! Your favorite. As you walk into your neighborhood coffee shop you can smell the strong aroma of nutmeg and pumpkin pie spice drift over you. You check the big board menu for ideas but let’s be honest; you’re getting your first pumpkin latte of the year and you’re really excited about it.

After waiting your turn in line, you give your order to the barista. At checkout you’re asked if you would like your order right now, or tomorrow. “Say what?” you ask. “Well, we’re giving you a choice. We can give you your order now, or you can choose to pick it up tomorrow. It’s the same price, and you have to pay now regardless of your choice”.

Ummm… You’re getting that latte right then and there. It’d be silly to wait until tomorrow! But what if they gave you a discount off your bill, say, $1 to wait until tomorrow. Or $2. At some point they can bribe you enough to wait.

That's the idea behind discounted utility: the latte is worth more to you (in utility) right now than the same latte tomorrow, and the size of the bribe it takes to make you wait is a measure of that discount.

Further, let's assume you're willing to take $1 to wait one day. The assumption in the old economics world is that time preferences are consistent: every day of waiting gets discounted at the same rate. So if you would take $1 to wait one day, you would accept roughly $2 to wait two days, and so on in a smooth, predictable line.
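
If it helps, here's that old-school, constant-rate picture in a few lines of Python. The 8%-per-day rate and the $6 latte value are made-up numbers; the point is only that a single fixed rate makes the required bribe grow in a smooth, predictable way.

```python
def required_bribe(value_now, daily_discount, days):
    """Under a constant-rate view of discounting, the latte's value shrinks by
    the same factor every day you wait; the bribe you demand is the value lost."""
    value_later = value_now * (1 - daily_discount) ** days
    return value_now - value_later

# Made-up numbers: a latte 'worth' $6 to you right now, discounted 8% per day.
for days in (1, 2, 7):
    print(days, round(required_bribe(6.00, 0.08, days), 2))
# -> about $0.48 to wait one day, $0.92 for two, $2.65 for a week:
#    smooth, consistent, and (over short spans) nearly linear.
```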

There are many psychology studies suggesting that this discounted utility model of time is wrong. In "Time Discounting and Time Preference: A Critical Review", Frederick, Loewenstein and O'Donoghue painstakingly go through the academic evidence for the traditional view of discounted utility and conclude that the old model has little empirical support.

"Economics has always been both an art and a science," say the paper's authors. And that is a statement this author also agrees with. Simple discounted utility is far too simple to accurately model human behavior.

This idea is only the tip of the iceberg. There is research showing that dopamine is released in anticipation of a reward, not when the reward is actually received (Fiorillo, Tobler & Schultz, 2003), so sometimes it's more fun, and more addictive, to be forced to wait.

Here’s a video where Sapolsky goes through this research: http://library.fora.tv/2011/02/15/Robert_Sapolsky_Are_Humans_Just_Another_Primate

Humans certainly don't value time linearly, and there are a lot of behavioral science papers about this that I'll be discussing later in this series. Briefly: making me wait one week is going to be expensive, but making me wait 10 weeks is not going to be 10x as expensive. It's at least not linear.

And then there are the other wonderful human traits of scheming, plotting, planning, investing, gratification, and being lazy. There are a lot of competing factors about how we value time.

Frederick, Loewenstein and O'Donoghue encourage economists to abandon the fundamental idea of discounted utility altogether, since it doesn't seem to line up with what is happening in human heads. I quote from the paper:

“In sum, we believe that economists’ understanding of intertemporal choices will progress most rapidly by continuing to import insights from psychology, by relinquishing the assumption that the key to understanding intertemporal choices is finding the right discount rate (or even the right discount function), and by readopting the view that intertemporal choices reflect many distinct considerations and often involve the interplay of several competing motives.”

Sorry for the long, complicated paragraph, but I feel it is a great summary. Use insights from psychology. Stop trying to find a magical discount formula. Accept the messiness and strangeness of human decision making.

That's great, you may say, but if not discounted utility, then what?

There is a competing way to define time discounting in economics, and it's called hyperbolic discounting. Pardon the name. The general idea is that human discounting is not time-consistent. Humans probably don't even perceive time linearly or consistently, much less value it that way. The value we put on waiting is inconsistent and weird, just like humans.

Sure, there might be patterns that behavioral economists can find, but humans just don’t value or devalue things in linear, simple ways. It’s messy.
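
To make "not time-consistent" concrete, here's a small sketch comparing the two curves. The hyperbolic form below (value divided by one plus k times the delay) is a common textbook version, and the k, rates, and dollar amounts are all made up; the point is just the preference reversal that a constant-rate curve never produces.

```python
def exponential_value(amount, delay_days, daily_rate=0.005):
    """Constant-rate (time-consistent) discounting."""
    return amount * (1 - daily_rate) ** delay_days

def hyperbolic_value(amount, delay_days, k=0.05):
    """Hyperbolic discounting: value drops steeply at first, then flattens out."""
    return amount / (1 + k * delay_days)

# Made-up choice: $100 today vs. $120 a week from now...
print(hyperbolic_value(100, 0), round(hyperbolic_value(120, 7), 1))   # 100.0 vs 88.9 -> grab the $100 now
# ...but push the exact same pair a year into the future and the preference flips:
print(round(hyperbolic_value(100, 365), 1), round(hyperbolic_value(120, 372), 1))  # 5.2 vs 6.1 -> now waiting the extra week seems fine
# The constant-rate curve ranks the pair the same way at any horizon (no flip):
print(round(exponential_value(100, 0), 1), round(exponential_value(120, 7), 1))        # 100.0 vs 115.9
print(round(exponential_value(100, 365), 1), round(exponential_value(120, 372), 1))    # 16.0 vs 18.6
```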

Again, I plan on talking about more specifics about how behavioral economists should think about humans and how we value time. But for now I just wanted to cover the big idea of even thinking about how to measure the value of time using economics.

In sum, if you are working on anything that involves time passing, don’t assume that there will be a lot of rational consistency in how people value waiting.

Frederick, S., Loewenstein, G., & O'Donoghue, T. (2002). Time Discounting and Time Preference: A Critical Review. Journal of Economic Literature, 40(2), 351–401. doi:10.1257/002205102320161311

Fiorillo, C. D., Tobler, P. N., & Schultz, W. (2003). Discrete coding of reward probability and uncertainty by dopamine neurons. Science, 299, 1898–1902. doi:10.1126/science.1077349 (dopamine anticipation)

Episode 17: Cooperation and Punishment

Did you go to college? Hopefully a liberal arts college? Even if you didn't, think back to some late night with your buddies, maybe a little weed was smoked. Or to some dopey poli-sci class with the one know-it-all jerk who would always shoot up their hand to deliver some long-winded opinion about society.

Then maybe you have heard of the idea of the social contract. We humans give up some of our freedoms and autonomy to the “state”, or society, in exchange for security. We do this because more things can be done with collective action; there are more benefits to working together than working apart.

But to keep that "social contract" you must play by society's rules. No murder, no postal fraud; whatever the rules are. If you violate those rules, "society", in the form of government, or police, or tribunal elders, or whatever, will punish you to keep you in line.

What does this have to do with behavioral science? When we start thinking about the dynamics of teamwork or working together, then behavioral science gets involved.

And a lot of our interesting social biases show up when we’re trying to do things with other people; especially cooperating. It’s an especially interesting field of research.

The specific topic I want to cover is “crime and punishment”. A great name for a book, and a great idea for a behavioral economics paper. People HATE being a sucker. Let’s go on a mind journey.

You’re a serf in Eastern Europe in the mid 1500’s. You live in a wooden shack in a small rural town with your spouse and three small children. You and about 15 others are woodcutters. You live near a wooded, hilly region so it’s easy to collect small firewood sticks.

With basic hatchets you cut down small trees and chop off small branches. You break those down into yet smaller bits, and smaller sticks yet from those. The sticks are put into carts and pushed by hand up the hill to the governor’s house, who owns the land.

He is in charge of the local region, collects the taxes, maintains order, and generally runs everyone’s life; especially the lives of serfs like yourself.

The governor provides each woodcutter and their family with a livable amount of grain and other food each week, as well as a small amount of money. Sometimes you get a little bit of gamebird, or fresh fruit or cabbage if it's in season. On occasion some butter. Extra supplies like clothing or nails may also be acquired with special permission, but that is rare.

The governor rewards the serfs who provide him with more firewood. Firewood is important: it keeps people's small homes warm through tough winters and provides critical cooking heat. The top choppers get a bit extra here and there, as well as first priority for certain favors.

You and the 15 other firewood choppers, over many years of chopping, have realized that one of the biggest wastes of daylight is stacking the bits of wood into your own cart and then pushing it up to the governor's storage sheds. The push is much faster and easier if all the woodchoppers combine the firewood into larger carts that can be pushed by several people. Less sorting by size, faster moving, and fewer carts, which means more time during the day for actual chopping.

So, you gather all the choppers together, and after talking to everyone, you all decide to work together.

The plan is to combine some of the firewood together, and then once at the governor’s sheds, divide that wood up amongst yourselves. Everyone has a rough quota they have to fill. Once they fill up their quota for the group, then they can continue chopping for themselves. This way those who cut more still get the credit they deserve, but everyone gets more time to chop more wood to get more food.

There's one troublemaker, Ciszko (a real name; I checked historical records from about that time), who recently has been taking extra from the group cart. Every day, when he thinks people aren't looking, he grabs a bunch of wood off the group cart to claim as his own. And he's been getting greedier, going from a stick or two to whole bundles he claims for himself.

A few of the woodcutters have confronted him, and he says he’ll stop, but doesn’t. Each day he takes more and more of the group’s wood. Wood you spent your hard hours chopping. Ciszko is a lazy, slimy dirtbag. You worry that if something isn’t done others will start to steal and your whole group haul will fall apart. He even has had the nerve to ask the guard for extra wool and was given it. Dirty, slimy Ciszko. He lies to your face and steals behind your back. You feel like a sucker. Ciszko needs to be punished. He needs to be taught a lesson to prevent others from stealing from the group as well.

Let’s stop this narrative now and move on before this gets too Medieval (in the narrative in the author’s head Ciszko ends up being threatened with the loss of a hand and ends up a finger short).

There is value in punishment. What Ciszko is doing is known in behavioral economics and political science as "free-riding". Others are doing the work, and he is riding on their backs. People in today's modern society really don't like this. Charity is one thing, but being taken advantage of is another. It triggers anger.

If there’s anything we humans do really well it’s anger and punishment. We’re really good at it. We love to punish. Why? Because it’s easy. It’s really just the inverse of rewards; the easiest and laziest motivator.

It takes very little effort to punish compared to other methods of behavior change. It can “right a wrong”, which satisfies deep emotional feelings we primates have. We are among the few species on earth that go to war or commit genocide. We are tribal, and if someone is undermining the tribe, punishment can be a collective way to restore unity to a group.

What’s fascinating is that people like punishment so much that they will punish free-riders even if it is costly for them to do so. Or to put it another way, people will punish even if it is against their own self-interest.

Fehr and Gächter studied this interesting effect in a paper called “Cooperation and Punishment in Public Goods Experiments.”

They set up a series of games with carefully designed payoff schemes and punishment opportunities to see how a group collectively punishes.

Experiment 1 had two groups. The first was the “Stranger” group, which was played with random people each round, and the second was the “Partner” group, which was played with the same people each round (10 rounds, or periods as the study called them).

In a classic 2×2 condition setup, there were multiple Stranger groups, and multiple Partner groups. The difference between them was that some groups played a game where there were no punishment opportunities, and some played a game that had punishment opportunities.

The rules of the game are simple; only the payoff structure takes some explaining. Each period, each subject in a group gets 20 tokens. They then decide how many tokens to keep and how many to invest in the "project". Everyone makes their decision simultaneously each round (you don't know what anyone else did until the big "reveal").

Money that is put into the “project” is magnified, and then split equally between everyone, even if you don’t pay into it. Therefore, while total payout is maximized if everyone fully cooperates by putting all 20 of their tokens into the project, you can make more if you “free-ride”. In game theory we’d say that full free riding is the “dominant strategy”.

In layman's terms, your best individual outcome is to keep all of your tokens but have everyone ELSE put all of theirs into the project. Then you keep your own coins and still get a big slice of the project payout that everyone else paid into. You're keeping your cake and eating theirs too. It's a classic free-rider game.

The rub is that everyone knows this. You and everyone else think, hmmm… if I put my coins into the project they'll just go to everyone else. No one else is going to put their coins in, so why should I? In this game the dominant strategy, per game theory (the strategy that's best for you no matter what everyone else does), is to keep your coins. Everyone free rides.

But that’s without punishment. And that’s why there is a second decision stage. After everyone keeps or puts in their tokens to the project, and the big reveal happens, subjects are given the opportunity to punish each other by assigning so-called punishment points. This also happens simultaneously, so there’s a big reveal to see who is punishing whom all at once.

If you are given a punishment point, your payout is reduced 10%, all the way down to 0%. So if people don’t like you, they can send you home with nothing (10 punishment points means your payout is reduced 100%, or down to 0).

As a small side note just to show off and look cool, this is the payoff of the game:

[Figure: the payoff function from Fehr and Gächter's experiment]
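
For the code-minded, here's a rough reconstruction of those payoffs. I'm assuming the setup as I read it: groups of four, 20 tokens each, every token in the project paying 0.4 tokens back to each group member, and each punishment point received knocking 10% off first-stage income (handing out points also costs the punisher, which I've left out). Check the paper for the exact parameters:

```python
def first_stage_payoff(my_contribution, all_contributions, endowment=20, mpcr=0.4):
    """What I kept, plus my per-person share of the multiplied project.
    mpcr = 'marginal per capita return' per token contributed (assumed 0.4)."""
    return endowment - my_contribution + mpcr * sum(all_contributions)

def after_punishment(first_stage_income, points_received):
    """Each punishment point received cuts first-stage income by 10%, down to zero."""
    return first_stage_income * max(0.0, 1 - 0.10 * points_received)

# Four players: I free-ride with 0 tokens while the other three contribute 20 each.
contributions = [0, 20, 20, 20]
free_rider = first_stage_payoff(0, contributions)    # 20 + 0.4 * 60 = 44 tokens
cooperator = first_stage_payoff(20, contributions)   # 0 + 0.4 * 60 = 24 tokens
print(free_rider, cooperator)
print(after_punishment(free_rider, 5))               # five punishment points: 44 -> 22
```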

So what happened? To measure cooperation the researchers used the median and average contribution to the project each period. Median again is like average, but instead of adding together and dividing by the number of things, you just put each result in a line and pick the middle-est number. That’s the median.

Let’s start with the “Stranger” groups where each round had different people. I quote from the paper:

“The existence of punishment opportunities causes a large rise in the average contribution level in the Stranger-treatment.”

[Figure: average contributions per period in the Stranger treatment, with and without punishment]

As you can see, when there was punishment, many more people cooperated by contributing their tokens to the project. In the groups without punishment, the average contribution starts out decent, around 8 tokens, but then falls off to around 2. There are still some jolly, goodhearted people who just want to work together, but by the end of the game most have figured out the dominant strategy, which is to be selfish and keep all of your tokens.

Meanwhile, in the punishment rounds everyone figures it out pretty fast. Pay your tokens into the project, or you’re probably going to get punished. Sure, someone will try and be cute every round or so and try to grab some here and there and get away with it, but most cooperate.

Let’s look at the Partner groups’ graph.

[Figure: average contributions per period in the Partner treatment, with and without punishment]

Unsurprisingly, the effect is even stronger because you play with these people multiple times. You know who the troublemakers are, and the group can quickly come together and act to punish because of the bonds of trust built from working together in the past.

Whereas the Stranger rounds with punishment topped out at an average of about 14 tokens contributed, the Partner rounds averaged over 19 tokens per person, almost 20, which is essentially complete cooperation.

That’s an interesting insight. But the really fun stuff is when the researchers looked at when and how people decided to punish. It’s probably not something you would have thought about or mapped out. Most people would dole out punishment when it felt right. So, when does it feel right? What do people feel is just?

The magic number it turns out was not how much someone gave to the project. The magic number was how much someone gave relative to the average contribution of other group members.

The researchers looked specifically at how far below the group's average contribution each person was in each round, and how many punishment points they received. Those who tried to freeload 2-8 tokens less than the average received on average 3 punishment points, with the Partner group slightly higher than the Stranger group.

For those who tried to freeload between 8-14 tokens less than the average tokens contributed, those people were punished with about 5 punishment points (again with the Partner group being slightly higher). And for those who freeloaded between 14 and 20 points less than the average (the most anti-society), they were hit with the same average 5 punishment points in the Stranger group, but walloped with an average of 7 punishment points in the Partner group.

Remember, for each punishment point you get, you lose 10% of your tokens, so getting 4 punishment points is twice the punishment as getting 2 points.

All sorts of interesting gems can be learned from this.

When it comes to strangers, not playing along is bad, and we will punish strangers, but at a certain point there is a leveling off. So takeaway, if you’re going to freeload, or steal from strangers, freeload either small enough to get away with it, or big enough for the punishment not to matter.

A possible real-world example could be an international corporation in a new country using unseemly business practices to drill for a bunch of oil while ignoring some local laws. This study would suggest that if that is indeed your position, either do small stuff and keep it out of the public eye to get away with it, or do it huge, get all of the resources out, and accept the punishment. The punishment will be moderate whether you transgress moderately or severely.

However, if you are freeloading, or stealing, from people you know, aka, part of your community, the harshness of the punishment knows no limit.

The worst punishments are reserved for those who know the societal rules and ignore them. Perhaps strangers are given the benefit of the doubt that they are ignorant of the societal rules, and therefore are not punished as harshly in extreme circumstances. Perhaps, when it comes to strangers, there is a natural inclination to not burn bridges. We ought to punish this stranger so he or she understands our societal rules, but not so severely as to completely turn them against us. Perhaps the intuition goes, if we are moderate with a stranger, they will learn and assimilate into our cultural norms.

Maybe that’s how societies and cultures grow and flourish; through the moderate punishment of strangers.

Perhaps we assume strangers are out to get us (stranger danger!), so when they act wrongly there is no surprise, and therefore no shock, and therefore moderate punishment. But when a “friend” (someone within the social circle) breaks those societal rules it is a surprise, and therefore a shock, and feels worse because of the framing. And that leads to harsher punishments.

I quote from the paper:

“It is interesting that in the Partner-treatment it is only the negative deviation that affects punishment levels systematically, whereas the level of the others’ average contribution has no significant impact… [this] suggests that only deviations from the average were punished. This may be taken as evidence that in the Partner-treatment subjects quickly established a common group standard that did not change over time.”

Next takeaway, and I quote the paper: “The more an individual negatively deviates from the contributions of the other group members, the heavier the punishment.” So when you are in a group, or making a decision as an organization that's part of a bigger group, look to everyone else. If you don't want to stand out, figure out what everyone thinks the average is, and then stick to that.

It doesn’t actually matter what the real number is, the only thing that matters to avoid punishment is what the mini-society thinks is the real number.

For example, let's take tech companies' privacy policies. If a majority of Americans believe that large tech companies have few or no protections for consumer privacy, that's the societal standard; even if, in fact, most large tech companies do provide many protections for users' privacy.

Behavioral economics theory would suggest that if you’re a new company looking to maximize profit you should have little to no consumer privacy policies to make more money. The group members (the public) do not see you as deviating from the average and you will not be punished.

Now you might lose business to other companies, but that’s because privacy is part of the product value. It’s an economic argument over value, not a punishment risk.

Here’s another interesting takeaway, and it’s about consistency. The Stranger groups did not contribute to the project at high rates. Therefore, when punishment was doled out the overall income of all the players combined went down. At least in the Partner group the overall income could go up because punishment of freeloaders leads to increased project contributions, and therefore overall higher incomes.

But if one punishment opportunity is missed, and people feel they can “get away with it”, everyone runs to their “own interest” corners and the cooperation breaks down. To achieve maximum social good, it requires consistent and reliable punishment 100% of the time.

There are very good arguments to be made that the criminal justice system is often rather inefficient at stopping crime  because of the inconsistency of the punishment. Cocaine use is illegal and heavily punished by the penal codes, but only a tiny fraction of people using cocaine are ever actually punished by society for their use (they don’t get caught). And when they are caught the punishments are often so harsh they can turn members of the group against the punishment.

Conversely, professional sports strongly relies on the consistency of punishments. Players know exactly how much they will be punished (ideally) when they transgress, and they know the punishment will be immediate.

If you want to stop goaltending, call it every time and award a basket to the other team on a shot attempt. The action almost immediately stops. Meanwhile travelling in the NBA is called very sporadically, and players often commit small travels without consistent punishment. The result? Lots of players travel, even though the punishment is about on par with a goaltend (I would imagine both are worth about on average 1.1 points, the value of the average possession in the NBA).

And one last take-away. If you want to destroy a society, from a parent-teacher organization, to the Galactic Senate, and completely collapse it from within, all you have to do is figure out how to make punishments for breaking the social norms inconsistent. As soon as that happens everyone will run to their own best interest corners, and the society will lose its economic collective advantage and disintegrate.

The best and most famous example in history is perhaps the appeasement strategy leading up to World War II. After World War I the League of Nations had been formed, and with it a society of nations to collectively punish rogue states that broke the norms of the world. It worked for two decades, but as soon as it was tested (mainly by Hitler, with the annexation of Austria and further expansions) and the violations were not punished consistently, the actors who wanted to break those norms did so (Japan in the Pacific, Italy, the USSR, etc…), and the League of Nations collapsed. It was replaced by a new society (the Allies), and later by the UN. But the strategy of deterrence, consistent punishment whenever norms are broken, has remained one of the most effective strategies in political science.

Let me know if any of these many lessons from this study have made it to your society, and if a change from you helped stop freeloaders.

Remember, if you want to create a culture of trust and cooperation, the group needs punishment to form collective action.

 

Fehr, E., & Gächter, S. (2000). Cooperation and Punishment in Public Goods Experiments. American Economic Review, 90(4), 980–994. doi:10.1257/aer.90.4.980

Episode 7: How Using the Ultimatum Bargaining Game Proves That Cultures of Trust Require Public Retaliation (NOT Altruism)

Game theory. Or should I sayyy LAME THEORY. Ayyyyyyy….

This post is about one small game, the ultimatum bargaining game, that’s useful in explaining the tools behavioral scientists can use to measure the reactions of other humans.

Did you ever watch the (now old) movie A Beautiful Mind? It’s about a mathematician named John Nash who developed the now famous Nash Equilibrium. That’s the beginning of the field of game theory. And game theory can be quite useful, as I said earlier, as a tool to measure how humans rate and react to choices.

I’m not going to actually tell you anything about game theory because it’s complicated and hard and there are 100 other posts and videos on Youtube that would do a much better job than I could. I just want you to be familiar with what it is and understand some of the simple games that are commonly used.

Okay, now that I've sufficiently buried the lead… The Ultimatum Bargaining Game! Güth, Schmittberger, and Schwarze published a paper in 1982 titled "An experimental analysis of ultimatum bargaining". Now I'm not sure if they invented the ultimatum bargaining game, but they certainly get the credit for popularizing it. It goes like this:

There are two players and some money. One person has all the money and makes an offer to the other person.

If the other person accepts the offer, the money is split as proposed; if they reject the offer, both people get nothing.

For example. We start the game and I have $30. I offer you a split where I keep $20, but you get $10. You’re not super happy about it but hey $10 is better than nothing, so you accept and we both get paid.

Next time I have $30, but I offer a split where I keep $29 and you get $1. "Screw you!" you say. I'm such a jerk. You reject the offer out of spite and no one gets anything. Obviously, you can see the interesting behavioral economics twist.
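
Here's the whole game in a few lines of Python, just to make the payoffs concrete. The "spiteful" responder and its 20% threshold are invented for illustration; real rejection thresholds vary from person to person:

```python
def ultimatum(pot, proposer_keeps, responder_accepts):
    """One round: the proposer keeps part of the pot and offers the rest.
    If the responder rejects the offer, both players walk away with nothing."""
    offer = pot - proposer_keeps
    if responder_accepts(offer, pot):
        return proposer_keeps, offer
    return 0, 0

# The textbook 'rational' responder accepts anything positive...
always_accept = lambda offer, pot: offer > 0
# ...while a more human responder rejects splits that feel insulting.
spiteful = lambda offer, pot: offer >= 0.2 * pot   # made-up 20% threshold

print(ultimatum(30, 20, always_accept))   # (20, 10): both get paid
print(ultimatum(30, 29, spiteful))        # (0, 0): the $1 offer gets rejected out of spite
```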

Classical economic theory would say that the second person always accepts, because any amount of money, be it $5 or $1 or whatever, is more than nothing. The rational person (“actor”) always takes more over less.

And, of course, in the real world why this game is so brilliant is that it doesn’t happen that way.

People reject offers out of spite, especially when multiple rounds are played and there's a history with someone. This is a classic case of people making choices against their own self-interest! If I told you that you could make $1 just by accepting a dollar: wow, sounds too good to be true. But if I tell you that someone is splitting $100 and offering you only $1… not so much. It's fascinating stuff.

I want to tell you about another paper entitled “Trust, Reciprocity, and Social History” by Berg, Dickhaut, and McCabe. They ran an experiment using a derivative of the Ultimatum game. Subjects in room A and room B are each given $10.

In room B, they pocket their money. In room A, they must decide how much to send to their (anonymous) counterpart in room B. Whatever amount A sends to B is tripled.

B then gets to choose how much money to return.

This second half of the game is a dictator game, because the room B person doesn’t have to give any money back to the other person in room A.

The "rational" strategy for A is to send nothing, because there is no guarantee of getting anything back; a purely self-interested B would keep it all. It's an experiment in trust: A has to decide whether to trust that B will return something.
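
Here's a minimal sketch of the payoffs, using the setup described above ($10 endowments, whatever A sends gets tripled); the return fractions are made up:

```python
def trust_game(sent_by_a, return_fraction, endowment=10, multiplier=3):
    """Berg-style investment game: whatever A sends is tripled on its way to B,
    and B then chooses what fraction of the tripled amount to send back."""
    tripled = sent_by_a * multiplier
    returned = tripled * return_fraction
    payoff_a = endowment - sent_by_a + returned
    payoff_b = endowment + tripled - returned
    return payoff_a, payoff_b

print(trust_game(0, 0.0))    # A trusts no one: (10, 10), the textbook prediction
print(trust_game(10, 0.5))   # A sends it all, B splits the tripled pot: (15, 25)
print(trust_game(10, 0.0))   # A trusts, B keeps everything: (0, 40)
```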

In 55 out of 60 times running this experiment, A sent money to B. And I quote from the paper:

“In conclusion, experiments on ultimatum game, repeated prisoners’ dilemma games, and other extensive form games provide strong evidence that people do punish inappropriate behavior even though this is personally costly.”

I’ll talk more about punishment later. Never underestimate the power of humans to make decisions not in their best interest, out of spite, and also give to others not out of kindness, or altruism, but out of fear of spite.

One theory of why 55 out of 60 people sent money, even when they may have been better off not giving, was altruism. Altruism is the idea that humans do purely good things because we enjoy helping other people.

However, in a follow-up study in 2012, a different group re-investigated the game in "Does the trust game measure trust?" by Brülhart and Usunier. They found that none of their altruism measures were statistically significant, and I quote from the paper:

“In sum, our results suggest that altruism is not a statistically significant motivating force in determining “trust-like” behavior, both across all subjects and for specific groups of players.”

Trust was not formed through kindness; rather, it was formed from fear of retribution. Altruism had nothing to do with trust in their study.

How does this apply to the real world? Well, when people are anonymous weird stuff happens. People aren’t altruistic most of the time, especially when they can directly benefit by keeping money to themselves.

How then do you change behavior? How do you encourage altruistic behavior? Maybe you have a cause that you’d like to promote, or you are trying to create change somewhere.

If you want to create a culture of trust and sharing you must easily allow for public shaming and retaliation. Even if that retaliation ends up being a loss for everyone. People will hit the button that says “Well, if you won’t be nice to me, I won’t help you either even if it hurts me.”

Retaliation does not have to be in money. It could be in PR loss, or some other type. But it is critical that you create an environment that says clearly that these are the rules “we” the members of the community have agreed to. If you violate these rules the community, together, will punish you.

If the rest of the community does not band together to collectively punish the selfish, the selfish act will almost always win. And in systems and markets with especially greedy or immoral behavior, you often see exactly that: the community does not take action against a bad actor to enforce community standards.

Economists can learn a lot about the process of human decision making through games. I wanted to introduce a few interesting games where the Nash equilibrium may indicate a different result than what we see in the real world.

I love games and have always found various setups like this exciting and fun. We’ll explore more fun games like the Ultimatum game in the future because it is so useful at eliciting human behavior.

 

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, Reciprocity, and Social History. Games and Economic Behavior, 10(1), 122–142. doi:10.1006/game.1995.1027

Brülhart, M., & Usunier, J. (2012). Does the trust game measure trust? Economics Letters, 115(1), 20–23. doi:10.1016/j.econlet.2011.11.039

Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization, 3(4), 367–388. doi:10.1016/0167-2681(82)90011-7