How do you build a culture of trust?


How’s the trust quotient where you work? Or in the country where you live? How do you build a culture where people trust each other?

We talk about the research on cooperation, punishment and trust in this episode of Human Tech.

For more details on the topic after you listen to the podcast, you may want to check out the blog post and video post on the same topic.


Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

Episode 17: Cooperation and Punishment

Did you go to college? Hopefully a liberal arts college? Even if you didn’t, think back to some late night with your buddies, maybe a little weed was smoked. Or to some dopey poli-sci class with the one know-it-all jerk who would always shoot up their hand to deliver some long-winded opinion about society.

Then maybe you have heard of the idea of the social contract. We humans give up some of our freedoms and autonomy to the “state”, or society, in exchange for security. We do this because more things can be done with collective action; there are more benefits to working together than working apart.

But that “social contract” only holds if you play by society’s rules. No murder, no postal fraud; whatever the rules are. If you violate those rules, “society”, in the form of government, or police, or tribal elders, or whatever, will punish you to keep you in line.

What does this have to do with behavioral science? When we start thinking about the dynamics of teamwork or working together, then behavioral science gets involved.

And a lot of our interesting social biases show up when we’re trying to do things with other people, especially when cooperating. It’s a fascinating field of research.

The specific topic I want to cover is “crime and punishment”. A great name for a book, and a great idea for a behavioral economics paper. People HATE being a sucker. Let’s go on a mind journey.

You’re a serf in Eastern Europe in the mid-1500’s. You live in a wooden shack in a small rural town with your spouse and three small children. You and about 15 others are woodcutters. You live near a wooded, hilly region, so it’s easy to collect small firewood sticks.

With basic hatchets you cut down small trees and chop off the small branches. You break those down into yet smaller bits, and smaller sticks from those. The sticks are put into carts and pushed by hand up the hill to the house of the governor, who owns the land.

He is in charge of the local region, collects the taxes, maintains order, and generally runs everyone’s life; especially the lives of serfs like yourself.

The governor provides each woodcutter and their family with a livable amount of grain and other food each week, as well as a small amount of money. Sometimes you get a little bit of gamebird, or fresh fruit or cabbage if it’s in season. On occasion, some butter. Extra supplies like clothing or nails may also be acquired with special permission, but that is rare.

The governor rewards the serfs who provide him with more firewood. Firewood is important: it keeps people’s small homes warm in tough winters and provides critical cooking heat. The top choppers get a bit extra here and there, as well as first priority for certain favors.

You and the 15 other firewood choppers, over many years of chopping, have realized that one of the biggest wastes of daylight is stacking the bits of wood into your cart and then pushing the carts up to the governor’s storage sheds. The push can be made much faster and easier if all the woodchoppers combine their firewood into larger carts that can be pushed by multiple people. There is less sorting by size, faster moving, and fewer carts, which means more time during the day for actual chopping.

So, you gather all the choppers together, and after talking to everyone, you all decide to work together.

The plan is to combine some of the firewood together, and then once at the governor’s sheds, divide that wood up amongst yourselves. Everyone has a rough quota they have to fill. Once they fill up their quota for the group, then they can continue chopping for themselves. This way those who cut more still get the credit they deserve, but everyone gets more time to chop more wood to get more food.

There’s one troublemaker, Ciszko (a real name; I checked historical records from around that time), who recently has been taking extra from the group cart. Every day, when he thinks people aren’t looking, he grabs a bunch of wood off the group cart to claim as his own. And he’s been getting greedier, going from a stick or two to whole bundles claimed for himself.

A few of the woodcutters have confronted him, and he says he’ll stop, but doesn’t. Each day he takes more and more of the group’s wood. Wood you spent your hard hours chopping. Ciszko is a lazy, slimy dirtbag. You worry that if something isn’t done, others will start to steal and your whole group haul will fall apart. He has even had the nerve to ask the guard for extra wool, and was given it. Dirty, slimy Ciszko. He lies to your face and steals behind your back. You feel like a sucker. Ciszko needs to be punished. He needs to be taught a lesson to prevent others from stealing from the group as well.

Let’s stop this narrative now and move on before this gets too Medieval (in the narrative in the author’s head Ciszko ends up being threatened with the loss of a hand and ends up a finger short).

There is value in punishment. What Ciszko is doing is known in behavioral economics and political science as “free-riding”. Others are doing the work, and he is riding on the backs of their labor. People in modern society really don’t like this. Charity is one thing, but being taken advantage of is another. It triggers anger.

If there’s anything we humans do really well it’s anger and punishment. We’re really good at it. We love to punish. Why? Because it’s easy. It’s really just the inverse of rewards; the easiest and laziest motivator.

It takes very little effort to punish compared to other methods of behavior change. It can “right a wrong”, which satisfies deep emotional feelings we primates have. We are among the few species on earth that go to war or commit genocide. We are tribal, and if someone is undermining the tribe, punishment can be a collective way to restore unity to a group.

What’s fascinating is that people like punishment so much that they will punish free-riders even if it is costly for them to do so. Or to put it another way, people will punish even if it is against their own self-interest.

Fehr and Gächter studied this interesting effect in a paper called “Cooperation and Punishment in Public Goods Experiments.”

They strategically set up a series of games with complicated payoff schemes and built-in opportunities to punish, to see how a group collectively punishes.

Experiment 1 had two kinds of groups. The first was the “Stranger” group, in which you played with random people each round, and the second was the “Partner” group, in which you played with the same people each round (10 rounds, or “periods” as the study called them).

In a classic 2×2 condition setup, there were multiple Stranger groups, and multiple Partner groups. The difference between them was that some groups played a game where there were no punishment opportunities, and some played a game that had punishment opportunities.

The rules of the game are simple; only the payoff structure is complicated. Each period, each subject in a group gets 20 tokens. They then decide whether to keep the tokens or invest them in the “project”. Everyone makes their decisions simultaneously each round (you won’t know what anyone else does until the big “reveal”).

Money that is put into the “project” is magnified, and then split equally between everyone, even if you don’t pay into it. Therefore, while total payout is maximized if everyone fully cooperates by putting all 20 of their tokens into the project, you can make more if you “free-ride”. In game theory we’d say that full free riding is the “dominant strategy”.

In layman’s terms, your optimal outcome is to keep all of your tokens to yourself but have everyone ELSE put all of their tokens into the project. Because then you get to keep your own coins, but you also get a big slice from the project payout that everyone else paid into. You’re keeping your cake and eating theirs too. It’s a classic free-rider game.

The rub is that everyone knows this. You and everyone else thinks hmmm… If I put my coins into the project they’ll just be going to everyone else. No one else is going to put their coins in, so why should I? In this game, the dominant strategy per game theory (the strategy that pays best no matter what anyone else does) is for everyone to keep their coins. Everyone free rides.

But that’s without punishment. And that’s why there is a second decision stage. After everyone keeps or puts in their tokens to the project, and the big reveal happens, subjects are given the opportunity to punish each other by assigning so-called punishment points. This also happens simultaneously, so there’s a big reveal to see who is punishing whom all at once.

Each punishment point you receive reduces your payout by 10%, all the way down to zero. So if people don’t like you, they can send you home with nothing (10 punishment points means your payout is reduced 100%, or down to 0).
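To make the mechanics concrete, here is a minimal sketch of one round in Python. The 20-token endowment and the 10%-per-point penalty come straight from the description above; the 0.4-per-token project return (so a 4-person group turns each token into 1.6 tokens total) is my reading of the Fehr and Gächter design, so treat the exact multiplier as an assumption. (The real design also charges the punishers a fee for each point they hand out, which I omit here.)

```python
# A minimal sketch of one period of the public goods game with punishment.
# Endowment (20 tokens) and the 10%-per-point penalty are from the text;
# the 0.4 marginal return per token is assumed from the Fehr & Gächter design.

ENDOWMENT = 20
MARGINAL_RETURN = 0.4  # each token in the project pays 0.4 back to EVERY member

def stage_one_payoffs(contributions):
    """First stage: keep what you didn't contribute, plus your share of the project."""
    project_return = MARGINAL_RETURN * sum(contributions)
    return [ENDOWMENT - c + project_return for c in contributions]

def stage_two_payoffs(payoffs, points_received):
    """Second stage: each punishment point received cuts your payoff by 10%, floor 0."""
    return [p * max(0, 1 - 0.10 * pts) for p, pts in zip(payoffs, points_received)]

# Three cooperators and one free rider...
contributions = [20, 20, 20, 0]
stage1 = stage_one_payoffs(contributions)
print(stage1)  # cooperators get 24.0 each; the free rider gets 44.0

# ...until the group hits the free rider with punishment points.
punished = stage_two_payoffs(stage1, points_received=[0, 0, 0, 5])
print(punished)  # the free rider's 44.0 shrinks to 22.0, now below the cooperators
```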

As a small side note just to show off and look cool, this is the payoff of the game:

[Image: the payoff function from the paper]
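In case that image doesn’t make it to your screen, here is my reconstruction of the first-stage payoff from the published design (the 0.4 return per token and the group size of 4 come from the paper; treat this as a sketch rather than a verbatim copy):

$$\pi_i^1 = 20 - g_i + 0.4 \sum_{j=1}^{4} g_j$$

where $g_i$ is the number of tokens subject $i$ puts into the project. Since each token you contribute pays only 0.4 back to you, contributing costs you 0.6 tokens no matter what anyone else does, which is exactly why full free riding is the dominant strategy without punishment.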

So what happened? To measure cooperation the researchers used the median and average contribution to the project each period. The median, again, is like the average, but instead of adding everything together and dividing by the number of things, you line the results up and pick the middle-est number. That’s the median.
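If you want to see the difference between the two, here’s a quick check in Python (the contribution numbers are made up for illustration):

```python
from statistics import mean, median

# Hypothetical token contributions from one period of a five-person group.
contributions = [0, 2, 3, 19, 20]

print(mean(contributions))    # 8.8 -- dragged upward by the two big givers
print(median(contributions))  # 3   -- line them up, pick the middle-est one
```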

Let’s start with the “Stranger” groups where each round had different people. I quote from the paper:

“The existence of punishment opportunities causes a large rise in the average contribution level in the Stranger-treatment.”

[Graph: average contributions per period in the Stranger groups, with and without punishment]

As you can see, when there was punishment, many more people cooperated by contributing their tokens to the project. In the groups without punishment, the average contribution starts out decent, at around 8, but then falls off to around 2. There are still some jolly, goodhearted people who just want to work together, but by the end of the game everyone figures out the dominant strategy, which is to be selfish and keep all of your tokens.

Meanwhile, in the punishment rounds everyone figures it out pretty fast: pay your tokens into the project, or you’re probably going to get punished. Sure, someone tries to be cute every round or so, grabbing a little here and there hoping to get away with it, but most cooperate.

Let’s look at the Partner groups’ graph.

[Graph: average contributions per period in the Partner groups, with and without punishment]

Unsurprisingly, the effect is even stronger because you play with these people multiple times. You know who the troublemakers are, and the group can quickly come together and act to punish because of the bonds of trust built by working together in the past.

Whereas the Stranger rounds with punishment peaked at about 14 tokens contributed, the average contribution per person in the Partner rounds was over 19, almost 20: nearly complete cooperation.

That’s an interesting insight. But the really fun stuff is when the researchers looked at when and how people decided to punish. It’s probably not something you would have thought about or mapped out. Most people would dole out punishment when it felt right. So, when does it feel right? What do people feel is just?

The magic number it turns out was not how much someone gave to the project. The magic number was how much someone gave relative to the average contribution of other group members.

The researchers looked specifically at how far each person’s contribution was from the group’s average contribution in each round, and how many punishment points were applied. Those who freeloaded by contributing 2–8 tokens less than the average received on average 3 punishment points, with the Partner group slightly higher than the Stranger group.

Those who freeloaded by 8–14 tokens below the average contribution were punished with about 5 punishment points (again with the Partner group slightly higher). And those who freeloaded by 14–20 tokens below the average (the most anti-social) were hit with the same average 5 punishment points in the Stranger group, but walloped with an average of 7 punishment points in the Partner group.

Remember, each punishment point you receive costs you 10% of your payout, so getting 4 punishment points is twice the punishment of getting 2.
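Pulling those reported averages into one place, a little lookup table makes the pattern plain. The bands and the numeric values come from the figures above; where the text only says the Partner group was “slightly higher”, the 3.5 and 5.5 are my illustrative stand-ins, not data:

```python
# Average punishment points by how far below the group average someone contributed.
# The 3.5 and 5.5 Partner values are illustrative stand-ins for "slightly higher".
AVG_PUNISHMENT = {
    (2, 8):   {"stranger": 3, "partner": 3.5},
    (8, 14):  {"stranger": 5, "partner": 5.5},
    (14, 20): {"stranger": 5, "partner": 7},  # Stranger levels off; Partner escalates
}

def payoff_multiplier(points):
    """Each punishment point cuts your payout by 10%, with a floor of zero."""
    return max(0.0, 1 - 0.10 * points)

print(payoff_multiplier(2))  # 0.8 -- a 20% haircut
print(payoff_multiplier(4))  # 0.6 -- a 40% haircut, twice the bite
```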

All sorts of interesting gems can be learned from this.

When it comes to strangers, not playing along is bad, and we will punish strangers, but at a certain point the punishment levels off. So the takeaway: if you’re going to freeload, or steal from strangers, freeload either small enough to get away with it, or big enough that the punishment doesn’t matter.

A possible real-world example: an international corporation in a new country using unseemly business practices to drill for a lot of oil while ignoring some local laws. This study would suggest that if that is indeed your position, you should either do small stuff and keep it out of the public eye to get away with it, or do it huge, get all of the resources out, and take the punishment. The punishment will be moderate whether you transgress moderately or severely.

However, if you are freeloading, or stealing, from people you know, aka, part of your community, the harshness of the punishment knows no limit.

The worst punishments are reserved for those who know the societal rules and ignore them. Perhaps strangers are given the benefit of the doubt that they are ignorant of the societal rules, and therefore are not punished as harshly in extreme circumstances. Perhaps, when it comes to strangers, there is a natural inclination to not burn bridges. We ought to punish this stranger so he or she understands our societal rules, but not so severely as to completely turn them against us. Perhaps the intuition goes, if we are moderate with a stranger, they will learn and assimilate into our cultural norms.

Maybe that’s how societies and cultures grow and flourish; through the moderate punishment of strangers.

Perhaps we assume strangers are out to get us (stranger danger!), so when they act wrongly there is no surprise, and therefore no shock, and therefore moderate punishment. But when a “friend” (someone within the social circle) breaks those societal rules it is a surprise, and therefore a shock, and feels worse because of the framing. And that leads to harsher punishments.

I quote from the paper:

“It is interesting that in the Partner-treatment it is only the negative deviation that affects punishment levels systematically, whereas the level of the others’ average contribution has no significant impact… [this] suggests that only deviations from the average were punished. This may be taken as evidence that in the Partner-treatment subjects quickly established a common group standard that did not change over time.”

Next takeaway, and I quote the paper: “The more an individual negatively deviates from the contributions of the other group members, the heavier the punishment.” So when you are in a group, or making a decision as an organization that’s part of a bigger group, look to everyone else. If you don’t want to stand out, just figure out what everyone thinks the average is, and then stick to that.

It doesn’t actually matter what the real number is; the only thing that matters for avoiding punishment is what the mini-society thinks the real number is.

For example, let’s take tech companies’ privacy policies. If a majority of Americans believe that large tech companies have few or no protections for consumer privacy, then that is the societal standard, even if in fact most large tech companies do provide many protections for users’ privacy.

Behavioral economics theory would suggest that if you’re a new company looking to maximize profit you should have little to no consumer privacy policies to make more money. The group members (the public) do not see you as deviating from the average and you will not be punished.

Now you might lose business to other companies, but that’s because privacy is part of the product value. It’s an economic argument over value, not a punishment risk.

Here’s another interesting takeaway, and it’s about consistency. The Stranger groups did not contribute to the project at high rates. Therefore, when punishment was doled out the overall income of all the players combined went down. At least in the Partner group the overall income could go up because punishment of freeloaders leads to increased project contributions, and therefore overall higher incomes.

But if one punishment opportunity is missed, and people feel they can “get away with it”, everyone runs to their “own interest” corners and the cooperation breaks down. To achieve maximum social good, it requires consistent and reliable punishment 100% of the time.

There are very good arguments to be made that the criminal justice system is often rather inefficient at stopping crime because of the inconsistency of the punishment. Cocaine use is illegal and heavily punished by penal codes, but only a tiny fraction of people using cocaine are ever actually punished by society for their use (they don’t get caught). And when they are caught, the punishments are often so harsh they can turn members of the group against the punishment.

Conversely, professional sports strongly relies on the consistency of punishments. Players know exactly how much they will be punished (ideally) when they transgress, and they know the punishment will be immediate.

If you want to stop goaltending, call it every time and award the basket to the shooting team on the attempt. The behavior almost immediately stops. Meanwhile, traveling in the NBA is called very sporadically, and players often commit small travels without consistent punishment. The result? Lots of players travel, even though the penalty is about on par with a goaltend (I would imagine both are worth about 1.1 points on average, the value of an average NBA possession).

And one last take-away. If you want to destroy a society, from a parent-teacher organization, to the Galactic Senate, and completely collapse it from within, all you have to do is figure out how to make punishments for breaking the social norms inconsistent. As soon as that happens everyone will run to their own best interest corners, and the society will lose its economic collective advantage and disintegrate.

The best and most famous example in history is perhaps the appeasement strategy leading up to World War II. After World War I the League of Nations had been formed, and with it a society of nations meant to collectively punish rogue states that broke the norms of the world. It worked for two decades, but as soon as it was tested (mainly by Hitler with the annexation of Austria and further expansions) and the violations were not punished consistently, the actors who wanted to break those norms did so (Japan invading the Pacific, Italy, the USSR, etc…), and the League of Nations collapsed. It was replaced by a new society (the Allies) and, later, by the UN. But the strategy of deterrence, of consistent punishment when norms are broken, has remained one of the most effective strategies in political science.

Let me know if any of these many lessons from this study have made it to your society, and if a change from you helped stop freeloaders.

Remember, if you want to create a culture of trust and cooperation, the group needs punishment to form collective action.

 

Fehr, E., & Gächter, S. (2000). Cooperation and Punishment in Public Goods Experiments. American Economic Review, 90(4), 980–994. doi:10.1257/aer.90.4.980

Episode 7: How Using the Ultimatum Bargaining Game Proves That Cultures of Trust Require Public Retaliation (NOT Altruism)

Game theory. Or should I sayyy LAME THEORY. Ayyyyyyy….

This post is about one small game, the ultimatum bargaining game, that’s useful in explaining the tools behavioral scientists can use to measure the reactions of other humans.

Did you ever watch the (now old) movie A Beautiful Mind? It’s about a mathematician named John Nash, who developed the now famous Nash Equilibrium, one of the foundational ideas of game theory. And game theory can be quite useful, as I said earlier, as a tool to measure how humans rate and react to choices.

I’m not going to actually tell you much about game theory itself because it’s complicated and hard, and there are 100 other posts and videos on YouTube that would do a much better job than I could. I just want you to be familiar with what it is and understand some of the simple games that are commonly used.

Okay, now that I’ve sufficiently buried the lede… The Ultimatum Bargaining Game! Güth, Schmittberger, and Schwarze in 1982 published a paper titled An experimental analysis of ultimatum bargaining. Now I’m not sure if they invented the ultimatum bargaining game, but they certainly get the credit for popularizing it. It goes like this:

There are two players and some money. One person has all the money and makes an offer to the other person.

If the other person accepts the offer, both get the amounts in the proposed split, but if they reject the offer, both people get nothing.

For example: we start the game and I have $30. I offer you a split where I keep $20 and you get $10. You’re not super happy about it, but hey, $10 is better than nothing, so you accept and we both get paid.

Next time I have $30, but I offer a split where I keep $29, and you get $1. ”Screw you!” you say. I’m such a jerk. You reject the offer out of spite and no one gets anything. Obviously, you can see the interesting behavioral economics twist.

Classical economic theory would say that the second person always accepts, because any amount of money, be it $5 or $1 or whatever, is more than nothing. The rational person (“actor”) always takes more over less.

And, of course, the reason this game is so brilliant is that in the real world it doesn’t happen that way.

People reject offers out of spite; especially when multiple rounds are played and there’s a history with someone. This is a classic decision of people making choices against their own self-interest! If I told you that you could make $1 just by accepting the dollar, wow! Sounds too good to be true. But if I tell you someone split $100 and gives you only $1… Not so much. It’s fascinating stuff.
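Here’s a minimal sketch of the game in Python, with a spite threshold standing in for the responder’s sense of fairness. The 30% cutoff is an illustrative assumption, not a number from the paper:

```python
def ultimatum(pot, offer, min_acceptable_share=0.30):
    """One round of the ultimatum game.

    A purely 'rational' responder would accept any offer above zero;
    real people reject offers that feel insulting. The 30% threshold
    is an illustrative stand-in for that sense of fairness.
    """
    if offer >= min_acceptable_share * pot:
        return pot - offer, offer   # accepted: proposer keeps the rest
    return 0, 0                     # rejected out of spite: nobody gets paid

print(ultimatum(30, 10))  # (20, 10) -- grudgingly accepted
print(ultimatum(30, 1))   # (0, 0)   -- "Screw you!"
```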

I want to tell you about another paper entitled “Trust, Reciprocity, and Social History” by Berg, Dickhaut, and McCabe. They ran an experiment using a derivative of the Ultimatum game. Subjects in room A and room B are each given $10.

In room B, they pocket their money. In room A, they must decide how much to send to their (anonymous) counterpart in room B. Whatever amount A sends to B is tripled.

B then gets to choose how much money to return.

This second half of the game is a dictator game, because the person in room B doesn’t have to give any money back to the person in room A.

The optimal strategy for A is to never send any money, because there is no guarantee of getting anything back. It’s an experiment in trust. And if B doesn’t give back to A, B worries that next time A won’t send anything.
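Here’s the structure in Python. The $10 endowments and the tripling are from the description above; how much A sends and what fraction B returns are free choices, so the example values are just illustrations:

```python
def trust_game(amount_sent, fraction_returned):
    """Berg-style investment game: A's transfer is tripled, then B
    unilaterally decides how much of the tripled pot to send back."""
    assert 0 <= amount_sent <= 10
    pot = 3 * amount_sent               # whatever A sends gets tripled
    returned = fraction_returned * pot  # B is free to return nothing
    payoff_a = 10 - amount_sent + returned
    payoff_b = 10 + pot - returned
    return payoff_a, payoff_b

print(trust_game(10, 0.5))  # (15.0, 25.0) -- trust reciprocated, both come out ahead
print(trust_game(10, 0.0))  # (0.0, 40.0)  -- trust betrayed
print(trust_game(0, 0.0))   # (10.0, 10.0) -- the 'safe' never-send strategy
```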

In 55 out of 60 times running this experiment, A sent money to B. And I quote from the paper:

“In conclusion, experiments on ultimatum game, repeated prisoners’ dilemma games, and other extensive form games provide strong evidence that people do punish inappropriate behavior even though this is personally costly.”

I’ll talk more about punishment later. Never underestimate the power of humans to make decisions against their own best interest out of spite, and also to give to others not out of kindness or altruism, but out of fear of spite.

One theory of why 55 out of 60 people sent money even when they may have been better off not giving, was altruism. Altruism is the idea that humans do things that are purely good because we enjoy helping other people.

However, in a follow-up study in 2012, a different group re-investigated the game in “Does the trust game measure trust?” by Brülhart and Usunier. They found that none of their altruism measures were statistically significant, and I quote from the paper:

“In sum, our results suggest that altruism is not a statistically significant motivating force in determining “trust-like” behavior, both across all subjects and for specific groups of players.”

Trust was not formed through kindness; rather, it was formed from fear of retribution. Altruism had nothing to do with trust in their study.

How does this apply to the real world? Well, when people are anonymous weird stuff happens. People aren’t altruistic most of the time, especially when they can directly benefit by keeping money to themselves.

How then do you change behavior? How do you encourage altruistic behavior? Maybe you have a cause that you’d like to promote, or you are trying to create change somewhere.

If you want to create a culture of trust and sharing you must easily allow for public shaming and retaliation. Even if that retaliation ends up being a loss for everyone. People will hit the button that says “Well, if you won’t be nice to me, I won’t help you either even if it hurts me.”

Retaliation does not have to be monetary. It could be a PR loss, or some other form. But it is critical that you create an environment that says clearly: these are the rules “we” the members of the community have agreed to. If you violate these rules, the community, together, will punish you.

If the rest of the community does not band together to collectively punish the selfish, the selfish act will almost always win. And in systems and markets with especially greedy or immoral behavior, you often see that the community does not take action against a bad actor to enforce community standards.

Economists can learn a lot about the process of human decision making through games. I wanted to introduce a few interesting games where the Nash equilibria may indicate a different result than what we see in the real world.

I love games and have always found various setups like this exciting and fun. We’ll explore more fun games like the Ultimatum game in the future because it is so useful at eliciting human behavior.

 

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, Reciprocity, and Social History. Games and Economic Behavior, 10(1), 122–142. doi:10.1006/game.1995.1027

Brülhart, M., & Usunier, J. (2012). Does the trust game measure trust? Economics Letters, 115(1), 20–23. doi:10.1016/j.econlet.2011.11.039

Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization, 3(4), 367–388. doi:10.1016/0167-2681(82)90011-7

Nick Fine’s UX Psychology rant on the latest episode of the Human Tech podcast


Has the field of user experience (UX) been invaded and co-opted by designers? Has it lost its way from its original roots in Psychology?

Nick Fine says “YES!”. In this episode of the Human Tech podcast we have a spirited conversation with Nick about his crusade to bring Psychology back in a big way to UX. We discuss what that means, why it’s important, and the need for large-scale user research projects.

If, after listening to this episode, you want to get involved (and you may), here are some ways to reach Nick:

LinkedIn:  https://www.linkedin.com/in/dr-nick-fine-6a65a3/

Twitter:  @doctorfine

YouTube video:  https://www.youtube.com/watch?v=9-wBqOhrpbk&t=5s


Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

Why You Should Kanban Your Life: An Interview With Amii LaPointe

In this episode of the Human Tech podcast we bring Amii LaPointe on the show. Amii is a Professor at the Milwaukee School of Engineering, where she teaches User Experience. We talk about the “old” days of writing documentation manuals, and her experiment in “kanbaning” her life.


Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

Episode 6: Using the idea of “utility” to calculate “value”

Economics gets a bad reputation for being wrong about things, or only measuring things in terms of dollars or GDP (gross domestic product).

But most of these “bad raps” are simply because people don’t understand what economics is, and what it is actually capable of.

When I talk about “economics”, I’m not talking about Adam Smith (Wealth of Nations), or Marx, or anything before the 1950’s really. Those guys were philosophers. They looked at the world, thought about things, and then made sweeping guesses about how the world worked.

They get credit for sometimes being right, but just because Aristotle philosophized that there must be some small finite particle because you couldn’t cut things in half forever, it doesn’t mean he discovered the quark!

We wouldn’t call Aristotle a nuclear physicist and we shouldn’t call Adam Smith an economist. Hard science research and philosophy are fundamentally different fields. The biggest difference? A lot of math. Statistics. Econometrics. Linear Algebra. Adam Smith drew some lines on a chart; it’s philosophy.

Modern Economics only really came into its own in the late 1940’s or 1950’s, with the Milton Friedman generation. That makes the science maybe 70 years old at most! And that’s nothing. Modern physics got started in maybe the very late 1800’s, so imagine the difference between what we knew about physics in 1970 (which was a lot, we had nuclear power, etc…), compared to today. It’s a whole different level of sophistication and understanding.

Economics has come a long way, but it is a much newer field and simply hasn’t had time to fully blossom. It also helps your field if the largest nations on earth are pouring billions into research to make weapons to blow other nations up (ahem: physics, computing, chemistry, etc…). So you have to forgive the field of economics for being a little bit behind.

With that lengthy preamble: how then does economics calculate value?

When economists try to figure out which decisions people will make, they have to compare apples to apples. There are a few ways to do this. The oldest trick is money, or money equivalents. Would you prefer a massage or a hamburger? Idk. So instead I ask how much you would pay for one or the other.

Just give each a “value” in dollars and compare, poof. Now we’re cooking.

The evolution of this method of comparing values is the idea of “utility”. Instead of money, you figure out how much something is “worth” to a human, or the utility the human gets.

For example, when your spouse cooks you breakfast, that has an inherent value. But because no money changes hands, you must turn to the level of “utility” (essentially happiness) the breakfast provides you.

The main way to measure this is still in dollars (money) rather than in “units of utility”, which have little intuitive meaning. The best way to measure what a spouse-cooked breakfast is worth is usually to elicit how much you would pay for someone else to make that same meal for you. But there are lots of different ways to calculate utility.

The main point is that utility more accurately represents human decision-making because humans make decisions in abstract ways.

We don’t boil everything down into dollars (money) and compare the two values every time we make a decision. And once you get into behavioral economics utility amounts become even more important.

This is because the traditional economic assumption was that humans try to maximize their utility. The axiom, or assumption, we take to be true is that we are rational; we want what’s best, so we maximize our utility. If there is a simple way to make $5 we’ll do it, because that’s more than $0.

But, of course, there are many many times when that doesn’t happen! Just read the rest of these blog posts. That’s behavioral economics.

The answer is of course that we’re just measuring utility wrong. Traditional economics MISSES critical variables. Mind journey time! Think of a paperclip on a beach.

You are walking down the street with 3 of your closest friends in high school. It’s the suburbs so not a lot is going on.

It’s a tree-lined street, and farther down the street there are kids learning how to bike on a training bike. The sun is out, and birds are singing. It’s a very nice day.

On the ground off to the side of the sidewalk, you spot a crinkled $5 bill. Dirty, but totally spendable. You note, “Oh! Look, it’s $5!” The friend walking on your left turns to you and says, “Ew, that’s covered in dirt. You weren’t really going to pick that up, were you? It could be poop!”

You glance at your other friend on your left, and then at the friend on your right. All of them are staring at you with one eyebrow raised and a grimace of slight disgust on their faces.

Classical economics says you pick up the $5 because the utility of $5 is greater than $0, but of course you don’t pick up the free money. There is a hidden cost that traditional economic theories miss: the “social utility”. There is a social cost to your friends thinking you’re weird. Or poor. Or dirty. And that can be insanely powerful; more than a free $5 powerful.
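As a back-of-the-envelope sketch, the decision looks something like this (every number here is invented for illustration):

```python
# Back-of-the-envelope utility math for the $5-on-the-sidewalk decision.
# All numbers are invented for illustration.
money_gain = 5.00    # face value of the bill
social_cost = 8.00   # the sting of three friends thinking you're weird/poor/dirty

utility_pick_up = money_gain - social_cost  # -3.00: the hidden cost dominates
utility_walk_past = 0.0

print("pick it up" if utility_pick_up > utility_walk_past else "leave it")  # leave it
```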

It’s not that economics is broken or doesn’t work; it’s just that often it isn’t advanced enough to correctly calculate all the variables appropriately.

The first MAJOR behavioral economic papers in the 1970’s and 1980’s were all about different ways to calculate utility. There’s transactional utility, social utility, discounted utility, etc… etc… etc… It’s all just trying to reframe what humans are weighing when making their decisions. Some of it is because of fear of loss, or laziness, or social pressures.

I’ll probably devote an entire other blog post just to Kahneman and Tversky’s seminal, groundbreaking, famous-making paper “Prospect Theory: An Analysis of Decision under Risk” from 1979. There’s a reason those two are really considered the grandfathers of the behavioral sciences, especially behavioral economics. This is one of a few famous papers that really defined the genre.

In sum, their whole point was that economists were doing it wrong! It’s not about linear choices or straight classic rational decision making. And I quote from that paper: “people normally perceive outcomes as gains and losses, rather than as final states of wealth or welfare.”

So sure, your final state after you pick up the $5 is +$5, but that’s not the calculation you go through. You feel the loss of your social status; you weigh that decision not as a final state of wealth, but in the moment, as a gain and a loss. It’s complicated and messy, and human.

And that’s hard to measure; but discovering the Higgs boson was hard too. It just takes time and refinement. Maybe a few Nobel Prizes, and a few billion dollars for a huge research facility (a CERN particle accelerator, but for behavioral econ) would go a long way.

So I’m positive about the future of the field. And the concept of utility is an important one, and one you should understand. So that’s a brief primer on it.

Btw, I have attached a picture of what real, full-fledged economics looks like, from the original Prospect Theory paper from 1979. The good news is that later papers are… more concise and have more fun field work, although the economic models are more complicated.

This segment is not from some crazy appendix by the way, but from the heart of the paper, perhaps outlining one of the more important points, which is the concavity of u (utility). So just in case you were worried about what you were missing…

Also… this is a formula for the value of different prospects. Economics is so fun!
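If the picture doesn’t make it to your screen, the core equation from the 1979 paper has this well-known shape for a simple prospect that pays x with probability p and y with probability q (my transcription, so treat it as a sketch):

$$V(x, p;\, y, q) = \pi(p)\,v(x) + \pi(q)\,v(y)$$

Here v is the value function over gains and losses (the function whose concavity for gains is the point mentioned above) and π is the decision weight attached to each probability, which is not the same as the probability itself.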

Don’t worry, they clarify this nicely later in plain English. I’d go through it, but I’ll save it for the post about Prospect Theory.

What Conference(s) Should You Go To This Year?

We speak at a lot of conferences, and attend a few too. In this episode of Human Tech we “review” many of the conferences we’ve been to (that have conference dates coming up in the next 12 months). Some you probably know, and others you may never have heard of. These are all in the US, UK, Israel, and Europe. Maybe one of these will be part of your next traveling adventure!


Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

A New Course On Color And Design

I’m so excited to be adding courses on color and design to our online training curriculum.

Katie Stern, who has multiple degrees, books, and lots of experience designing with color and teaching others how to do the same, has put together the first course in what will be a whole curriculum on Color and User Experience (UX) Design.

The first course is Color Terms, Tools And More. I took it myself and learned so much from it. I highly recommend it.

Here’s a short introduction to the course:

If you use the promo code

colornews

when you register you will receive 35% off the regular price. This special price is for two weeks, March 1 to 15, 2018.

Here’s some more info about the course:

You will learn color terminology, the basics of color theory, and how to communicate color information with your team.

If you are analyzing or designing a product or website, then you are working with color. Making color choices can be unconscious or intentional, depending on how much thought you put into them. Identifying colors that will create a great user experience can be a daunting task, especially if you don’t have the vocabulary to communicate about color with your UX team members. When you learn color terminology you will be better able to communicate your color design intentions.

You will learn:

The challenges involved with naming colors
The difference between the additive and subtractive color systems
How pigment color is different than digital color
How monochromatic color schemes are built
The definition of tints, tones, and shades and how to create them
How the Adobe Color Picker and Paletton help build color schemes

and much, much more!

So check out the course and let us know if you have any questions.

The Dopamine Seeking-Reward Loop, or “Why Can’t I Stop Scrolling On My Newsfeed”

We’ve all been there. You glance at Instagram (or your Twitter feed, or your LinkedIn feed, or Facebook, or your newspaper app…). You look at the first entry, and then the next, and then swipe with your finger or thumb to see what comes next, and next, and before you know it 15 minutes has gone by.

You just became part of a dopamine seeking-reward loop.

Here’s a video I recently recorded about the dopamine seeking-reward loop and what to do about it. And below is a text summary of the video.

I wrote an article in 2012 about dopamine and how it helps you become “addicted” to texts and also to searching. That was 2012, and by now stimulating the dopamine loop has become ubiquitous and is involved in almost everything you do on your smartphone. So let’s re-visit the dopamine loop:

Dopamine was “discovered” in 1958 by Arvid Carlsson and Nils-Ake Hillarp at the National Heart Institute of Sweden. Dopamine is created in various parts of the brain and is critical in all sorts of brain functions, including thinking, moving, sleeping, mood, attention, and motivation.

The “seeking” brain chemical — Dopamine was originally thought of as critical in the “pleasure” systems of the brain. It was thought that dopamine makes you feel enjoyment and pleasure, thereby motivating you to seek out certain behaviors, such as food, sex, and drugs. But then research began to show that dopamine is also critical in causing seeking behavior. Dopamine causes you to want, desire, seek out, and search. It increases your general level of arousal and your goal-directed behavior. Dopamine makes you curious about ideas and fuels your searching for information.

Two systems — According to researcher Kent Berridge, there are two systems, the “wanting” and the “liking”, and these two systems are complementary. Dopamine is part of the wanting system. It propels you to take action. The liking system makes you feel satisfied and therefore pause your seeking. But the dopamine wanting system is stronger than the liking system. You tend to seek more than you are satisfied. You can get into a dopamine loop: if your seeking isn’t turned off at least for a little while, then you start to run in an endless loop.

The scrolling dopamine loop — When you bring up the feed on one of your favorite apps, the dopamine loop is engaged. With every photo you scroll through, headline you read, or link you go to, you are feeding the loop, which just makes you want more. It takes a lot to reach satiation, and in fact you might never be satisfied. Chances are what makes you stop is that someone interrupts you. It turns out the dopamine system doesn’t have satiety built in.

Anticipatory rewards and Pavlovian cues — The dopamine system is especially sensitive to “cues” that a reward is coming (remember Ivan Pavlov?). If there is a small, specific cue that signifies that something is going to happen, it sets off our dopamine system. So when there is a sound (auditory cue) or a visual cue that a notification has arrived, that cue enhances the addictive effect. It’s not the reward itself that keeps the dopamine loop going; it’s the anticipation of the reward. Robert Sapolsky talks about this anticipation/dopamine connection in his research.

So what can you do? Start by turning off those cues: silence the sounds and hide the notification badges. Or maybe turn off the device altogether for a while. Radical idea, I know.

 

Here are some references:

Arvid Carlsson and Nils-Ake Hillarp at the National Heart Institute of Sweden first “discovered” dopamine in 1958

Berridge, K. C., & Robinson, T. E. (1998). What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience? Brain Research Reviews, 28, 309–369.

Robert Sapolsky — Dopamine Jackpot: Anticipating Reward (video)

Alfonso de la Nuez On The Role Of User Research In The Future

Alfonso de la Nuez started in the field of usability with his small services company in Spain and ended up in California co-founding the user research software firm UserZoom. Last year UserZoom customers conducted more than 12,000 user research projects.

In this episode of the Human Tech podcast we talk with Alfonso, the CEO of UserZoom, about the current state and likely future of user research and testing, what makes user research successful inside a large enterprise, and much more.

You can check out UserZoom here, and you can email Alfonso at alfonso@userzoom.com


Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.