What Research Tells Us About the Power of the Tribe

Below is a guest post from Steven Sloman and Mugur Geana

Steven Sloman
Mugur Geana

“…human beings are tribal all the way down, all the way to the processes that govern thought. In other words, we let others think for us.”

We thought we had a solution. After a year of cowering in our homes, science had come to the rescue in a way that sounded like science fiction. We had not one but two vaccines based on what seemed like phantasmagorical methods from the cutting edge of molecular biology. They had greater efficacy than anyone had dreamed, upwards of 90% at preventing infection. Yet, as we write this, the US is experiencing a 4th Covid-19 surge. To those of us who raced to get vaccinated as soon as we could, something has gone terribly wrong.

We knew that a coronavirus variant might evolve that could wriggle its way around the vaccine’s defenses. Most experts believe that, to some extent, that is what the Delta variant is doing. It is almost surely more transmissible than previous variants, and therefore has become a dominant strain globally. But we don’t really know how transmissible this latest and greatest contender is because it isn’t facing the same obstacles that its immediate predecessors faced. Many Americans have decided that the pandemic is over. So it is hard for epidemiologists to measure the transmissibility of the Delta variant, because they have to do it in a society with full restaurants and bars, and with thousands of fans screaming their approval and disapproval at sports venues. Some of those patrons have been vaccinated. Some have not. The surge is occurring not just because of the new variant but because so many people have entirely given up on wearing masks and social distancing, never mind their unwillingness to vaccinate.

There are other reasons people may have given up, including the inconsistency of the messages put forward by public health authorities. The CDC did issue guidance that vaccinated people need not wear masks indoors when they are with other vaccinated people. But recent studies show that vaccinated people can become carriers (although substantially less often than unvaccinated people). Furthermore, people get their information about Covid from a multitude of sources, and misinformation abounds. Nevertheless, the vaccines have lowered the risk of getting and certainly of dying from Covid-19. So, are people just making rational risk assessments to decide whether or not the discomfort of masks and the isolation of social distancing is worth it? Alternatively, people might be ignoring scientific advice because they lack trust. Some come from racial minority groups that have historically been poorly served by the medical community, or their behavior is driven by information from sources that do not share the medical establishment’s views.

The word on the street, and on many newscasts, is that the issue is ideological. Is it? Ideology seems to pervade so much these days. There is clearly a split in Americans’ confidence in experts; Democrats show more confidence than Republicans (Pew Research, 7/21/19). Experts themselves are speaking pretty much with a single voice: with only a few eccentric exceptions, medical experts recommend avoiding close proximity to strangers, wearing masks in public (certainly indoors, and outdoors if you have to be close to people you don’t know or don’t trust), and getting vaccinated. Who is following the advice, who isn’t, and why?

In two surveys involving almost 1700 US adults, we asked people whether they take preventive behaviors (mask-wearing and social distancing) and whether they support policies that encourage such behavior. We also asked respondents several questions about their political attitudes (conservative versus liberal), about their own risk from the coronavirus, their understanding of the transmission of the virus, how well they perceive their own understanding of the virus, and some other facts about themselves. We found that the best predictor of their behavior and their attitudes was their political leanings. Suppose you want to know whether an American is likely to wear a mask or practice social distancing. In that case, the single best question to ask them turns out to be what party they vote for.

This finding is consistent with a plethora of research. We looked for published and unpublished manuscripts made available by early 2021 that included measures of Covid-mitigation attitudes and also measured political ideology. Forty-four papers provided data that allowed us to evaluate which variables predicted those attitudes. Out of 141 observations from the 44 papers, political ideology was a significant predictor of responses in 112 (79%). Moreover, ideology was by far the best predictor of COVID-19-related attitudes overall. Other factors did surpass ideology’s predictive value in a few cases: age (8 cases), gender (8 cases), education (14 cases), and race/ethnicity (7 cases), along with various measures of news consumption (10 cases). Ideology did best 62 times.

In another study of over 1100 Americans led by Mae Fullerton, we found that not only did people’s risk profiles fail to predict their mask-wearing and social distancing, but even their perception of their own risk failed to predict them. As usual, partisanship turned out to be a better predictor, along with being an essential worker. Not surprisingly, another good predictor was believing compliance was important.

These studies are, of course, correlational. We have no direct evidence that one’s partisanship causes one’s willingness to take mitigating or preventive behaviors against Covid. It could be, for instance, that Republicans tend to live in Republican communities or regions of the country and Democrats in Democratic communities, and that people behave in line with their community, as opposed to their ideology. But alternative explanations like this just emphasize our bottom line: In the case of Covid, health-related behavior is not governed by a rational assessment of risk but by the attitudes of the people who surround us. Our data suggest that it is not primarily a function of race, age, or any of the other variables that have been offered as big players, but rather variables related to ideology.

In a recent book, one of us argued that human beings are tribal all the way down, all the way to the processes that govern thought. In other words, we let others think for us. We keep expecting to find exceptions to this rule. When it comes to matters of life and limb, for instance, or when behavior is obviously foolish or capricious, we would expect people to rely on their ability to reason, not to simply channel the people around them. Yet we keep being surprised. Covid is a matter of life and death and the strong consensus among experts means that deciding what to do is not very complicated, and yet people seem to be as influenced by social processes and pressures as they are in cases that seem much more benign (like how much respect one should have for economists) and topics that do not lend themselves to such easy conclusions (like how we should handle immigration).

Everyone makes mistakes. But when it comes to freeing the country from the bonds of a tiresome pandemic, some of us keep making the same mistake over and over.

You can reach Steven at: Steven_Sloman@Brown.edu. His website is: https://sites.google.com/site/slomanlab/

You can reach Mugur at geanam@ku.edu

Check out the Human Tech podcast episode where Steven Sloman and his co-author, Philip Fernbach, talk about their book The Knowledge Illusion.

Design is evolving—and designers need to evolve with it.

Below is a guest post from Nathan Shedroff

Nathan Shedroff

“While designers have evolved over the last 25 years to be advocates for the audience/customer, we now need to be advocates for the rest of everyone else, democracy, society, and the planet itself.”

I was taught design in a world and a time where the word primarily referred to things you could see, touch, and maybe hear. Design craft is often focused singularly on pleasing our senses. Fair enough, but design has changed a lot since my undergrad days. It wasn’t OK to ignore the impacts and outcomes of our work then, and it’s much, much less OK now.

It’s Not About You

The kind of design I was taught essentially told us designers to “go out into the world and redesign it in your vision!” If we did that, we were told, we would be rewarded—with money, with recognition, with awards, etc. Mostly, that isn’t what happens. Only very recently has popular appreciation emerged for the kind of design that wins awards. For most of the past, the “design aesthetic” of the moment did not align with what most people appreciated or wanted. One of the biggest achievements of companies like Apple and Nike has been the promotion of the design qualities that the design industry itself emphasizes. Never before have non-designers cared about, let alone sought out, designs that the “experts” agree are well designed. This is particularly true in UX/UI design, though there are often disagreements.

Happily, the majority of how design is taught today has changed drastically, partly because many more people have entered the field but, mostly, because the design industry has steadily leveled up its process, aims, cares, and context. Sure, there are a few programs and faculty who only care about appearance and the plastic parts of “craft.” But most reputable programs teach the process of design research, which extols the virtues of understanding our audiences/customers/etc. before we begin making things for them. This is a huge advancement! While some industries still reward and recognize only the starchitects of their worlds, most design organizations have started to refocus themselves around impact.

The famous Frog Design, under its original founders, thought their incredible innovation was to put the client CEO at the center of the design process. Today, even Frog Design puts the customer at the center. Progress.

UX/UI/interaction/interface/etc. design has led much of this progress. It’s no longer acceptable to jump straight into screen design without first investigating customers—well, unless it’s an agile project. It’s still the case that much of that customer research is really, really poorly done, but at least the step is there in the process chart and, sometimes, there’s someone designated as the researcher.

It’s Not About Us, Either

But, designing for others isn’t enough. Therefore, design research isn’t enough, either. There are still too many steps in the process and people in the system who don’t understand, care about, or want to create designs that work for others, instead of merely for themselves. Even when designers know and do better, everyone else in an organization, from CEO or client to layers of managers, to peers in other divisions, can foul this process and prevent better design responses. This is simply the reality of systems made from real people, instead of idealized ones. It’s not a reason not to design, but it complicates our efforts and stifles progress. It requires us to not only better understand the context of our own work but levels of context above, below, and to the side that impact our work and, more importantly, are impacted by it.

This is why designers need to be taught so much more than traditional design. I’m not arguing that we shouldn’t understand the basics of form, shape, color, light, seeing, composition, etc. By all means, these are probably more important than ever. But, we need to find ways to add in much more about the impacts of our work, which I’ll admit, isn’t easy.

It’s About Everything, Now

I know this sounds daunting, and we’re definitely adding levels of complexity and context, but these levels are necessary, not only for ethical and just work but also for financially successful work. Today, in order to be a successful designer (or, rather, a designer creating work successfully), we also need to understand (in no particular order):

• The social contexts of our work (including impact, issues, and meaning)

• The ecological impacts and consequences of our work (and ALL of it has some impact)

• How value is created and flows between people, organizations, and stakeholders

• How to better communicate with all of these people

• How to lead and manage people (I know, ugh, right?)

• How to respect (and transcend) the quantitative data available to us (and sometimes foisted upon us)

All of the above helps us be more strategic in our work (which is another, critical issue, entirely)

That’s a tall order, I know, but necessary if you want to practice design successfully now and in the future. For sure, you’ll be able to inquire with and be joined by others. You won’t be alone (though it may often feel that way).

I’ll try to make it simple. I’ve been working on these issues for quite a while now. You don’t even need to go get another degree to know the above. There are even tools in existence for these (and several more on the way). There aren’t a lot of books that describe the list above, but there are many videos.

The first context for design is the environment around us. Nature supports everything else on this planet. If it falters, changes, or ceases to support us as we’ve become accustomed, everything else changes—sometimes radically. You don’t have to spend the rest of your life, or even the rest of the year, reading about this. I wrote the book Design Is the Problem to cover it all, in fact (though others have, too). But you need to know the basics because nothing else works otherwise. It’s your duty as a designer to know these things. No, scratch that. It’s your duty as a human on this planet to know these things. Period.

The second realization is that while society is supported by Nature, society supports (or suppresses) everything else. You can’t have an economic system without a social system. The main contexts you will design around will be the social ones (there are many). You don’t get to skirt this one either. It permeates everything. It governs what is considered good or bad, successful or frivolous, important or not. And there are more social impacts and issues than I can list. But the most important are going to be: equity, justice, respect, desire, need, privacy, and safety. If all you do is investigate how your audience understands and relates to these, you’re doing better than 90% of the rest of the people you work with.

The only thing I’ll say about the economic context is this: markets are incredible optimizers. They really are. However, they only optimize what you put into the equation and the things we care about most have been left out of the economic equation by economists and “businesspeople” for far too long. And, optimization may not be called for at all. Just know that that uneasy feeling you have that business is missing some important things is absolutely true. I won’t get into the entire argument (you can read an intro here) but know that:

  1. People are NOT rational actors
  2. People don’t only optimize for money (they absolutely WILL pay more for some things)
  3. Rich people do NOT create jobs

That will keep you busy for a while.

Next, the same way we can measure the ecological impacts of a product, service, or other experience, we can also measure social impacts of what we create. We probably can’t measure everything (there are so many) but we can measure enough—if we care to. Someday, we may even be able to put these into business terms.

This isn’t just a case of garbage-in/garbage-out. It’s also about what we don’t put into the process—leave those considerations out and we can still get garbage out. Remember those Segways? They aren’t even mostly Segways anymore. But a LOT of money was wasted to get there. That’s what you’re trying to prevent for your companies and clients.

We shouldn’t have to perform all of these roles ourselves. It should be standard operating procedure for all of this to be covered in the process—even for start-ups. Unfortunately, we’re not there yet. While designers have evolved over the last 25 years to be advocates for the audience/customer, we now need to be advocates for the rest of everyone else, democracy, society, and the planet itself. That’s the design job ahead of us. Sorry, but it is—because few others in organizations will rise to the challenge (and the challenge doesn’t go away).

Or, you know, we can evangelize and recruit others, too. There’s plenty of work to go around and we don’t have to go it alone. But, we’ll have to leave the safety of our design ghetto studios to do it.

Check out the conversation we had with Nathan at our Human Tech podcast.

You can reach Nathan at www.nathan.com

Episode 23: The Best (And Only?) Way To Change Group Behavior With Ideas

There are lots of ways to change behavior (again read Dr. Susan Weinschenk’s book “How To Get People To Do Stuff”). But let’s talk specifically about a war of ideas. Information, such as a news report, can change your opinion on a topic, or at least in theory it’s supposed to.

As long as humans have been around there have been wars of ideas between us. Think of Communist propaganda or wars of religion. Turning people to your side using words alone has been one of humanity’s earliest mind weapons.

I want to focus on saving electricity to reduce greenhouse gas emissions.

Behavior change is hard. You can tell people all day long that climate change is real, give them facts, but it’s very, very hard to turn that into using less electricity. Is it even possible to achieve behavior change with pure information? What information could you give that would turn into behavior change?

There’s a nice study by Nolan, Schultz, Cialdini, Goldstein, and Griskevicius that set out to study that question. They attempted to answer two questions. First, are people self-aware enough to know what sort of information can change their behavior? And second, what information actually works? The paper is called (spoiler alert) “Normative Social Influence is Underdetected”. I kinda wish they would have buried the lede, or at least made the title so convoluted that you, the reader, wouldn’t understand it.

Nolan et al. created five experimental messages (check out Study 2 for more information). The first was environmental protection, the second social responsibility, the third self-interest, the fourth social norms, and the fifth an information-only control.

Each had a different way of trying to convince people to save electricity. The researchers had research helpers go out door to door and give doorhangers with information, conduct interviews, read electrical meters, and more; it was a well-funded study.

After providing the information, the researchers then measured the subjects’ electrical consumption over the short term and over the long term.

The results were interesting.

Take a look at this table. In the left column are the different conditions. The middle two columns show the short-term effect in daily kilowatt-hours (kWh, a measure of electricity usage). M stands for the mean, which is what we are interested in, and SE stands for standard error, which you can ignore. The right two columns show the same thing for the long term.

Notice that the ONLY condition that seems to have any major effect is social norms.

I’ll quote from the paper:

“Despite the fact that participants believed that the behavior of their neighbors – the [social] norm – had the least impact on their own energy conservation, results showed that the [social] norm actually had the strongest effect on participants’ energy conservation behaviors.”

All the arguments for social responsibility, save the planet, or self-interest (save money), etc… none of them really worked. Maybe a tiny little sliver of something. But telling people how much electricity their neighbors used, and showing them that they, themselves were using more? That had a large and lasting impact.

The best, and perhaps only, way to change behavior when information is the only tool available to you is to give people information demonstrating how “bad” a job they are doing relative to other people.

People just want to be cool. They want to fit in. Social pressure is a much, much stronger motivator than any sort of love for the environment.

If you’re designing a campaign to change minds, use social pressure. Turn it into a “club,” a “group,” a tribe. Setting out what the tribe is and saying what it takes to be “in” it is the best way to get people to do stuff. We humans will do almost anything (*ahem cough* Nazis *cough*) to be part of a group, or a group society thinks is important.

Give it a try. It’s also cheaper than rewards so that’s a plus 😊.

Nolan, J. M., Schultz, P. W., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2008). Normative Social Influence is Underdetected. Personality and Social Psychology Bulletin, 34(7), 913-923. doi:10.1177/0146167208316691

Episode 22: Affect, Cognition, and Awareness

Quick post to talk about affect. In everyday English, affect is primarily a verb meaning to make a difference, while effect essentially means result (the effect of the great pitching was a win). In psychology, though, affect is a noun that refers to emotion or feeling, and that’s the sense we’re using here.

The affective primacy hypothesis asserts that positive and negative affective reactions can be evoked with minimal stimulus input and virtually no cognitive processing.

It’s mind control, okay? It’s evil subliminal messaging: flashing things at your brain so fast it doesn’t have time to adequately process them. Let’s get into the details with an oldie but a goodie, a famous paper by Murphy and Zajonc called “Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures.”

They set out to see if they could change people’s mood without them really being aware of what was happening.

Study 1 had two conditions. Half of the subjects were exposed to the optimal exposure condition, and half to the suboptimal exposure condition.

The “cover” of the study was that the subjects would be presented with an assortment of Chinese characters to rate. But there was a secret the subjects didn’t know. Right before they were to rate the Chinese characters, a slide of a face would be flashed. The faces were either male or female faces expressing happiness or anger.

In the optimal condition the faces were shown for 1000 ms (1 second). People were able to clearly see the primes but were told to ignore them. In the super-secret suboptimal condition, the faces were flashed for only 4 ms.

The theory is that because the faces were flashed so fast in the suboptimal condition, only 4 ms, that the processing happens entirely unconsciously. Your brain has synapses that need to fire before you recognize something. And by the time you can react it’s already gone so the image stays in your unconscious processing.  

Here are the results:

The bars in the chart show how highly the Chinese characters were rated. The black bar is when a negative face (mad) was flashed right before the rating; the white bar is when a positive face (smiling) was flashed right before.

In the optimal condition (1000 ms) we have fairly close parity. However, in the suboptimal condition, where the faces were flashed for only 4 ms, we see a noticeable effect. There’s quite a large difference: the mean rating for characters shown after a negative face was only 2.75, whereas if a positive face was shown first, the rating was almost 3.50.

What an interesting result. The researchers dove deeper.

Maybe it’s a fluke? Maybe people just didn’t like faces lording over them.

So in Study 2 they re-did Study 1, but this time people rated only if they thought the object was “good” or “bad”. Now these are random Chinese characters. There should be no association one way or the other. Results? Same as Study 1.

We see a little movement in the optimal (1000 ms), but a huge difference in the suboptimal (4 ms). When people are flashed with a little something and aren’t able to process it fully, it makes a difference somehow.

Does this work with other primes besides happy or sad?

In Study 3 subjects were asked to rate the size of an object, where 1 was small like a mouse, and 5 was big like a tree.

However, instead of being primed with a face, subjects were primed with either a large shape or a small shape. Again, the optimal condition got the picture for 1000 ms, and the suboptimal only got the shape for 4ms.

The results were very significant (with an F value of over 20 for you econ kids out there), but different from what the researchers were expecting:

Again, small primes led to smaller overall ratings, BUT ONLY FOR THE OPTIMAL CASE (1000 ms). There was no change for the suboptimal (4 ms).

It is this author’s opinion that this is because of the fusiform face area, or FFA. What we now know is that there’s a small part of your brain whose job is to identify faces incredibly fast and “feel” what that face is feeling (it’s located in the fusiform gyrus of the temporal lobe, near other emotional processing areas).

When you see a face you instantly process if the expression shows that the person is happy, sad, worried, etc… From an evolutionary perspective the FFA may be incredibly important to our social skills. There’s a theory that the reason people with autism have trouble identifying the moods of others is that their FFA is not connected in the same way to the amygdala where emotions are processed.

People with autism can “see” the face, but not “feel” the emotion. Our FFA is so good it can instantly see a face even when the image is not actually a face. Cloud in the sky? Stain on some bricks? Smiley face? Frowny face hand drawing? Emoticon? Front of a car? Dogs. Cats. Cartoons. Our FFA lights up and fires insanely fast.

Sorry for the tangent but that’s the important part here. The FFA is designed to basically be a fast pre-processing area. It fires very quickly.

Processing attributes of a picture other than an emotional face takes more time. The image has to be rolled around through the visual cortex and then passed somewhere else, then somewhere else, etc. It takes more than 4 ms, so a 4 ms flash “doesn’t register” with the brain.

One quick note to point out is that I am not saying that the FFA is done firing in 4 ms, rather that the FFA only needs 4 ms exposure time to fire and process, whereas other areas of the brain may require more exposure time. It is also possible that it’s the amygdala that is working super quickly with short exposure time as well.

When an image is around longer, as in the 1000 ms optimal condition, it mucks around with the framing parts of our brain. We are susceptible to framing. And if you show something that’s large, and then ask me is this squiggle large, I’ll lean towards yes? So you see quite a large jump in the optimal case from about 3.1 size rating when prompted with the small image, up to 4.0 size rating when prompted with the large.

Study 4 also followed Study 3, but this time subjects were asked to judge the symmetry of objects, where 1 would be not at all symmetrical and 5 would be a perfect circle.

Again, there was no difference in the suboptimal (4 ms), but a significant difference in the optimal (1000 ms).

From this the researchers guessed that geometric shapes also require a longer exposure time to have an effect.

Study 5 tested masculine vs. feminine, and again there was no change in the suboptimal condition, where the prime was shown for only 4 ms, but there was a shift in masculine vs. feminine ratings in the optimal condition.

So, general conclusions then. I quote from the paper:

“Primes shown as briefly as 4ms can allow subjects to discriminate between faces that differ in emotional polarity. Distinct faces that do not differ in affective polarity, even if they differ in such obvious ways as gender, cannot be accurately discriminated from one another if they are exposed for only 4 ms.”

What can this tell us? If you want to use evil subliminal messaging, use faces with expressions. This is why faces are so powerful: they are processed so quickly. They are “lower down the brainstem” (I didn’t invent that phrase, I borrowed it). Faces are more primal, and we have less control over our reaction to them.

If you want to use other pictures to help frame emotions you need to have their attention for at least 1 second. And I know that sounds short but in a world of advertising, a “one-mississippi” can be quite a while.

We are social creatures, and our brains evolved to put an emphasis on social cues, food, danger, and sex. Don’t underestimate how strongly they grab our attention.

Murphy, S. T., & Zajonc, R. B. (1993). Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures. Journal of Personality and Social Psychology, 64(5), 723-739. doi:10.1037//0022-3514.64.5.723

Episode 21: Monkey (Unconsciously) See, Monkey (Unconsciously) Do

Have you ever seen a chameleon? They instantly change color to adapt to whatever their background is.

There’s a theory that humans can do this too. Of course, we can’t change our skin color to match our environment, but the theory is that if you see someone behave in a certain way, you’ll follow that behavior. Think of the popular beliefs that yawning is contagious, or sneezing, or even indifference to suffering. There are lots of these ideas floating around.

Well, for each myth there has been a study, and since we’re romping around in behavioral economics land, I wanted to look at a paper by Chartrand and Bargh aptly titled “The chameleon effect: The perception-behavior link and social interaction.”

Their experiments required a “primer.” Usually they used what psychologists call a confederate: someone who appears to be part of the study along with everyone else… but is actually an inside impostor, planted by the researchers to get interesting results.

In Experiment 1 subjects participated in two consecutive sessions. Both had a 10-minute interaction with a confederate. They were told to describe photos in the sessions but of course the photos were simply a distraction from the real study.

The confederates, who were trained actors, varied their mannerisms throughout the interactions. During Session 1, the confederate either rubbed his or her face, or shook his or her foot. During Session 2, the confederate did the inverse of Session 1; so if Session 1 was a foot shake, Session 2 would be a face rub.

Afterwards they did a post-experiment interview and only 1 person guessed the other person was a confederate, and no one guessed what the confederate was up to.

Here are the results:

Even though no one noticed the confederates doing face rubs or foot shakes, when the confederate rubbed their face, participant face rubbing increased about 25%. And when the confederate instead shook their foot, participant foot shaking more than doubled (108%).

So clearly there is some sort of monkey-see, monkey-do unconscious process going on. I’ll discuss why after we go through the other experiments. One more interesting data point from Experiment 1 involved smiling. Participants smiled more times per minute with the smiling confederate (a mean of 1.03 smiles per minute) than with a neutral, nonsmiling confederate (a mean of .36 smiles per minute). I should also note that the confederates were instructed not to try to make friends, only to smile.

Further, participants performed the intended action more times with the nonsmiling confederate than with the smiling confederate (a mean of .56 vs. .40). Very interesting indeed… Put a pin in this and let’s move on to Experiment 2.

Experiment 2 (electric boogaloo) was all about the “need to belong”. Now Dr. Susan Weinschenk has written extensively about this in her book “How To Get People To Do Stuff” as it is one of the 7 drivers of motivation.

The goal of this experiment was to see if they could unconsciously manipulate people into enjoying their interaction with a confederate. After a 15-minute session with a confederate people were asked to report how much they liked the confederate and how smoothly the session went on a 9-point scale, with 1 being extremely awkward or unlikeable, and 9 being extremely smooth or likeable.

The confederates either engaged in neutral nondescript mannerisms, which acted as the control, or the confederate mirrored the mannerisms of the participant.

I think this is a brilliantly evil way to test the theory that people like people who are like them. If a participant folded their hands, the confederate would fold theirs, etc.

The confederates, being talented actors, played their part beautifully. Only 1 person “figured out” that they were being mimicked, and an outside panel of judges was used to rate how open and friendly the confederates were toward the participants.

This is very important to point out: there was NO difference in those ratings between the neutral control and the mimicking condition. It is not the case that the mimickers were being more friendly, making more eye contact, smiling more, or were judged to like the participant more. This was controlled for. So the results are not simply friendly vs. not-friendly.

The results are fun. Participants in the experimental condition reported liking the confederates more (M=6.62) than the control (M=5.91), and also reported a smoother interaction (M=6.76) than the control (M=6.02).

Now there are certainly potential large implications in this study; from politics to sales to friendship. Let’s again put a pin in this and talk about the last Experiment before we sum everything up. The thing to take away from Experiment 2 is that human interactions go smoother and are more positive if you just mimic the movements and actions of the other person.

Experiment 3 was the same as Experiments 1 and 2, except that subjects were also given an empathy questionnaire (a perspective-taking subscale), with items like “When I’m upset with someone, I try to see things from their perspective.”

What they found was that people who were high perspective-takers (highly empathetic) joined in with the face rubbing and foot shaking more times per minute than low perspective-takers (M = 1.30 vs. M = .85, and M = .40 vs. M = .29).

This makes sense. People who are highly empathetic find it easier to feel what you’re feeling. When you feel empathetic the same parts of your brain light up that are lighting up in the brain of the person you’re observing.

If you see someone hurt their leg, a small ghost reflection of mirror neurons in the leg area of your brain will also light up. It’s why stories are so powerful. It happens completely unconsciously. And if you’re the type of person who can more easily slip into that state, then you are more prone to have an unconscious reaction to the external stimuli of others.

Okay so our first pin was that people in Experiment 1 performed the action more when the Confederate was NOT smiling. My theory is that we are always looking for a way to bond unconsciously. We as humans want to relate, we want to connect on whatever level we can. Obviously smiling and laughing together is our natural go-to. But when that’s not available our brains may slide into other ways to bond, such as face rubbing and foot tapping. BECAUSE as we find out in Experiment 2, our second Pin, people like interactions more with people who are mimicking them.

Maybe we know and understand this innately so our brains are one step ahead of the research. Perhaps Experiment 1’s outcome occurred exactly BECAUSE of the results in Experiment 2. We like people more who mimic us. We crave people to like us (unconsciously), and so we (unconsciously) mimic others to the extent we can to get them to like us. It’s an empathy circle.

And those who are the most empathetic also reach out the most to connect, so they mimic unconsciously the most.

There is a long list of practical applications: sorority bonding, moving in unison at concerts or sporting events, people in fields with lots of human interaction (HR, sales, customer service) being more animated and reactive. If you want people to like you and bond with you, try mimicking their energy, behaviors, and mannerisms.

Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception-behavior link and social interaction. Journal of Personality and Social Psychology, 76(6), 893-910. doi:10.1037//0022-3514.76.6.893

Episode 20: What is Economics Useful For?

What is economics good for? I think there’s a lot of confusion as to what it can do and what its limitations are.

The problem we economists face is that we must always have answers, and they must always be accurate. Anything short of that, and people conclude the entire science is bogus.

But economics is only as good as the data it relies on, and data is always imperfect in some way.

The way I like to talk about economics to the general public is that it helps tell you where to look, and also if you’ve found what you’re searching for.

The first part is like advanced geology mapping equipment. Let’s pretend that you’re looking for gold in your back yard. Now you can stumble around blindly and just dig here or there, and depending on how much gold you have you might find some. But economics can point you to where the best spot to dig would be.

Economics achieves this by figuring out which way the data point. That is to say: to maximize profit, should we decrease prices? Well, if you do that, you calculate that you’ll sell more products but make less money per sale. Is it worth it? Economics can give you your answer.

But it’s not perfect because your data isn’t perfect.  Maybe your sales estimation model is off. Maybe reducing your prices doesn’t lead to as many new sales as you thought. Just like how you can miss the gold vein, sometimes you can end up with the wrong result. But it helps you get close. And with more refinement you can often strike something.
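To make the “should we cut prices?” question concrete, here is a minimal sketch of the comparison in Python. Every number in it (the $6 unit cost, the units-sold estimates) is hypothetical; in practice those figures would come from your own, imperfect, sales model.

```python
def profit(price: float, unit_cost: float, units_sold: float) -> float:
    """Profit is the margin per unit times the number of units sold."""
    return (price - unit_cost) * units_sold

# Hypothetical numbers: what a demand model might say happens at each price.
unit_cost = 6.00
keep_price = profit(price=10.00, unit_cost=unit_cost, units_sold=1_000)  # $4,000
cut_price  = profit(price=9.00,  unit_cost=unit_cost, units_sold=1_400)  # $4,200

print(f"Keep the price at $10: ${keep_price:,.0f}")
print(f"Cut the price to $9:   ${cut_price:,.0f}")
print("Worth it" if cut_price > keep_price else "Not worth it")
```

If that 1,400-units estimate is off, the answer flips; that is the imperfect-data caveat in action.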

The second way it can help is to verify what you’ve found. So you pull a strange rock out of the ground. Does it have gold in it?  Economics can help you test if the strategy you’ve discovered is indeed a winning one.

The way economics can tell you what it is that you have is with the magic of p-values. You’ll see them in most econ literature. The p stands for probability: roughly, the probability that you would see an effect at least this big from random chance alone. The lower the p-value, the more confident you can be that an effect is “real”. The higher the p-value, the more likely the result is just the randomness of the data.

A quick example is the fastest way to illustrate the point. You’re flipping a coin, heads or tails. My hypothesis is that the coin is rigged to always land on heads.

You flip the coin and it lands on heads twice in a row. Well, that’s data in the direction of my hypothesis, but you could easily get the same result with a normal coin. So the p-value would be maybe .5: roughly a 50% chance that the flips are just random, and so (yes, economists, I know this isn’t how a p-value is directly calculated, but I’m trying to keep things simple to illustrate the point) roughly a 50% chance that the coin is rigged.

The next two flips are tails. Wow. The chance that a rigged coin would “misfire” twice in a row is pretty unlikely. Our p-value jumps to maybe .99, or 99%. We’re almost certain it’s not a rigged coin based on the data that we have.

Then the next 10 flips are heads. Every single one, right in a row. That is statistically fairly unlikely, but not impossible (a probability of about 0.1%). So our p-value drops to maybe .07. Then the next 10 are heads. Twenty head flips in a row? That’s really very unlikely to be a “localized streak” (a probability of about .0001%). There’s almost certainly some connection between the coin and these flips; it almost certainly can’t be random chance!

For our example let’s assume our p-value falls to .04, or 4%. It is generally accepted in the scientific community that a p-value under 5% is “statistically significant”. That is to say, we’ve crossed a magical threshold. I can tell you with reasonable certainty that something about these flips is indeed rigged. It’s still possible that I’m wrong, but it’s so unlikely that I can say with reasonable certainty that the rigging of the toss is real.

Then we flip heads another 10 times in a row. Well, now we’ve flipped 2 heads, 2 tails, and 30 heads in a row. The chances of a coin landing heads 30 times in a row are astronomically small. I mean like .0000001%. Another way to think about it: you’d expect a run like this to show up about once in roughly a billion coin flips (give or take; it’s hard to keep track of the zeros). We’re talking rare.

So our p-value now drops below .01, maybe to .009, or a .9% chance that the effect is due to chance (in reality it would be much lower with 30 straight heads, but stay with my analogy). We can be almost positive that our results are in fact real. There is something rigged about the tosses. The chance that they are not connected is practically, but not entirely, zero. There is truth to the famous Mac (from IASIP) quote that “with God, all things are possible”. But really, our data suggest we have a fact. Under 1%, or a p-value <.01, is the next generally accepted threshold for scientists. Usually, where 5% gets a * (to mark that it’s significant), 1% gets ** (two stars)!

Okay so let’s flip the coins some more, and let’s in fact say that you flip heads another 70 times.

That’s a run of 100 head flips in a row. The odds become… impossible on a near universal scale. Like. A .00000000000000000000000000007% chance. I mean it’s such a small number it’s insane. In the scientific community we’d just say your p-value is now <.001. This is generally regarded as the last stop and is given *** (3 stars) to denote its statistical significance. There’s generally no point in going smaller because at just a .1% chance of being due to a random streak of data we can say with confidence that this effect is real.

Certain applications of statistics will push for an even lower p-value, but it’s really just to make a point. At <.001 whatever you are trying to prove is a fact.
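The percentages in the coin story are intentionally loose (as noted above), but if you want to see how a real one-sided p-value is computed, here is a minimal sketch in plain Python using the flip counts from the story. It assumes nothing beyond the standard library.

```python
from math import comb

def p_value_at_least(k_heads: int, n_flips: int) -> float:
    """One-sided p-value: the probability that a fair coin produces
    at least k_heads out of n_flips by chance alone."""
    return sum(comb(n_flips, i) for i in range(k_heads, n_flips + 1)) / 2 ** n_flips

# Flip counts from the story above: 2 heads, then 2 tails, then runs of heads.
print(p_value_at_least(2, 2))    # 2 heads in 2 flips   -> 0.25   (easily chance)
print(p_value_at_least(2, 4))    # 2 heads in 4 flips   -> ~0.69  (looks random)
print(p_value_at_least(12, 14))  # after 10 more heads  -> ~0.006 (below .01, **)
print(p_value_at_least(22, 24))  # after 10 more heads  -> ~0.00002 (below .001, ***)
```

The exact numbers differ from the illustrative ones in the story, but the shape is the same: the longer the streak, the smaller the probability that pure chance explains it.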

Here’s another way to think about it. At Lonzo’s birth, LaVar Ball declares that his son is going to play for the Lakers. The chances of anyone playing in the NBA are amazingly tiny, and the chances of playing for one specific team are tinier still. But if LaVar Ball’s statement had a p-value of <.001, then even if Lonzo’s birth played out in 1,001 different universes, he would end up playing for the Lakers in 1,000 of them.

At that point you can just say it’s destiny that Lonzo is in fact going to play for the Los Angeles Lakers. There is a cosmic connection, or a rigged system. It’s not up to chance.

And the same can be said for our coin.

Economics and p-values are powerful tools, and I’m only scratching the surface. There are R-values and t-values and regression analysis to tell you all sorts of fun stuff.

But for the general layperson out there, this is the basis of the power of economics: to give you a general map of what is really going on, and then to test whether the gold you found is really gold and not fool’s gold.

Episode 19: Subjective Probability (Part 2)

I want to take you back to maybe 6th grade math? Ratios! A ratio, just so you remember, is, for example, the number of shots a player makes out of the number attempted in a basketball game, say 9 for 14, or 9/14 (a nice, efficient game).

But ratios really mess people up. Quick: which is better, 71/331 or 42/199? It’s not easy to work out.

A nominee for “paper with the best name,” “Six of One, Half Dozen of the Other” by Burson and Larrick set out to explore the weird human behavior that arises when people are confronted with ratios.

The thing about ratios that messes people up most is comparing two equivalent ratios expressed on different scales. It’s a variation of the Subjective Probability issue we’ve talked about previously: people misjudge the value of proportional increases.

Here’s a simple example to illustrate the point Burson and Larrick were after. Would you rather increase your score from 80 points to 100, or from 4 points to 5? Their hypothesis is that people like the 80-to-100 increase more because, again, gaining 20 points feels better than gaining 1. Of course, the ratios are the same, so it’s an equivalent relative increase in both situations.
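If it helps to see the arithmetic, here is a quick check in Python that the two increases really are the same relative jump, plus an answer to the 71/331 vs. 42/199 question from a few paragraphs back:

```python
# Both score increases are the same 25% relative improvement.
print((100 - 80) / 80)  # 0.25
print((5 - 4) / 4)      # 0.25

# And the shooting-ratio question from earlier in the post:
print(71 / 331)         # ~0.2145
print(42 / 199)         # ~0.2111 -> 71/331 is the (slightly) better ratio
```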

In Study 1, Burson and Larrick had subjects evaluate cell phone plans in the first scenario and a movie plan in the second. Here are the original tables so you can see how it was all set up:

Let me explain. Start with Condition 1 in Table 1. As (I hope) you can see, Plan A and Plan B are slightly different: one is cheaper, but the other has more value. In Condition 2, the plans are EXACTLY THE SAME. This is very important. The only thing that has changed is the scale of the ratios: one attribute is scaled up by a factor of 10; the other is shown as price per year instead of price per month. Again: they are exactly the same plans.

The same happens in Table 2 with the movies. Plan A is cheaper, but Plan B has more movies. It’s a reasonable tradeoff. In Condition 2, the ONLY thing that is different is that the number of movies is expressed per year instead of per week.

There should be no preference for one plan over another. Preference for Plan A should be the same whether it’s the price per month or the price per year, right? It’s all the same.

Well, framing is everything. For cell phones, Plan B (the cheaper plan) was preferred 53% to 31% when the prices were expressed per year, most likely because the price difference looks bigger ($60 per year instead of $5 per month).

Meanwhile, Plan A (the higher quality plan) was preferred 69% to 23% when dropped calls were expressed per 1,000 calls instead of per 100, which makes the difference in dropped calls look much bigger.

For the movie plan there was the same result. The only variable that changed was number of movies per week vs. per year (the price stayed monthly). People preferred Plan A (the cheaper plan) 57% to 33% when the number of movies was given per week, because the difference between 7 and 9 is small.

But people preferred Plan B (the higher quality plan) 56% to 38% when the number of new movies was given yearly.

The bottom line from Study 1: framing is important, and people will think that bigger numerical differences create a relatively bigger movement, even when the ratio is exactly the same. This is a tried and true marketing technique: “For only $3 a day you could have premium life insurance” is used instead of “For only $1,080 a year you could have premium life insurance”.

The other classic example: “Only 4 easy payments of $22.95”.

To sum up in a slightly different way: expanding an attribute (expressing it with bigger numbers) makes differences on that attribute loom larger, and preferences follow.

Burson and Larrick didn’t stop there. Study 2 re-examined the issue by asking participants what they would be willing to pay.

Participants were again exposed to different movie plans. They were given what an “average plan” costs and how many new movies it delivers per week, as well as a “target plan” (i.e., the plan the researchers actually cared about) that listed only the number of movies per week. The price was left blank, and subjects were asked to fill it in with what they would be willing to pay.

For example, in Condition 1, the average plan gave you 9 new movies per week for a price of $12/month. If you were to only get 7 movies per week what would you be willing to pay? The average by the way was $9.20 which feels fairly reasonable.

A quick note. This technique is pretty standard for behavioral economists. What we call the “willingness to pay” is a great way to measure how attractive an option is. If the willingness to pay goes up, then the offer must have become more attractive.

There were four Conditions in Study 2.

Two gave the number of new movies per week (per my example in Condition 1). One had a target plan with fewer new movies per week than the average plan, and one had a target plan with more new movies per week than the average plan.

Conditions 3 and 4 were identical to Conditions 1 and 2, except the numbers of new movies were given per year rather than per week.

Again: the plans and their costs are identical. The ONLY thing that has changed in Conditions 3 and 4 is that the number of new movies is now being expressed on a yearly basis.

The goal was to see if there is a difference in what people are willing to pay. The plans are the same. People should pay the same for the same number of movies, whether they’re given per week or per year. It’s the same number of movies! The price is even the same for goodness sake.

Results?

This graph is a little hard to read. The two dots on the left are the plans given with movies per week (Conditions 1 and 2). The dot at the bottom left, $9.20, is the plan we alluded to before, the one with fewer movies than the average plan (Condition 1). The dot at the top left, $11.55, is the plan with more movies than the average plan. Obviously people should pay more for the plan with more movies compared to the average, and that’s what they do.

What gets very interesting is when you take the EXACT same plans, and just expand them to the number of movies per year, which is what the dots on the right are. They should be the same price! It’s silly to expect people to pay less, or more, for the same number of movies but that’s exactly what happens.

The average willingness to pay for the lower movie plan drops to $8.83 when expressed in movies per year, and the willingness to pay for the higher movie plan bumps up to $13.82.

These are considerable movements. While I would not assume you can achieve this exact level of change in your application or organization, the researchers here got about a 4% drop in willingness to pay for the cheaper plan when its movie count was expressed annually, and about a 20% increase for the better plan.
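Those percentages come straight from the willingness-to-pay numbers above; here is the quick check:

```python
# Willingness-to-pay figures reported in Study 2 (from the text above).
weekly_low,  yearly_low  = 9.20, 8.83    # plan with fewer movies than average
weekly_high, yearly_high = 11.55, 13.82  # plan with more movies than average

drop = (weekly_low - yearly_low) / weekly_low     # ~0.04
gain = (yearly_high - weekly_high) / weekly_high  # ~0.20

print(f"Cheaper plan loses about {drop:.0%} of its perceived value when framed yearly")
print(f"Better plan gains about {gain:.0%} in perceived value when framed yearly")
```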

I will quote the paper’s final conclusions:

“Attribute expansion inflated the perceived difference between two alternatives on that attribute, and thereby increased its weight relative to the other attributes.”

So big takeaways:

When you’re comparing your product to the “average” competitor and your product is better than average in a category, make that interval of time as big as possible to maximize the number, and therefore the benefit.

A great example is what student loan companies do. I get letters in the mail from the SoFis of the world that say you could save $40,000 today! That number is huge! Of course, they get that by comparing an early payoff over 10 years with the minimum payments you’d make on federal student loans over 30. They’ve stretched the window for savings as far as possible to maximize the benefit, and it certainly makes a huge impression.

If you or your customer is comparing your product to the average and your product is worse than average in a category, make that interval of time as small as possible to minimize the number, and therefore the difference of the negative attribute.

If your product is $2,880 per year and your competitor’s product is $2,520, don’t use annual prices. Instead, say “they” are $7 per day but have no features, while your product is only $8 per day, just one extra dollar, but comes with this whole list of expanded features!
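As a sanity check on that per-day math, here is a small sketch that reframes the same two annual prices over different time windows (the prices are the hypothetical ones from the example above):

```python
# Reframe the same annual price over different time windows.
def reframe(annual_price: float) -> dict[str, float]:
    return {
        "per year":  annual_price,
        "per month": annual_price / 12,
        "per week":  annual_price / 52,
        "per day":   annual_price / 365,
    }

ours, theirs = reframe(2880.00), reframe(2520.00)
for window in ours:
    gap = ours[window] - theirs[window]
    print(f"{window:>9}: ours ${ours[window]:8,.2f}   "
          f"theirs ${theirs[window]:8,.2f}   gap ${gap:7,.2f}")
# The $360-a-year gap shrinks to roughly a dollar a day when framed daily.
```

The plans themselves never change; only the size of the number in front of the customer does, which is exactly the attribute-expansion effect Burson and Larrick measured.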

We’ll talk a lot more about segmentation later. But this is another great example of how framing and segmenting work. Give it a try. It’s all about the numbers.

Burson, K. A., Larrick, R. P., & Lynch, J. G. (2009). Six of One, Half Dozen of the Other. Psychological Science, 20(9), 1074-1078. doi:10.1111/j.1467-9280.2009.02394.x

Human Factors in Healthcare: An interview with Russ Branaghan


Every day around the world thousands of people receive medical treatment. They, or their health care practitioner, are using a medical device: an X-ray machine, a pacemaker, a medication infusion pump… So how well designed is that medical device? Did a human factors expert work on the design to help make it error-proof? How can you prevent human error in the use of the device?

In this episode of Human Tech we speak with Russ Branaghan. Russ has a Ph.D. and has worked for decades as a human factors engineer specializing in healthcare. He is President of Research Collective, a human factors and UX consulting firm, and the author of Humanizing Healthcare — Human Factors for Medical Device Design, which was published in February of 2021.

Humanizing Healthcare — Human Factors for Medical Device Design (ISBN 9783030644321)

We talk about what it’s like to design medical devices from a human factors point of view. Also in this episode, Russ offers career advice to anyone who’s interested in getting into the field.

To reach Russ you can email him at russ@research-collective.com

and here is a link to the website of his company: www.research-collective.com