Episode 20: What is Economics Useful For?

What is economics good for? I think there’s a lot of confusion as to what it can do and what its limitations are.

The problem we economists face is the expectation that we must always have answers, and that those answers must always be accurate. Anything short of that, and people conclude the entire science is bogus.

But economics is only as good as the data it relies on, and data is always imperfect in some way.

The way I like to describe economics to the general public is that it helps tell you where to look, and then whether you've found what you're searching for.

The first part is like advanced geological mapping equipment. Let's pretend you're looking for gold in your backyard. You could stumble around blindly and dig here or there, and depending on how much gold is buried there, you might find some. But economics can point you to the best spot to dig.

Economics achieves this by figuring out which way the data point. That is to say: to maximize profit, should we decrease prices? If you do that, you might calculate that you'll sell more products but make less money per sale. Is it worth it? Economics can give you an answer.
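To make that tradeoff concrete, here's a minimal sketch in Python. All of the prices, unit costs, and sales estimates below are hypothetical numbers invented purely for illustration, not from any real dataset:

```python
# Hypothetical question: does a price cut raise or lower total profit?
unit_cost = 4.00

# Current pricing: sell 1,000 units at $10 each.
current_profit = (10.00 - unit_cost) * 1_000

# Proposed: cut the price to $8 and (per our sales model) sell 1,400 units.
proposed_profit = (8.00 - unit_cost) * 1_400

print(current_profit)   # 6000.0
print(proposed_profit)  # 5600.0
print("Cut the price" if proposed_profit > current_profit else "Keep the price")
```

With these made-up numbers the bigger per-sale margin wins, so you keep the price; with a stronger sales response (say, 1,600 units) the answer flips. That's exactly the kind of direction-finding described above, and it's only as good as the sales model feeding it.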

But it's not perfect, because your data isn't perfect. Maybe your sales estimation model is off. Maybe reducing your prices doesn't lead to as many new sales as you thought. Just like you can miss the gold vein, sometimes you end up with the wrong result. But economics helps you get close, and with more refinement you can often strike something.

The second way it can help is to verify what you've found. So you pull a strange rock out of the ground. Does it have gold in it? Economics can help you test whether the strategy you've discovered is indeed a winning one.

The way economics tests what you have is with the magic of p-values. You'll see them in most econ literature. The p stands for probability: roughly, the probability that the effect you are seeing is due to random chance alone. The lower the p-value, the more confident you can be that an effect is "real." The higher the p-value, the more likely the result is just the randomness of the data.

A quick example is the fastest way to illustrate the point. You’re flipping a coin, heads or tails. My hypothesis is that the coin is rigged to always land on heads.

You flip the coin and it lands on heads twice in a row. That's data in the direction of my hypothesis, but you could easily get the same result with a normal coin. So the p-value would be maybe .5: a 50% chance that ordinary random flips could produce this result (yes, economists, I know this isn't how a p-value is actually calculated, but I'm keeping things simple to illustrate the point).

The next two flips are tails. Wow. The chance that a rigged coin would "misfire" twice in a row is pretty small. Our p-value jumps to maybe .99, or 99%. Based on the data we have, we're almost certain the coin isn't rigged.

Then the next 10 flips are heads. Every single one, right in a row. That is statistically unlikely, but not impossible (a probability of about .1%). So our p-value drops to maybe .07. Then the next 10 are also heads. Twenty head flips in a row? That's very unlikely to be a "localized streak" (a probability of about .0001%). There's almost certainly some connection between the coin and these flips; it almost certainly can't be random chance!

For our example, let's assume our p-value falls to .04, or 4%. It is generally accepted in the scientific community that a p-value under 5% is "statistically significant." That is to say, we've crossed a magical threshold. I can tell you with reasonable certainty that something about these flips is indeed rigged. It's still possible that I'm wrong, but it's so unlikely that I can say with reasonable certainty that the rigging of the toss is real.
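For the curious, an exact tail probability for the running example can be computed directly. At this point in the story the tally is 22 heads out of 24 flips; the sketch below (plain Python, no external libraries) computes the chance that a fair coin produces at least that many heads. The .04 in the story is deliberately illustrative; the exact number is far smaller, which only strengthens the point.

```python
from math import comb

def tail_prob(heads: int, flips: int) -> float:
    """P(at least `heads` heads in `flips` flips of a fair coin)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

p = tail_prob(22, 24)
print(p)  # ~1.8e-05 -- far below the .05 significance threshold
```

This is the logic behind an exact binomial test: count every outcome at least as extreme as the one observed, and divide by the total number of equally likely outcomes.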

Then we flip heads another 10 times in a row. Now we've flipped 2 heads, 2 tails, and 30 heads in a row. The chance of a coin landing heads 30 times in a row is astronomically small, something like .0000001%. Another way to think about it: you could expect a run like this if you did a series of about 1,000,000,000 coin flips. We're talking rare.

So our p-value now drops below .01, maybe to .009: a .9% chance that the effect is due to randomness (in reality it would be far lower with 30 straight heads, but stay with my analogy). We can be almost positive that our results are real. There is something rigged about the tosses. The chance that they are not connected is practically, but not entirely, zero. There is truth to the famous Mac (from IASIP) quote that "with God, all things are possible." But really, our data suggest we have a fact. Under 1%, or a p-value < .01, is the next generally accepted threshold for scientists: where 5% gets one star (*) to mark significance, 1% gets two (**)!

Okay, so let's flip the coin some more, and let's say you flip heads another 70 times.

That's a run of 100 head flips in a row. The odds become impossible on a near-universal scale: a .00000000000000000000000000007% chance. It's such a small number it's insane. In the scientific community we'd just say your p-value is now < .001. This is generally regarded as the last stop and gets *** (three stars) to denote its statistical significance. There's generally no point in going smaller, because at just a .1% chance of being a random streak of data, we can already say with confidence that the effect is real.
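The run probabilities quoted throughout this example are just powers of one half, so they're easy to check yourself. A quick sketch:

```python
# Probability (as a percent) that a fair coin lands heads n times in a row.
for n in (10, 20, 30, 100):
    pct = 0.5**n * 100
    print(f"{n} heads in a row: {pct:.3g}%")
```

This prints roughly 0.0977%, 9.54e-05%, 9.31e-08%, and 7.89e-29%: about .1% for 10 heads, .0001% for 20, .0000001% for 30, and the absurdly tiny figure above for 100.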

Certain applications of statistics will push for an even lower p-value, but it's really just to make a point. At < .001, whatever you are trying to prove is about as close to a fact as statistics gets.

Here's another way to think about it. At Lonzo Ball's birth, LaVar Ball declares that his son is going to play for the Lakers. The chance of anyone playing in the NBA is amazingly tiny, and the chance of playing for one specific team is tinier still. But if LaVar's statement had a p-value of < .001, then even if Lonzo were born in 1,000 different universes, he would play for the Lakers in at least 999 of them.

At that point you can just say it’s destiny that Lonzo is in fact going to play for the Los Angeles Lakers. There is a cosmic connection, or a rigged system. It’s not up to chance.

And the same can be said for our coin.

Economics and p-values are powerful tools, and I'm only scratching the surface. There are R-squared values, t-statistics, and regression analysis to tell you all sorts of fun stuff.

But for the general layperson out there, this is the basis of the power of economics: to give you a general map of what is really going on, and then to test whether the gold you found is real gold, and not fool's gold.

Episode 19: Subjective Probability (Part 2)

I want to take you back to maybe 6th grade math. Ratios! A ratio, just so you remember, is, for example, the number of shots made by a player in a basketball game: say, 9 for 14, or 9/14 (a nice efficient game).

But ratios really mess people up. Quick: which is bigger, 71/331 or 42/199? It's not easy to tell at a glance.
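A computer, of course, has no trouble: convert each ratio to a decimal and compare. A two-line sketch:

```python
# Convert each ratio to a decimal to compare them directly.
a, b = 71 / 331, 42 / 199
print(round(a, 4), round(b, 4))  # 0.2145 0.2111
print("71/331 is bigger" if a > b else "42/199 is bigger")
```

The gap is under half a percentage point, which is exactly why our intuition struggles with it.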

A nominee for best paper name, "Six of One, Half Dozen of the Other" by Burson and Larrick set out to examine the weird human behavior that arises when people are confronted with ratios.

The thing about ratios that messes people up the most is comparing two equal ratios expressed on different scales. It's a variation of the Subjective Probability issue we've talked about previously: people misjudge the value of proportional increases.

Here's a simple example to illustrate the effect Burson and Larrick were investigating. Would you rather increase your score from 80 points to 100, or from 4 points to 5? Their hypothesis was that people like the 80-to-100 increase more because, again, increasing by 20 points feels better than increasing by 1. Of course, the ratios are the same, so both are equivalent relative increases.

In Study 1, Burson and Larrick had subjects evaluate cell phone plans in the first scenario and a movie plan in the second. Here are the original tables so you can see how it was all set up:

Let me explain. Start with Condition 1 in Table 1. As you can see, Plan A and Plan B are slightly different: one is cheaper, but the other has more value. In Condition 2, the plans are EXACTLY THE SAME. This is very important. The only thing that has changed is the scale of the ratios: one attribute is scaled up by a factor of 10, and the other is expressed as price per year instead of price per month. Again: they are exactly the same.

The same happens in Table 2 with the movies. Plan A is cheaper, but Plan B has more movies: a reasonable tradeoff. In Condition 2, the ONLY thing that is different is that the number of movies is expressed per year instead of per week.

There should be no preference for one plan over another. Preference for Plan A should be the same whether it’s the price per month or the price per year, right? It’s all the same.

Well, framing is everything. For cell phones, Plan B (the cheaper plan) was preferred 53% to 31% when it had a lower price per year, most likely because the difference in price looks bigger ($60 per year instead of $5 per month).

Meanwhile, Plan A (the higher quality plan) was preferred 69% to 23% when it had many more dropped calls… per 1,000 instead of per 100.

The movie plans showed the same pattern. The only variable that changed was the number of movies per week vs. per year (the price stayed monthly). People preferred Plan A (the cheaper plan) 57% to 33% when the number of movies was given per week, because the difference between 7 and 9 is small.

But people preferred Plan B (the higher quality plan) 56% to 38% when the number of new movies was given yearly.

The bottom line from Study 1: framing matters, and people think bigger numerical differences represent relatively bigger movements, even when the ratio is exactly the same. This is a tried-and-true marketing technique: "For only $3 a day you could have premium life insurance" is used instead of "For only $1,095 a year you could have premium life insurance."
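The two framings are the same arithmetic; using a 365-day year, a quick sketch shows how the per-day framing shrinks the number:

```python
# The same premium, framed per day vs. per year (365-day year assumed).
daily_price = 3.00
annual_price = daily_price * 365

print(f"Only ${daily_price:.2f} a day!")     # Only $3.00 a day!
print(f"Only ${annual_price:,.2f} a year!")  # Only $1,095.00 a year!
```

Same product, same total cost; the only thing that changes is which number the customer is asked to react to.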

The other classic example: “Only 4 easy payments of $22.95”.

To sum up in a slightly different way: as the scale of an attribute expands, so does its influence on preference.

Burson and Larrick didn’t stop there. Study 2 re-examined the issue by asking participants what they would be willing to pay.

Participants were again shown different movie plans. They were given what an "average plan" costs and how many new movies it includes per week, as well as a "target plan" (the researchers' real interest) that only listed the number of movies per week. The price was left blank, and subjects were asked to fill in what they would be willing to pay.

For example, in Condition 1, the average plan gave you 9 new movies per week for $12/month. If you were to get only 7 movies per week, what would you be willing to pay? The average answer, by the way, was $9.20, which feels fairly reasonable.

A quick note. This technique is pretty standard for behavioral economists. What we call the “willingness to pay” is a great way to measure how attractive an option is. If the willingness to pay goes up, then the offer must have become more attractive.

There were four Conditions in Study 2.

Two gave the number of new movies per week (per my example in Condition 1). One had a target plan with fewer new movies per week than the average plan, and one had a target plan with more new movies per week than the average plan.

Conditions 3 and 4 were identical to Conditions 1 and 2, except the numbers of new movies were given per year, not per week.

Again. The plans and their costs are identical. The ONLY thing that has changed in Condition 3 and 4 is that the number of new movies is now being expressed on a yearly basis.

The goal was to see if there is a difference in what people are willing to pay. The plans are the same. People should pay the same for the same number of movies, whether they’re given per week or per year. It’s the same number of movies! The price is even the same for goodness sake.

Results?

This graph is a little hard to read. The two dots on the left are the plans given with movies per week (Conditions 1 and 2). The dot at the bottom left, $9.20, is the plan we mentioned before, with fewer movies than the average plan (Condition 1). The dot at the top left, $11.55, is the plan with more movies than the average plan. Obviously people should pay more for the plan with more movies than average, and that's what they do.

What gets very interesting is when you take the EXACT same plans, and just expand them to the number of movies per year, which is what the dots on the right are. They should be the same price! It’s silly to expect people to pay less, or more, for the same number of movies but that’s exactly what happens.

The average willingness to pay for the lower movie plan drops to $8.83 when expressed in movies per year, and the willingness to pay for the higher movie plan bumps up to $13.82.

These are considerable movements. While I wouldn't assume you can achieve this exact level of change in your own application or organization, the researchers got roughly a 4% drop in willingness to pay for the lower-quality plan when it was expressed annually, and roughly a 20% increase for the higher-quality plan.
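Those relative movements can be checked directly from the willingness-to-pay numbers reported above:

```python
def pct_change(before: float, after: float) -> float:
    """Relative change from `before` to `after`, in percent."""
    return (after - before) / before * 100

low_plan = pct_change(9.20, 8.83)     # lower-quality plan, weekly -> yearly framing
high_plan = pct_change(11.55, 13.82)  # higher-quality plan, weekly -> yearly framing
print(f"{low_plan:.1f}%")   # -4.0%
print(f"{high_plan:.1f}%")  # 19.7%
```

A roughly 4% drop on one plan and a nearly 20% jump on the other, from nothing but a change of units.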

I will quote the paper’s final conclusions:

“Attribute expansion inflated the perceived difference between two alternatives on that attribute, and thereby increased its weight relative to the other attributes.”

So big takeaways:

When you're comparing your product to the "average" competitor and your product is better than average in a category, make the interval of time as big as possible to maximize the number, and therefore the apparent benefit.

A great example is what student loan companies do. I get letters in the mail from the SoFis of the world saying you could save $40,000 today! That number is huge! Of course, they get it by comparing an early payoff in 10 years against the minimum average payments you'd make on federal student loans over 30. They've stretched the savings window as far as possible to maximize the benefit, and it certainly makes a huge impression.

If you or your customer is comparing your product to the average and your product is worse than average in a category, make that interval of time as small as possible to minimize the number, and therefore the difference of the negative attribute.

If your product is $2,880 per year and your competitor's is $2,520, don't use annual prices. Instead, say "they" are about $7 per day but have no features, while your product is only about $8 per day, just one extra dollar, but comes with a whole list of expanded features!
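Using a 365-day year, the division works out like this (the ~$8 and ~$7 figures in the example are rounded):

```python
# Reframe annual prices as daily prices to shrink the perceived gap.
yours, theirs = 2880, 2520

print(round(yours / 365, 2), round(theirs / 365, 2))   # 7.89 6.9
print("Annual gap:", yours - theirs)                    # Annual gap: 360
print("Daily gap:", round(yours / 365 - theirs / 365, 2))  # Daily gap: 0.99
```

The same $360-a-year difference shrinks to about a dollar a day, which is the whole trick: minimize the interval to minimize the number on the attribute where you're worse.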

We’ll talk a lot more about segmentation later. But this is another great example of how framing and segmenting work. Give it a try. It’s all about the numbers.

Burson, K. A., Larrick, R. P., & Lynch, J. G. (2009). Six of one, half dozen of the other: Expanding and contracting numerical dimensions produces preference reversals. Psychological Science, 20(9), 1074–1078. doi:10.1111/j.1467-9280.2009.02394.x

Human Factors in Healthcare: An interview with Russ Branaghan


Every day around the world, thousands of people receive medical treatment. They, or their health care practitioners, are using a medical device: an X-ray machine, a pacemaker, a medication infusion pump… So how well designed is that medical device? Did a human factors expert work on the design to help make it error-proof? How can you prevent human error in the use of the device?

In this episode of Human Tech we speak with Russ Branaghan. Russ has a Ph.D. and has worked as a human factors engineer specializing in healthcare for decades. He is President of Research Collective, a human factors and UX consulting firm, and the author of Humanizing Healthcare — Human Factors for Medical Device Design, published in February 2021.


We talk about what it's like to design medical devices from a human factors point of view. Also in this episode, Russ offers to give career advice to anyone interested in getting into the field.

To reach Russ you can email him at russ@research-collective.com

and here is a link to the website of his company: www.research-collective.com

Humans Calculate By Feel on the Human Tech Podcast


In this Human Tech podcast episode we talk about behavioral economics, specifically the idea that people don't calculate the value of products and services rationally; instead, they follow how they feel about what something is worth. Guthrie walks us through the research and the practical implications.

East vs. West Design: A conversation with Sydney Anh Mai


In this Human Tech podcast episode we talk with Sydney Anh Mai from Kickstarter about differences in design between the East and the West. 

Sydney grew up in Vietnam, then came to the US for school and work. Now a Product Designer at Kickstarter, she talks with us about product design, interaction design, and cultural differences.

We Overestimate Our Own Knowledge, the latest episode on the Human Tech Podcast


If you ask someone how much they know about a particular topic, they tend to overestimate their own knowledge. And we tend to rely on our social network to fill in our knowledge gaps.

This illusion about how much we know is the topic of the latest Human Tech podcast episode, where we talk with Drs. Steve Sloman (Brown University) and Phil Fernbach (University of Colorado) who wrote the book, The Knowledge Illusion: Why We Never Think Alone.

Dean Barker, VP for UX at United Health Group on the Human Tech Podcast


On this episode of Human Tech we talk with Dean Barker, who is the VP for User Experience at United Health Group and is also a longtime friend and colleague. We talk about the challenges of finding UX staff, the challenges of doing UX work at a large corporation, and some of the past projects Dean and I worked on together over the years.


Are Virtual Meetings Stressing You Out?

These days my workdays often consist of one online meeting after another. Sometimes I have long streams of back to back meetings with less than 30 seconds in between. Sound familiar?

So on a call with a client last week, I asked if we could all change our meeting settings so that the default is a 50-minute meeting, not a 60-minute meeting. That would give us all time to refill the water bottle, grab a bite to eat, go to the restroom, or get up and do some stretching…

So far, though, I haven't received any 50-minute meeting requests. They are all 30 minutes (and then back to back) or 60 minutes and back to back.

Anyone tried this? Does it work? Does it help? What other ideas have you tried to make these virtual meeting days less stressful and healthier for our mental and physical well-being?

Authors Karl Fast and Stephen Anderson on “Figure It Out”


WOW, WOW, WOW.

If your job involves designing anything, or communicating information to others, then I think you need to read this book.

On this episode of Human Tech we interview Karl Fast and Stephen Anderson about their recently published book:  Figure It Out: Getting from Information to Understanding.

On the episode I act like a fan girl at the beginning, but really, the book impressed me that much. It may change the way you think about thinking and how people process information. It should change the way you present and share any kind of information, whether text, visual, digital or physical.

It's not a "quick bites" type of read. It's fairly substantial, but it's so well written, with lots and lots of examples, that I recommend it to everyone involved in any kind of information design or communication.

The publisher, Rosenfeld, has a coupon code for us. Go to this webpage:

https://rosenfeldmedia.com/books/figure-it-out/

and when you are checking out use this code: humantechfigure0820

for 20% off through October 31, 2020.
