I hope you’ve heard of System 1 and System 2 thinking, an idea popularized by Daniel Kahneman. System 1 is our normal state of brain activity: watching TV, driving, looking at a picture of a sad face. It’s simple, effortless, and our favorite mode to be in. System 2 is heavy thinking, such as solving a tough math problem or taking the bar exam to become a lawyer (which this author did and passed, so there). It’s hard, uncomfortable, and actually burns more calories. It’s literally more work.

The idea that there are two different processing systems in the brain is not new. And it’s probably a much better analogy for how the brain works than the traditional “the brain is a computer” metaphor, which was never very accurate.

In 1992, building on the same two-system intuition, Kirkpatrick and Epstein proposed another way of thinking about these networks in their paper “Cognitive-experiential self-theory and subjective probability: Further evidence for two conceptual systems.”

They proposed that there are two modes of processing information: an experiential conceptual system and a rational conceptual system. Let me try to simplify this.

The first mode is the experiential conceptual system. Note, this is not *experimental*; it’s *experiential*, meaning based on what is observed or perceived. Our experiential system encodes information as “concrete representations” (thanks, BEGUIDE 2016). Take this mind journey with me:

Think of a door alone in a long hallway. A single closed door in an empty space.

Through the magic of the brain, you have conjured up an image of a door. You can see its color, how it opens. The space around it. It’s a physical object.

In your mind journey keep thinking about the door, but walk closer. Get so close to the door you can almost smell it. Lean up close to it right before you touch it, and blow softly on it.

I’ll bet your brain made a solid door. Your breath didn’t go through. It’s a real object in your mind.

In the cognitive-experiential self-theory you’ve used your experiential conceptual system to create something observable; it’s an object.

Now instead let’s put you in front of a tricky math problem you have to solve by hand. Say (47*16)/19.

I want you to visualize the answer. What is it? Well, unless you’re an autistic savant, you can’t visualize the answer right away. You can’t “see” the answer the way you can see the door, because you’re using a different system. You have to use the rational conceptual system: remember your math facts and the strategies for multiplication and long division. It’s a different system. It feels different.
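For what it’s worth, here is the whole job your rational system is being asked to do, sketched in a couple of lines of Python:

```python
# The arithmetic your rational system has to grind through by hand:
# multiply first, then long-divide.
numerator = 47 * 16         # 752
answer = numerator / 19     # 752 / 19 ≈ 39.58
print(numerator, round(answer, 2))  # → 752 39.58
```

Trivial for a machine, effortful for a brain. That asymmetry is exactly the point.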

Kirkpatrick and Epstein wanted to see if any weird human brain stuff went on when humans had to switch between the two systems. So here’s the experiment they set up (for you purists, I’m skipping to Experiment 3 in their study):

There were two bowls with red and white jelly beans. One was the Big Bowl that had 100 jelly beans, and one was the Small Bowl with only 10 jelly beans.

They set up a game where if you randomly pick a jelly bean and it’s red, you win some money (like $4); but if it’s white you win nothing.

They then put their subjects into one of four conditions. Condition 1 had (and told subjects) a 10% win rate. That means 10 red and 90 white jelly beans in the Big Bowl, and 1 red and 9 white jelly beans in the Small Bowl.

The odds are the same either way: 10 winners out of 100 beans, or 1 winner out of 10 beans, a 10% chance of winning from either bowl.

Condition 2 had (and told subjects) a 90% win rate: 9 red to 1 white in the Small Bowl, and again 90 red to 10 white in the Big Bowl.

Again, the odds are the same: 90 out of 100, or 9 out of 10.

Conditions 3 and 4 were the same as Conditions 1 and 2, except the odds were framed as losing. Condition 3 had a 10% lose rate (so the odds and bowls were the same as Condition 2, 9/1 and 90/10), and Condition 4 had a 90% lose rate (so the odds and bowls were the same as Condition 1, 1/9 and 10/90).
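To make the setup concrete, here is a small Python sketch of the four conditions. The bean counts come straight from the study as described above; the dictionary layout and names are just mine:

```python
# (red, white) bean counts for the Small Bowl and the Big Bowl in each condition.
conditions = {
    "Condition 1 (10% win)":  [(1, 9), (10, 90)],
    "Condition 2 (90% win)":  [(9, 1), (90, 10)],
    "Condition 3 (10% lose)": [(9, 1), (90, 10)],
    "Condition 4 (90% lose)": [(1, 9), (10, 90)],
}

for name, (small, big) in conditions.items():
    p_small = small[0] / sum(small)   # win probability, Small Bowl
    p_big = big[0] / sum(big)         # win probability, Big Bowl
    assert p_small == p_big           # identical in every condition
    print(f"{name}: P(win) = {p_small:.0%} from either bowl")
```

Every condition passes the assertion: within a condition, the two bowls always offer the same probability of drawing a red bean.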

Subjects were then put in front of the Big Bowl and the Small Bowl and could decide which bowl they wanted to bet on. Here’s the important thing to remember: THE ODDS IN THE BOWLS ARE EXACTLY THE SAME. In every condition the odds for the Big Bowl and the Small Bowl are identical. It’s just that the Big Bowl has 10x the number of jelly beans.

Statistically it makes NO DIFFERENCE which bowl you bet on. If you gave this problem to a computer (and perhaps this is a great question for my Turing Test, to see if you’re an AI or a human), it would be indifferent, picking the Big or Small Bowl 50/50. The odds are the same. You make no more or less money betting on one over the other.
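You can check this with a quick Monte Carlo sketch. The $4 payout is from the study description above; the trial count and seed are arbitrary choices of mine:

```python
import random

random.seed(0)

def expected_payout(red, white, payout=4.0, trials=200_000):
    """Draw one bean at random per trial; a red bean pays out, a white one doesn't."""
    wins = sum(random.random() < red / (red + white) for _ in range(trials))
    return payout * wins / trials

# Low-odds condition: 1 red of 10 in the Small Bowl vs. 10 red of 100 in the Big Bowl.
small = expected_payout(red=1, white=9)
big = expected_payout(red=10, white=90)
print(f"Small Bowl: ${small:.2f}  Big Bowl: ${big:.2f}")  # both land near $0.40
```

Run it and both bowls converge on the same expected payout of about $0.40 per draw. There is no rational reason to prefer one bowl over the other.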

So that’s what people did right? Of course not!

When presented with *low* odds of winning (the 10% win, or 90% lose conditions), about 75% of people chose to bet in the Big Bowl (73.1% for 90% lose and 76.9% for 10% win).

Conversely, when presented with *high* odds of winning (the 90% win or 10% lose conditions), only about a third chose to bet in the Big Bowl (30.8% for the 10% lose condition, and 36.5% for the 90% win condition).

When presented with low odds of winning, most people wanted to gamble on a Big Bowl with lots of jelly beans, but when presented with high odds of winning, most people wanted to gamble on a Small Bowl with very few jelly beans.

This provides strong support for the theory that there are two different systems. Rationally, we know the odds are the same, but then our experiential system kicks in. To quote the BEGUIDE 2016: “our experiential system – unlike the rational system – encodes information as concrete representations, and absolute numbers are more concrete than ratios or percentages.”

When we’re faced with a simple, ratio-based math problem we use our rational system. But when we’re standing in front of bowls of jelly beans it’s not 90%; it’s 9 out of 10. That kicks us into experiential mode.

9 out of 10 feels like an almost-sure win; it’s really concrete. Our brains tell us we want the Small Bowl because there are “fewer” chances to lose: there’s only one loser jelly bean! We only have to avoid one bad bean, while in the Big Bowl we have to avoid 10. Your brain says, “oh, 1 is smaller than 10, that feels better, bet on that.” And this happens even while the rational system tells you they’re the same.

We walk around in non-rational, experiential mode, so people bet the small bowl.

Conversely, when there’s only a 1-in-10 chance of winning, oh man, there’s only one winner jelly bean in the whole Small Bowl. I’d rather have 10 chances of winning, and the Big Bowl has 10 winner jelly beans, and 10 is more than 1, so let’s bet in the Big Bowl.

Even while the rational system says they’re the same.

People go with their feelings.

Takeaways, then. Welp. It’s another nail in the coffin of human rational decision making. If you want people to feel better about making a choice with low odds of success, they’ll feel better if there are lots of possible winners, even if there are proportionally just as many chances to lose.

Conversely, if you want people to feel better about making a choice that has high odds of success, minimize the number of losing tickets, even if that means reducing the number of winning tickets. People feel much better when they see numerically only one losing ticket.

Kirkpatrick, L. A., & Epstein, S. (1992). Cognitive-experiential self-theory and subjective probability: Further evidence for two conceptual systems. *Journal of Personality and Social Psychology*, *63*(4), 534-544. https://doi.org/10.1037/0022-3514.63.4.534

http://www.behavioraleconomics.com/BEGuide2016.pdf