Gerd Gigerenzer on Gut Feelings
Dec 2 2019

Psychologist and author Gerd Gigerenzer of the Max Planck Institute for Human Development talks about his book Gut Feelings with EconTalk host Russ Roberts. Gigerenzer argues for the power of simple heuristics--rules of thumb--over more complex models when making real-world decisions. He argues that many results in behavioral economics that appear irrational can be understood as sensible ways of coping with complexity.

Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Mauricio Lema
Dec 2 2019 at 12:46pm

In Ending Medical Reversal, Prasad and Cifu state that many well-accepted medical practices (“medical advances”) do not hold up when examined through randomized clinical trials (RCTs). Framed in (pseudo) behavioral economics we could say: the results of RCTs are the “thinking slow,” and the so-called “medical advances” being tested are the “thinking fast” (Kahneman). But Gigerenzer gives us hope for our gut feelings (“thinking fast,” of sorts). Could it be that the early success of those “medical advances” (later rejected by RCTs) were indeed medical advances for the initial patients who were the recipients of them, because the physicians who recommended them had an (appropriate) “gut feeling” about them?

If that is the case, the RCT failed to capture those patients who would derive benefit from these TRUE “medical advances,” and maybe we should explore how to identify which patients would benefit (that is, identify the “gut feelings” of the physicians who began the “medical advances”)?

Thank you again for another MAGNÍFICO PROGRAMA!!

 

John P.
Dec 2 2019 at 1:00pm

This interview was interesting in light of the comments about IQ posted to last week’s podcast.  I think I have a better sense now of why a Robertsian approach to social-problem solving might be cool to the use of cognitive ability as an explanatory variable.  First is the searching-for-your-keys-under-the-streetlight problem — i.e., because IQ tests give us measurements,  we’re inclined to overestimate what they can show us.  Second is the problem of over-reliance on statistical data when making decisions under uncertainty (such as decisions about who should be hired or where assistance should be directed).

Floccina
Dec 3 2019 at 12:40pm

Considering the fact that complicated algorithms fit the past much better than the future, how should we react to projections of Anthropogenic Global Warming?

Bernhard Schmalhofer
Dec 26 2019 at 11:37am

Climate modeling is one of the cases where the simple models give the same general answer as the more sophisticated models. So it is a good idea to follow the gut feeling in this case.

Alex K.
Dec 3 2019 at 8:26pm

Great episode.

It is unfortunate “nudging” seems so popular given its severe flaws and risks. As Gigerenzer says, nudging is repackaged public relations and, it seems, is often employed to undo years of private public-relations campaigns and government incentive structures. Somehow adding another layer of coercion to the “decision architecture” is preferable, to Sunstein et al., to returning decision-making power and responsibility to the citizens directly involved. But as Gigerenzer says,

And what we really need in the 21st century is people who understand what’s being done to them. People who are risk literate, who are health literate, who are able to deal with money, and who are also able to control the digital media. We need to invest in making people stronger. This is my view…[W]e don’t need more paternalism in the 21st century. We had enough in the last one.

Russ, a potential guest on these and related themes might be Jon Elster, specifically concerning his book Securities Against Misrule. Therein he expands upon Jeremy Bentham’s idea that “the art of the legislator is limited to the prevention of everything which might prevent the development of their liberty and their intelligence.”

Rob Wiblin
Dec 4 2019 at 12:49pm

This was a great interview, I feel like I learned a great deal.

One frustration I had was the ‘weak-manning’ of ‘Thinking Fast and Slow‘ by Kahneman. Listening to this interview you would think that the book argues that ‘system 2’-style explicit reasoning is often or always better than going with gut intuitions.

It has been a few years now since I read it, but I don’t recall the book saying that at all. Rather, like Mr Gigerenzer, it tries to specify circumstances under which we need to do more explicit reasoning, and others where we need to listen more to our gut.

Behavioural economics as a field may well focus on biases too much relative to heuristics, but there’s no need to portray behavioural economists as more naïve than they really are.

I would go as far as to say that virtually nobody thinks that  ‘system 2’-style explicit reasoning is usually more accurate (let alone the better approach, given its much higher cost). If such a person exists, I’m yet to meet them!

Marilyne Tolle
Dec 5 2019 at 12:14pm

This was a very interesting conversation (long overdue!), but I found it confusing that it conflated the use of intuition with the use of rules of thumb/heuristics.

As I understand it, intuition is best applied to domains that display the following characteristics:

1. Stable/linear environment
2. Repeated actions
3. Immediate feedback

The best examples come from the field of sports where athletes use their “unconscious competence”, developed over thousands of hours of training (in a stable environment, with many repetitions and rapid feedback).

Rules of thumb are best applied when the domain is characterised by:

1. Complexity (many interconnected variables)
2. Non-linearity (variables might interact in non-linear ways)
and therefore
3. Uncertainty (the probability distribution of different outcomes is unknown)

So intuition and rules of thumb, and their domains of application, are distinct.

While intuition can sometimes be articulated as a rule of thumb (e.g. the gaze heuristic described by Gerd Gigerenzer: “the heuristic has three steps: Namely, fixate your eye on the ball, start running, and then adjust your running speed so that the angle of gaze remains constant.”), rules of thumb are simple instructions based on limited information, like the fast-and-frugal trees, which may or may not be “intuitive”.

Dr. Duru
Dec 24 2019 at 10:09pm

I like Marilyne’s logical distinctions here. It clarifies this podcast further for me. Overall, I like the defense of intuition based on years of experience. It reminds me of the discussions and studies of tacit versus explicit knowledge in the 1990s.

I thought the notion that having more information is not always good is a straw man in and of itself. Perhaps we could derive more clarity if we focused on having more knowledge and/or more wisdom? These are levels of value-added information aimed at improving decision-making. Are there cases where gaining more knowledge or even more wisdom is bad? I could not think of such a case, but I would love to hear some. I contend that focusing on obtaining more knowledge and more wisdom provides a valuable filter on the kinds and amounts of information we consider desirable.

 

 

Gregg Tavares
Dec 9 2019 at 3:42am

I found this talk interesting, and there were some good examples, like Google’s overcomplicated machine-learning cancer prediction failing where a simpler method did better.

But, it wasn’t 100% clear to me how to apply it. Maybe my issue is with the phrases “going with your gut” or “using your intuition.” In Silicon Valley, using your “gut” in hiring people has been suggested to be an unconscious bias for “same as me, a white male brogrammer.” Has there been any research on whether that is actually a real consequence of “using your gut” in hiring?

David Gossett
Dec 9 2019 at 10:26am

Patient walks into MD Anderson. Diagnosed with stage 4 lung cancer. Tumor in the mediastinum. Will eventually affect breathing and suffocate the patient to death. Meet with surgeon. I am sorry, but we can’t operate. The tumor is undifferentiated and will be hard to remove. There is a 15% chance you will die in surgery or post-op. End of discussion. No removing what you can. No taking pressure off diaphragm. No seeding the rest of tumor with radiation. Surgeons don’t like surgery. Surgeons like to succeed at minimal cost.

More is always better as long as humans are removed from the models. Otherwise human emotions/incentives creep into each model and corrupt the outcomes. Fast and frugal equates to human limitations.

I believe gut feelings are a proxy for cold and hot emotional states. I hike/climb an average of 50 times per year. It’s not my experience I rely on, but my ability to remain calm in adverse situations. When I take people hiking, I can see them moving into a hot state and not thinking clearly — taking unnecessary risks. Rob Hall and Scott Fischer were some of the most experienced (gut-driven) guides on Everest. They didn’t die due to lack of experience. They died because each went to a hot emotional state.

Jeroen Nieuwkoop
Dec 16 2019 at 11:17pm

This one covers a lot of ground and provides a lot of food for thought. The concept of fast-and-frugal trees is very interesting and the tool itself seems to capture the heart of the subject matter: in a high-uncertainty environment, through extensive quantitative analysis you can select the variables that help reduce uncertainty and develop a decision making tool that is simple and easy to use.

What this podcast downplays and perhaps overly simplifies is the need to (i) reduce uncertainty through analysis (developing an FFT is quite complex and involves extensive quantitative data analysis; it doesn’t seem to involve gut feelings or intuition), (ii) match the complexity of your prediction model to reflect the decision making environment (less is not always more as is seemingly implied in the podcast; you may be using too many or not enough variables, under- or over-fitting, using the wrong (linear) model to measure (non-linear) complexity, not sufficiently analyzing or understanding the process dynamics, or many other things. If a model is wrong, find a better one: don’t throw in the towel and rely on your gut), and (iii)  calibrate the features of your decision making tool with the decision making requirements (not all tools need to be simple per se. Different situations require different trade-offs between accuracy, avoiding false positives or false negatives, ease of use, ease of change, speed, etc.) 





AUDIO TRANSCRIPT
0:33

Intro. [Recording date: October 29, 2019.]

Russ Roberts: Today is October 29th, 2019 and my guest is psychologist and author Gerd Gigerenzer. He is Director of the Harding Center for Risk Literacy at the Max Planck Institute for Human Development. Our topic for today is his work on decision making, rationality, and rules of thumb, including his 2008 book, Gut Feelings: The Intelligence of the Unconscious. Gerd, welcome to EconTalk.

Gerd Gigerenzer: I'm glad to be there.

Russ Roberts: So let's start with what you mean by gut feelings and how it's possible--since on the surface they seem irrational, or at least not analytical--how could they possibly help us make better decisions than a rational, calculating approach?

Gerd Gigerenzer: Yeah, so let's first be clear what I mean by gut feelings. So, a gut feeling, or an intuition, is based on years of experience, and it has the following features. It usually comes quickly to mind, so you know what you might do; but you can't explain why. Nevertheless, it guides much of our personal and also professional decisions. So, a gut feeling is not something arbitrary. It's not a sixth sense. And it's also not something only women have. Men also have gut feelings. There is a suspicion around gut feelings that is widespread in the social sciences, that they would always be second best and misleading. The problem is, if one would not listen to one's gut--that means, to the experience that's stored in the brain, or in those parts of the brain that can't talk--then one would lose lots of important information.

Russ Roberts: And yet, in most treatments of decision-making--and you refer to this many times in your book. You--I think you called it the Franklin approach: Benjamin Franklin making a list of pros and cons. My favorite example of it is the Charles Darwin list of why he should get married or stay single. A rational person, an enlightened person, a thoughtful thinking person shouldn't rely on those unseen unconscious intuitions. They could always do better by doing a cost benefit analysis. Or can they?

Gerd Gigerenzer: Yep. That's not true. It's important to distinguish between a world where you can calculate the risks and other situations where you can't do that. So, if you can calculate the risk--that would be the case if you played roulette or a lottery, where all probabilities are known, where all possible outcomes are known, and all consequences are known--then: better calculate. But in many situations of the world there is uncertainty, where you cannot know the probabilities, not even estimate them. And moreover, you can't know all the possible future states of the world. So, if you want to invest, if you want to find out whom to marry, if you decide what professor to hire, these are all situations of uncertainty, and precise calculations are an illusion in these situations. What's useful are heuristics that are robust--rules of thumb that have a good chance to hit, as opposed to a calculation that over-fits the past.

4:39

Russ Roberts: So, I want to warn listeners that I love this book, and I love the way that you, Gerd, look at these issues. So, I am very prone--as I am often not--I am very prone to accept all of your claims as 'Well, of course. That makes sense. That's a good study. Another good study. And there's some more confirming evidence.' So, I have a natural inclination, being trained as a particular kind of economist, to be very skeptical of Behavioral Economics, and some of the claims of irrationality that have been put forward in recent years by behavioral economists and others. And so, I'm going to try to fight it; but I have to say I find that approach that you've just laid out extremely compelling. It pushes so many of my happy places. So, I want to take an example.

My wife is a school teacher, and at times she's been an administrator, and she talks about the challenges of hiring a new faculty member. And she was very affected by Kahneman's book Thinking Fast and Slow, because for many years, she and her colleagues, she felt, had relied on gut feelings. They had said, 'Wow, you know, that was such a great presentation.' They get wowed by it--by, say, the charisma of a teacher. And she argued, pretty persuasively to me at least, that when you're hiring a teacher, it's much better to take a set of categories, scale things from one to five, get away from your gut, and force your decision-making process to be more analytical than just that one wow factor that might overwhelm everything else. And yet you suggest that that's maybe not so advantageous. Why?

Gerd Gigerenzer: No. First, intuition or gut feelings are not in opposition to getting information. Typically it's very useful to look for information, to read through the CVs [curricula vitae] of the applicants, too; but in many situations, at the end, the numbers that you find there don't tell you the answer.

And, I've worked with experienced people who do personnel selection. They do exactly that. They get informed; but then they talk to the person and develop a kind of feeling about the person. And at the end, a gut decision is always one that started out with evidence. But, if the evidence doesn't tell you, then you listen to your guts.

And listening to your guts only makes sense if you have years of experience. So that means all the information is there, but it's in our brain in a way that you can't explain.

Russ Roberts: So, my wife knows she's a sucker for charisma; and so she automatically discounts that dramatically. So that would be an example of how over time you learn about what your, maybe, weak spots are, and you get better.

But doesn't Kahneman claim, and find in research that he has done--I think it was the Israeli Air Force--that when they went and used a very mechanical process for hiring pilots for the air force, they did better than their gut? Are you suggesting that that was misleading, or that they used other gut intuition on top of it? Or that you should?

Gerd Gigerenzer: So, the executives I've worked with--doctors in medicine, businessmen--in many areas are looking and searching through the information. And at the end, it's often a gut decision. So it's not in opposition.

And, to give you an example: I have worked with some of the largest companies on the stock market worldwide, and asked the executives, 'Think about your last 10 important professional decisions in which you participated or that you made by yourself. How many of them were, at the end, a gut decision?' So the emphasis is on at the end. And the typical answer in these corporations was 'Fifty percent.'

So every other decision is, at the end, a gut decision.

But the same executives would not dare to say this in public because there is anxiety. Because there is little tolerance in the public. In part because of this type of Psychology and Behavioral Economics literature that assumes that intuition is always second best.

It's not true. And that can be shown experimentally.

For instance, some dear colleagues of mine have shown that, in sports--so, for instance, if you instruct an experienced golf player to make the putt quickly--in less than three seconds--as opposed to allowing the person more time, what do you think? Which one will hit more often? It's the less-than-three-seconds one. So, it's where you have little time and little room to pay attention. But this only holds for experienced people. It doesn't hold for beginners.

And this kind of distinction has been lost in this kind of crusade against intuition that is now basically the definition of Behavioral Economics.

I would wish that Behavioral Economics would get out of this idea that their goal is to explain how people deviate from homo economicus. Homo economicus is, anyhow, an unrealistic person; and every economist would admit to that. Behavioral economists have taken that too seriously.

11:08

Russ Roberts: Yeah. It's a straw man that--it drives me crazy. It reminds me of a story that I've told a few times on here when talking about Value at Risk [VaR], a very sophisticated, "scientific, statistically-driven mathematical technique for evaluating the riskiness of a portfolio" that led people badly astray during the Financial Crisis of 2008. And a friend of mine, wonderful, bright person in the investment world, said, 'Well, what's the alternative? It's the best we have.' And I said, 'The alternative is to use intuition, which is what we had for most of human history.' The advantage of intuition is that it doesn't lull you into thinking you know what you're doing, if you are careful. The risk of Value at Risk is that you might actually think you understand the risk, and you forget the assumptions of the program that allowed you to come up with an actual number. And people say, 'Oh, I won't forget.' But human nature is such that people do forget, and they get overconfident.

Gerd Gigerenzer: So, Value at Risk would be a great tool if finance--the rules of finance--were predictable. That is, in a world of risk where you know everything and can calculate it. No surprises. Nothing unexpected can happen. But the real world of finance is not one of calculable risk. It's one of high uncertainty. And, Value at Risk--just to give you an idea about the calculations done: the calculation that a large bank has to do to work out its Value at Risk involves estimating thousands of risk parameters and their correlations, which amounts to millions of correlations, based on five or ten years' data. That borders on astrology. This is not science.

And we have seen that value-at-risk calculations have not prevented any crisis.

So, in a world of high uncertainty, we need to have simple methods that are robust. And value-at-risk calculations also foster an illusion of certainty: the belief that this precise number--the one they calculated--is really the true value.
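To make the "millions of correlations" arithmetic above concrete: the number of pairwise correlations among k risk factors is k(k-1)/2. A minimal sketch; the factor counts below are illustrative assumptions, not figures from the episode.

```python
# Pairwise correlations among k risk factors: k * (k - 1) / 2.
# The factor counts below are illustrative, not figures from the episode.
def n_correlations(k: int) -> int:
    return k * (k - 1) // 2

for k in (100, 1_000, 3_000):
    print(f"{k:>5} risk factors -> {n_correlations(k):>9,} pairwise correlations")
# 3,000 factors already imply ~4.5 million correlations to estimate
# from only five or ten years of data -- the over-fitting problem described above.
```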

I work with the Bank of England, with Andy Haldane, who is the Chief Economist, on getting at least the regulators to change these complicated risk-assessment tools into simpler, so-called heuristics like fast-and-frugal trees, where what's being done becomes transparent and where the regulators can see what the banks are doing.

If a bank estimates millions of covariances, there's no way a regulator can find out where they are twisting and tinkering; and banks can also use their own internal model. That's not a model of safety. Banks will always try to find a way around the calculations and try to game it. But if it's just, maybe, three variables that are used in such a fast-and-frugal tree--like the leverage ratio and a few others, a liquidity measure--then gaming is not as easy. So simplicity--

Russ Roberts: Explain what you mean by a fast and frugal tree, which is a really a useful heuristic of heuristics, I think.

Gerd Gigerenzer: So, a fast and frugal tree is like a decision tree, but it's much simpler.

You start with a certain feature--and that could be the leverage ratio. If the leverage ratio of a bank is higher than a certain threshold, then it gets a red flag. So, that's it. Nothing else is even looked up. If it is not higher--it's lower--then a second question is asked. And this is the way you proceed. So, a fast-and-frugal tree doesn't make any trade-offs. It's like a body: if you have a failing heart, then a good kidney doesn't help you much. So it's not like a linear regression where everything compensates for everything else.

And we have tested these fast-and-frugal trees in many situations. Meanwhile, they are used in medicine and in many other areas. And what's also very important is that a doctor using such a fast-and-frugal tree--or a banker, or a central banker--can actually understand what's happening.
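As a rough sketch of the structure Gigerenzer describes--check one cue at a time, stop at the first red flag, make no trade-offs between cues--here is a minimal fast-and-frugal tree for flagging a bank. The cue names and thresholds are hypothetical illustrations, not the Bank of England's actual rule.

```python
# A minimal fast-and-frugal tree: check one cue at a time, exit as soon as
# a cue gives an answer, and never trade one cue off against another.
# Cue names and thresholds are hypothetical illustrations, not real policy.

def flag_bank(leverage_ratio: float, liquidity_ratio: float, funding_gap: float) -> str:
    if leverage_ratio > 30:          # first cue: highly leveraged -> red flag, stop here
        return "red flag"
    if liquidity_ratio < 0.05:       # second cue, only reached if the first one passed
        return "red flag"
    if funding_gap > 0.2:            # third and final cue
        return "red flag"
    return "green"

print(flag_bank(leverage_ratio=35, liquidity_ratio=0.10, funding_gap=0.1))  # red flag
print(flag_bank(leverage_ratio=20, liquidity_ratio=0.10, funding_gap=0.1))  # green
```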

16:40

Russ Roberts: So I want to use an example you take from the book because it really--it's profound. And it connected to a number of things we've talked about here on the program.

I want to start with an observation that comes from the book Ending Medical Reversal by Adam Cifu and Vinay Prasad. I interviewed Adam about the book; and Vinay Prasad is scheduled to be a guest coming in the future talking about a different book. But they come up with a very profound and disturbing discovery, which is: When you look at observational studies, that is looking back at what has happened to people with different characteristics who, say, adopted a medical intervention, this intervention looks very, very good. And so it gets adopted by the profession.

After a while time passes, and there's an opportunity to do a Randomized Control Trial. And instead of relying on statistical techniques to control for the differences between people or, say, the selection of who ended up being chosen for the intervention in observations, we now actually have a random sample of two different groups.

And it's shockingly depressing how often the randomized control trial fails to find efficacy, and in fact often finds harm from the intervention that looked good in an observational study.

Now, you find something quite similar. It doesn't seem so at first, but here's what--the example you have in the book.

A patient comes into the Emergency Room, has chest pain. What should we do? Should we send them home? Should we put them into the coronary units? Should we put them into the intensive care unit? And you talk about this fancy study that was done with, I don't know, maybe 20, 25 different pieces of information; each one had an associated probability attached to it. And doctors were given a card, which is--it's hard to say it without laughing--and were told, 'Use a calculator and just do a weighted average of these probabilities after you've made the assessment and then decide where to put the person, whether to put them in the coronary unit, in the ICU [Intensive Care Unit], or send them home.'

And you said, what you showed--and this is just extraordinary; I don't know if it would hold as generally as the finding of any medical reversal--but what you show is that the fancy technique works best in explaining the past. It's very good at assigning people ex-post--after they've already gone through the system and we evaluate. In fact, the numbers come from, the data are fitted, using all these variables that we've come up with to figure out what's the best place to put people.

Unfortunately, going forward, it does not do so well. And in fact, the simple rule of thumb that you just described, the fast-and-frugal tree of looking at, say, three factors instead of 25, and looking at them sequentially, not weighting them in the fancy way, actually does better going forward.

And that's an extraordinary, and I think, incredibly important finding.

Gerd Gigerenzer: Yeah. It is correct. The fancy, complicated methods work best in explaining the past; and in situations of uncertainty, which is usually the case when we need to predict, you need to scale back. You need to make it simpler. And the tools that we have developed, like fast-and-frugal trees, are used by doctors to make decisions about life and death, such as whether a patient who is rushed into the hospital with chest pain should be sent into the coronary care unit or to a regular bed with telemetry.

And these fast-and-frugal trees can do better than, in that case, a logistic regression. Not only can they do better, but doctors can actually understand them. There are very few doctors who understand what a logistic regression is. And the fast-and-frugal tree can also be changed immediately: if the patient population changes, doctors can adapt it--which is very hard if it's a logistic regression. In general, under uncertainty, predicting the future, you are well advised to make it simple. If you want to explain the past, make it complicated.

21:09

Russ Roberts: But the key there, I think is that you're not really explaining the past. You're having an illusion that you're explaining the past. Because, the data fits well--and you point out, this is really in cases where, even after you've taken account of all the variables, you may only be able to explain maybe half of the variation in outcomes. But in that case, you're under the illusion you've explained the past because you're really fundamentally in that case, assuming that correlation is causation--when it is not. And that's my takeaway from your explanation for how it's possible that it doesn't do better going forward. A lot of what it's picking up is noise rather than signal. And it's actually deceiving you as to whether you've understood what's going on.

Gerd Gigerenzer: Yeah. So in general, the difference is between data fitting--so you have the data, and you fit a model on that. And many studies--also in economics--stop there and report a great fit, or R-squared.

The proof is in prediction: whether that model that fits so wonderfully actually predicts. And there are statistical theories, like the bias-variance dilemma, where one can understand that, in prediction, you're better off making it simpler.

And that is usually what heuristics are. Heuristics have a bias. So, they are simple. You don't have many free parameters to fit. But they reduce the error that's called variance--that means over-fitting. They're not fine-tuning on the past. And fine-tuning on the past only pays if the future is like the past. That's the case in a situation of risk, but not under uncertainty.
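A small synthetic illustration of the bias-variance point made above: a model with many free parameters fits the past better, yet typically predicts new data worse than a simpler one. The data-generating process and polynomial degrees are made-up assumptions, not from any study mentioned in the episode.

```python
# Illustration of the bias-variance point: a flexible model fits the past
# better but typically predicts new data worse than a simpler one.
# The data-generating process and model choices are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: 1.0 + 2.0 * x               # the stable part of the world
x_train = np.linspace(0, 1, 12)
y_train = truth(x_train) + rng.normal(0, 0.3, x_train.size)
x_new = np.linspace(0, 1, 200)                # "the future": new noisy observations
y_new = truth(x_new) + rng.normal(0, 0.3, x_new.size)

for degree in (1, 9):                          # simple rule vs. many free parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    fit_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    pred_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: fit error {fit_mse:.3f}, prediction error {pred_mse:.3f}")
# The degree-9 fit "explains the past" better (lower fit error) but chases
# noise, so its prediction error on new data is typically much higher.
```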

Russ Roberts: And as you point out, with a different population, the past of that old population is not necessarily going to be like the future of the new one. And I think of the Economics Nobel Prize, which was just given to three economists for their work on randomized control trials [RCTs]--it's interesting--in the area of economic development. It's an interesting technique. And I think it's been greatly oversold in a certain sense, which is that many of the findings from RCTs, randomized controlled trials, the so-called gold standard, don't seem to hold up so well with a different population--with a larger sample, in a different country. And that's because there's too much going on that's not going to be the same in the future going forward as it was in the past where you ran the experiment.

Gerd Gigerenzer: Yeah, that's correct. So, what I'm studying is: What are those simple heuristics that just look at a few variables in order to deal with these huge amounts of uncertainty.

One example I would like to give you is Google Flu Trends. You may recall that Google tried to prove that big-data analytics can predict the spread of the flu. And it was hailed with fanfares all around the world when they published a Nature article in 2008 or 2009. And they had done everything right. So they had fitted four years of data and then tested it. They had about 50 million search terms, and then they had maybe 100,000 algorithms that they tried, took the best one, and also tested it in the following year.

And then they made predictions. And here we are really under uncertainty.

The flu is hard to control, and people's search terms are also hard to control. And, what happened is something unexpected--namely, the swine flu came in 2009, while Google Flu Trends, the algorithm had learned that flu is high in winter and low in summer. The swine flu came in the summer. So, it started early in April and had its peak late in September. And of course the algorithm failed because it fine-tuned on the past and couldn't know that.

Now, the Google engineers revised the algorithm. By the way, the algorithm was a secret, a business secret. We only knew that it had 45 variables and probably was a linear algorithm. Now, in our research, what I would do is realize you are under uncertainty: make it simpler. No: the Google engineers had the idea that if a complex algorithm fails, make it more complex.

Russ Roberts: It just didn't have enough variables.

Gerd Gigerenzer: Yes, yes.

Russ Roberts: It had a cubic term or a quadratic term.

Gerd Gigerenzer: And they changed it to 160 variables--so, up from 45--and made predictions for four years. It didn't do well. And then it was silently [inaudible 00:26:27] buried.

So, I'm just writing a book on machine learning and fast-and-frugal heuristics, focusing on fast-and-frugal trees. And we have asked ourselves, what would be the simplest heuristic you can think of to predict the spread of the flu--precisely, the flu-related doctor visits? Now, think for a moment. So, a heuristic that doesn't need big data, a heuristic that doesn't need 50 million search terms. It doesn't need to test a hundred thousand models, and it uses something that is readily available and can be easily found by everyone.

So, one of the features that humans use to make predictions under uncertainty is recency. They look at the thing that happened last, because you can't trust the very far past. And the most recent information about flu-related doctor visits is from the CDC, the Centers for Disease Control, from about two weeks ago.

So we used the two-weeks-ago variable--only this one variable and nothing else--and set up a heuristic: the flu-related doctor visits in a region are the same as those two weeks ago. That's an absolutely simple heuristic. And then we tested it over the entire four years of the revised Google Flu Trends algorithm. And, what do you think? How well did it do? It predicted better.

Russ Roberts: Since I know you're going to tell me, I know it did better.

Gerd Gigerenzer: And you can see--an intuitive way to understand that is, if something happens like the swine flu, an algorithm calibrated on the last four or five years will be surprised that things are changing. But if you have something that is just two weeks old, it can follow any trend. It has a delay, yes. It has a bias. But it is much more flexible, and it's also much, much cheaper and much more transparent, and people can actually use it.
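The "same as two weeks ago" rule described above is essentially a lagged persistence forecast. A minimal sketch, with made-up numbers standing in for the CDC's weekly flu-related doctor-visit series:

```python
# The "same as two weeks ago" heuristic: predict this week's flu-related
# doctor visits from the most recent CDC figure, which arrives with a
# roughly two-week delay. The series below is made up for illustration.

weekly_visit_rate = [1.2, 1.3, 1.5, 1.9, 2.6, 3.4, 4.1, 4.0, 3.2, 2.5]  # % of visits

def recency_forecast(series, lag=2):
    """Forecast each week as the value observed `lag` weeks earlier."""
    return [series[t - lag] for t in range(lag, len(series))]

forecasts = recency_forecast(weekly_visit_rate)
actuals = weekly_visit_rate[2:]
mae = sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)
print(f"mean absolute error of the two-week recency heuristic: {mae:.2f} percentage points")
```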

28:58

Russ Roberts: So, I want to take an example that has been on my mind because I got in some Twitter fights over it. And that's the question of whether--and I have friends who wonder about this, worrying about it--whether you should have children or not.

And on Twitter, people told me that we should do a poll of people who have children, see how happy they are; and we should do a poll of people who didn't have children and see how happy they are. And then we can find out whether having children is good or not. And I thought, and said, 'I don't think that's a very effective way to find out whether you should have children. You're going to be a different person after you have a child. People lie on surveys like that. They want to confirm their decisions and feel good about them.' Your best bet to figure out whether you should have children is probably to read some books and cultural discussions about what it's like to have children to get a feel for it. Talk to some friends who have children and see what they think of it. And then probably just rely on the heuristic that human beings have children. It's part of the human experience. I wouldn't even probably do a cost benefit analysis at all. But if you're going to do a cost benefit analysis, don't fool yourself into thinking that a survey is going to give you information.

And one of the responses I got--more than once and it happens on related issues--is, 'Well, more information is always better.' And yet, you disagree with that. So react to that claim in that context if you want, or in some other context.

Gerd Gigerenzer: More information is not always better. Now, the real question is: when is it better? So again, this distinction between risk and uncertainty can help. It's an old distinction that goes back to Frank Knight and others--Keynes and Savage--and most decision theorists have made this distinction. It's just ignored most of the time.

So, in a world of risk, you can fine tune and that means more information is always better, if you forget the costs and the time that you spend.

In a world of uncertainty, that doesn't hold. And even if you forget the time and costs.

So, we have shown for a number of heuristics that they do better if you have only a certain amount of information; that doesn't mean no information. Something in between.

And one can understand that, again, if one goes into statistics, into theories like the bias-variance dilemma, which make you understand exactly why a certain reduction of information can be helpful.

So, for instance, the recognition heuristic: You just go by name recognition. It's a mathematical model where you can see that if you are semi-ignorant, you may do much better. A simple example is--so, let me test you on a simple trivia question. Which German city has more inhabitants, Bielefeld or Hannover? What do you say?

Russ Roberts: So I've read your book. I don't know that exact answer to that one. You may have given it, but I don't remember that. But I do remember the heuristics. I'm going to rely on it right now. And the first example was Bielefeld?

Gerd Gigerenzer: And the other one is Hannover.

Russ Roberts: And I've never heard of Bielefeld. So the odds are, unless there was a nuclear accident in Hannover and that's why I've heard of it, which there hasn't been--I'm going with Hannover. How'd I do?

Gerd Gigerenzer: Yeah. And you're right. You're right because you're semi-ignorant.

Russ Roberts: Right. I have no idea.

Gerd Gigerenzer: 'I've not heard of Bielefeld. I've heard of Hannover,' and used the recognition heuristic.

Now ask Germans who have heard about both and you will find the Germans, many of them, they don't know and they get it wrong.

So this is just an example of a very specific mechanism that you can analyze mathematically, and you can make predictions about what level of ignorance is actually better than knowing more.
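The recognition heuristic itself fits in a few lines: if exactly one of two alternatives is recognized, infer that it has the larger value; if both or neither is recognized, the heuristic gives no answer and you fall back on guessing or other knowledge. A sketch with a hypothetical recognition set:

```python
# Recognition heuristic for the "which city is larger?" task:
# if exactly one of the two names is recognized, pick it; otherwise guess
# (or fall back on other knowledge). The recognition set is a hypothetical
# semi-ignorant listener, not data from any study.

import random

recognized = {"Hannover", "Berlin", "Munich"}

def which_is_larger(city_a: str, city_b: str) -> str:
    known_a, known_b = city_a in recognized, city_b in recognized
    if known_a and not known_b:
        return city_a
    if known_b and not known_a:
        return city_b
    return random.choice([city_a, city_b])  # both or neither recognized: no cue to use

print(which_is_larger("Bielefeld", "Hannover"))  # -> Hannover
```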

There are also other reasons why ignorance can help. There is an entire research field about deliberate ignorance. So, for instance, would you like to know when you will die? Some people want to, but 90% don't. Or, if you're married, would you like to know whether your marriage will end in divorce? Very few want to know that. And if you did know, you would lead a life like Cassandra, who could see the future--and the future was not a nice one. And that destroyed her entire life.

So, 'more is always better' is an illusion. More is sometimes better and it's better when you are in a situation where the past is like the future and the future like the past.

34:25

Russ Roberts: Yeah. You use an example in the book I use also of a body scan. Surely it would be better to find out if you have any tumors growing right now in your body! And the answer is: No, it's not. There will be a lot of false positives. You will not sleep well at night. And it's amazing--this is a very important point; it seems like just a cheap debating point--but I think it's incredibly important for how we live our lives: It's easy to say to someone, 'Well, if you get a bunch of tumors on the full body scan, don't worry about it because they're probably false positives. So don't let it haunt your sleep at night. Just ignore it.' We can't ignore that. We're not good at that as human beings. And recognizing that is very important. And that's another example of why sometimes ignorance is bliss.

Gerd Gigerenzer: Yeah, that's a good point. False positives. So the saying 'more information is always better' also assumes that the information is true [inaudible 00:35:22]. But most of the time, that's not the case. There are misses. There are false alarms.

And particularly in medicine. So, in screening where a disease is rare, like HIV [human immunodeficiency virus], there are lots of false positives--even in such good tests as the HIV test. And if you do mammography screening or prostate cancer--PSA [prostate-specific antigen]--screening, most positive results are false; and people need a little bit of statistical education here to understand that. An article by the colleague you mentioned, I think Prasad, argued that there's no single cancer-screening method where we have proof that total mortality is being reduced.

Russ Roberts: Incredibly depressing. Yeah, but I think unfortunately true.

Gerd Gigerenzer: You must think twice. You can call this depressing. You can also call this relieving because--

Russ Roberts: Liberating--

Gerd Gigerenzer: Liberating, yes. I know many men and women who go to screening, get a positive result; and then it turned out it wasn't really positive. Or maybe [inaudible 00:36:48]. And they are in cycles. And life revolves around mammography or PSA tests. And some of them feel that life--the quality of life--is more or less destroyed. If you understand that this type of screening has little hope of benefit but lots of harm, then you'd better do something: you do real prevention. For instance: stop smoking, don't drink too much alcohol, and move your body around.

Russ Roberts: Yeah, lose some weight. Ideally, don't eat too much. Of course, listeners, you should consult your own physicians for decisions along these lines, but we have had many guests who've made similar points, and it's interesting how hard it is to stop gathering those kinds of pieces of information. When I tell people--I've told this before--but when I tell my doctor I don't want the PSA exam for my prostate in the blood work that he does, often the lab does it anyway. I think they make money on it. I'm not paying for it out of pocket. I don't want it because I don't want to be scared incorrectly, but it just sometimes happens anyway. And I think the system is so geared toward precaution; and, for a whole bunch of economic reasons you happened to mention in the book--we don't need to go into them now.
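The screening discussion above can be made concrete with the natural-frequency arithmetic Gigerenzer favors: when a condition is rare, even a fairly accurate test produces mostly false positives. The prevalence and error rates below are round, illustrative assumptions, not the actual statistics of PSA, mammography, or HIV testing.

```python
# Natural-frequency arithmetic for screening a rare condition.
# Prevalence, sensitivity and false-positive rate below are round,
# illustrative numbers, not the actual figures for any particular test.

population = 10_000
prevalence = 0.003          # 3 in 1,000 have the condition
sensitivity = 0.90          # 90% of true cases test positive
false_positive_rate = 0.07  # 7% of healthy people also test positive

sick = population * prevalence
healthy = population - sick
true_positives = sick * sensitivity
false_positives = healthy * false_positive_rate

ppv = true_positives / (true_positives + false_positives)
print(f"Of {true_positives + false_positives:.0f} positive results, "
      f"only {true_positives:.0f} are true: PPV ≈ {ppv:.0%}")
# With these numbers, roughly 96% of positive screening results are false alarms.
```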

38:24

Russ Roberts: One other thing I want to mention is a term that's come up a few times recently here, which I think is very useful, which is the Chesterton Fence. And if we keep bringing it up, it may have to be added to the EconTalk drinking game. But the Chesterton Fence is this idea that you come across a fence in the middle of a field; you think, 'Well, this looks just like it gets in the way. I'm going to tear it down,' and you don't know why it's there. And when you tear it down, you find out it had a purpose. But it doesn't make sense to you. And so you just decide it must be irrational.

And many heuristics, many rules of thumb, are like that. They've evolved over time. They are consistent with the way our brain works. And yet as arrogant experts, we often say, 'Oh, well, this just must be a mistake,' and we change policy or make decisions accordingly. And I think a respect for some traditions is incredibly important, and it's a way to access information you wouldn't otherwise get.

Gerd Gigerenzer: Another concept that we could add to EconTalk is defensive decision-making. That is, a decision-maker like a doctor does not recommend to the patient what he or she--the doctor--thinks is the best thing to do, but something second best. Why would doctors do that? Defensive decision-making is done in order to protect yourself as a doctor from the patient who might sue you if something happens. And that usually leads to unnecessary imaging, to unnecessary cancer screenings, to unnecessary antibiotics; mostly just that: too much, too much, too much.

In studies in the United States, when doctors are asked, about 90-95% of them say, 'Yes, that's what I'm doing, and I have no choice.'

So it's very important that a patient is aware of the situation the doctor is in, because the patient is the problem. It's the patient who sues, or the lawyer who runs around.

And so this kind of structural understanding is very important. Sometimes, a simple heuristic helps here. So: Don't ask your doctor what he or she recommends to you, but ask the doctor what she would recommend to her own brother or sister or mother. The mother wouldn't sue. I typically have gotten a very different answer to both of these questions.

Russ Roberts: Yeah. I think after a while, doctors get better at ignoring the fact that they say it's their mother or sister or brother. But, I do ask that question all the time. And I think it's incredibly--it's important because it forces the doctor to step out of an unconscious mode of thought, their own heuristic of precaution and safety, and it forces them to at least recognize the possibility that what they're recommending may not always be so good for the patient. They can still lie, of course. But I think a lot of the wrong decisions that doctors make are not consciously done in any unpleasant way. It's the: When you're holding a hammer, you're looking for nails. You start to forget that you've got a hammer in your hand all the time and everything looks like a nail.

And if you remind someone to say, 'Well, gee, that's a wine glass. That's not a nail, is it?' And they might go, 'Oh. Oh yeah, you're right, I won't hammer on this one.' So I think it's natural that surgeons like surgery and dentists like filling cavities and so on. And many times, of course, it's a good idea. But sometimes it's not always a good idea. And it's good to be cautious in that direction, too.

Gerd Gigerenzer: Yes. So what else are we going to talk about?

42:31

Russ Roberts: I want to talk about an example you start with in the book, which I think is extremely important for economics, and it's come up in a number of different contexts here. And you're talking about a very simple act in sports, which is catching a ball that's either thrown in the air or hit--a baseball hit by a batter. And you have a fabulous quote from Richard Dawkins, who is a scientific man, and he says the following:

[Dawkins:] 'When a man throws a ball high in the air and catches it again, he behaves as if he had solved a set of differential equations in predicting the trajectory of the ball. He may neither know nor care what a differential equation is, but this does not affect his skill with the ball. At some subconscious level, something functionally equivalent to the mathematical calculations is going on.'

Now, this is in economics all the time. It comes from a famous article by Milton Friedman we've referenced here before on methodology in economics, where he advances a very similar argument, the 'as-if' argument, as it's called in economics: People act as if they're maximizing utility, as if they're doing this, that, or the other.

Now, models are to help us understand the world. They're also to help us predict the world. And a model can be very effective at predicting even though it's a very poor description.

This example is pretty good at predicting. It's absolutely wrong as a description. It is not what people do. They do not calculate differential equations in their head. And on the surface you could say, 'Well, who cares? It's just a model.'

The problem is, in economics and I suspect elsewhere--and Paul Pfleiderer in an episode here talked about this at great length--economists start to confuse the two. All the time. They start to think, 'Well, my model has been confirmed by the data. Therefore, my model is reality.' And a particular example might be, 'If the minimum wage in this particular data set doesn't reduce employment, then I know that there's what's called monopsony power, because one of the predictions of monopsony is that a minimum wage won't have the effect it might normally have.' But that doesn't follow at all. In fact, you've learned nothing about the structure of the labor market in that particular one-study example. And yet economists constantly conflate their models with the underlying process of reality. And, it's just not true.

Gerd Gigerenzer: Yeah. The example you gave about the baseball player--so, the outfielder uses a very simple heuristic that's called the gaze heuristic. It works if the ball is up in the air. And the heuristic has three steps: Namely, fixate your eye on the ball, start running, and then adjust your running speed so that the angle of gaze remains constant. Try it and you will be exactly where the ball is coming down. No calculations of trajectories are needed.

All the information that's in such an equation to calculate the trajectory of the ball can be ignored. It's, again, a heuristic that relies on one single powerful cue: the angle of gaze. And here we have what I call a process model. It's still a model: people may do something slightly different. But it describes what's usually being done; and it also allows us to predict better than the as-if model of a trajectory calculation.

For instance, if you understand the process, you understand why players change their speed while running in order to keep the angle constant. If you have a trajectory model, then you would assume the player tries to run as fast as he can to the point where the ball is supposed to come down and then makes final adjustments. That's not the case. It also explains why players run into the ball: players run there because they don't know where it's coming down. They have a heuristic to catch it, not to predict where it's coming down.

Understanding good models of the process is also important, not only because it explains how it's being done--so the causal process--but also it can help people to teach others to make better decisions.

One great example is the miracle on the Hudson River. You may recall that a plane took off from LaGuardia Airport, and a few minutes later something totally unexpected happened. A flock of Canadian geese flew into the plane, into both engines, and the engines shut down; and the pilots had to make an important decision: Could they make it back to an airport, or would they have to take the risk of going into the Hudson River?

How did they find out whether they could make it back to the airport? They didn't do any calculations of trajectories. They used the same heuristic, the gaze heuristic, but now consciously--baseball players usually use it unconsciously. And the heuristic here is: fixate, say, the tower of the airport through your windshield. And if the tower is going up in the windshield, then you won't make it; you will hit the ground before. And Skiles, the co-pilot of the plane, is explicit that they used this heuristic.

So this is an example which also dispels the myth that heuristics would be unconscious, like in a so-called System 1. It's not true. Every heuristic that I study can be used unconsciously, as most baseball players use this one, or consciously, like the pilots--they are trained. And it also shows that simple heuristics can increase safety. And also, of course, they save time. This is one of the big values of going away from as-if models to studying the decision-making process.
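A toy simulation of the gaze heuristic described above, for a ball that is already descending: the fielder never computes a trajectory, but simply moves so that the gaze angle to the ball stays at its initial value, and ends up where the ball comes down. The point-mass kinematics, the absence of air resistance, the unconstrained running speed, and all the numbers are simplifying assumptions for illustration only.

```python
# Toy illustration of the gaze heuristic for a ball that is already
# descending: the fielder ignores the trajectory and simply moves so the
# gaze angle to the ball stays constant. Frictionless projectile motion and
# the specific numbers are simplifying assumptions, not from the episode.
import math

g, vx, vy = 9.8, 15.0, 20.0          # launch: 15 m/s horizontal, 20 m/s vertical
landing_x = 2 * vy / g * vx          # where the ball will actually come down

t0, fielder_x = 2.5, 50.0            # start tracking after the apex, fielder 50 m out
ball_x, ball_y = vx * t0, vy * t0 - 0.5 * g * t0 ** 2
gaze = math.atan2(ball_y, fielder_x - ball_x)    # lock in this gaze angle

t, dt = t0, 0.1
while True:
    t += dt
    ball_x, ball_y = vx * t, vy * t - 0.5 * g * t ** 2
    if ball_y <= 0:
        break
    new_x = ball_x + ball_y / math.tan(gaze)     # position that keeps the angle constant
    speed = (new_x - fielder_x) / dt             # running speed is what gets adjusted
    fielder_x = new_x
    print(f"t={t:.1f}s  speed={speed:4.1f} m/s  fielder at {fielder_x:5.1f} m")

# The small remaining gap is due to the 0.1 s time step of this toy model.
print(f"ball lands at {landing_x:.1f} m; fielder finishes at {fielder_x:.1f} m")
```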

49:19

Russ Roberts: I want to ask you about a different use, which is nudging, sometimes called libertarian paternalism. You give a number of examples in the book of how people behave very differently when they have to opt in versus opt out. A very powerful example you give is of a group of police in the middle of World War II who are told to go and kill women and children, Jewish women and children. And their commander says, 'If anyone's uncomfortable with this, take a step forward,' and only a handful of people opt out and take that step forward. And you suggest that if the commander had said, 'Who is comfortable doing this? Take a step forward,' only a few people would have stepped forward; and it may have even stopped this thing from happening. It was deeply horrible.

So, a lot of people have pointed this out. You have a couple of other examples in the book, but a lot of people point this out and say, 'Well, we need the government to make opt-in and opt-out decisions to do the right thing. Whether it's kidney donation--let's say an organ donor card in filling out your driver's license--or savings--we should make saving the default, and you can be allowed to opt out. So you're still free. Government is not coercing you. But the government should choose the options that are best in order to overcome these natural biases we have toward passivity. Since opting in is often difficult, let's make opting out the thing that takes effort.'

And so you imply, and you wrote the book in 2008, you seem to suggest that that would be a good thing. And yet, I know you've written since then an article that you don't like nudging. So how do you resolve those?

Gerd Gigerenzer: So, I'm not a fan of nudging as a policy of governments. First, the question is, what government? Is it Obama or Trump that nudges you? And second, I bet much more on informing people and making them risk savvy, rather than steering them like ships.

Meanwhile, there are now studies on the organ-donation problem, opt-in versus opt-out. It is true that you get more potential donors if it's opt-out rather than opt-in. But the question is, do you actually get more actual donors? And the new studies that have come out show that opt-in versus opt-out makes little difference. Most of the differences between countries are due to the organization, to the incentives for doctors, and to many other things.

On nudging in general--I mean, nothing is new here. It's basically the method that marketing and others have used before to influence us. And what we really need in the 21st century is people who understand what's being done to them. People who are risk literate, who are health literate, who are able to deal with money, and who are also able to control the digital media. We need to invest in making people stronger. This is my view. We don't need more paternalism in the 20th century--sorry, we don't need more paternalism in the 21st century. We had enough in the last one.

Russ Roberts: Yeah, I agree with you. But I think the paternalists would argue, 'Well, that's all well and good for you, Gerd, and maybe even for the host of EconTalk who, despite his limitations, might be able to acquire some of this wisdom you think we can improve on. But most people--they're not smart enough to become risk literate. That's unrealistic. We have to help them.'

Gerd Gigerenzer: Okay. Most people who make bad decisions don't, in my experience, make bad decisions because something has misfired in their brain--that's the usual take of the nudging people--but because there's an industry that sells them products that are unhealthy. There is a tobacco industry that sells cigarettes that are unhealthy, and so it goes on. And to nudge people--meaning, using the same methods against big industry--has little prospect of success.

It's also--nudging is based on a certain group of psychologists and behavioral economists who want to point out that everyone else is somehow stupid. So that term isn't used; the term 'lack of rationality' is used instead.

Many of these experiments that claim that people make wrong decisions--typically paper-and-pencil experiments--have been shown in the last decades already to be doubtful or to be refuted, or these statistical errors that John Q. Public allegedly commits turn out to be errors of the researchers.

That literature, which is broad in Psychology, is not well known in Behavioral Economics. I've written a paper called "The Bias Bias in Behavioral Economics": there is a tendency to find biases even if there are none, and we would do well to realize that people aren't so stupid. They can be seduced, yes. But most of the arguments are about people's probability judgments, intuitions about chance, and so on. And these intuitions are fairly good, as psychological research has documented for decades. And the few studies that claimed the opposite--those are the ones that get featured. That's part of the bias bias.

I can give you a simple example. Let's go back to doctors. And I take this example from the Nudge book by Thaler and Sunstein. So, you suffer from a severe heart condition, and you think about a dangerous operation. You ask your doctor about the prospects. So, the doctor could say you have a 90% chance to survive. Or she could say you have a 10% chance to die. The first one is called the positive framing, the second one the negative framing. So, Dick Thaler and Cass Sunstein are of the opinion that you shouldn't listen to the way the doctor frames the message, because it's logically the same. But people are intuitive psychologists, and they listen.

And also, studies have shown that doctors choose a frame depending on what they want to recommend. So, if the doctor tells you, 'You have a 90% chance to survive,' that's a recommendation. And most people understand that. And if the doctor tells you, 'You have a 10% chance to die,' that's not a recommendation to do the surgery. And most people understand that.

This is not an error. It may look logically the same, but it's not psychologically the same. And people intuitively understand that. And framing is typically interpreted by ordinary people as a recommendation. And this is by itself, not an error to be corrected.

57:53

Russ Roberts: Yeah; I think it's a very deep point, actually. The general point is that--you said it a slightly different way--a lot of times what we're testing is the researcher's logic, not the subjects', because language is inherently ambiguous. You have a number of examples in the book; I encourage listeners to get the book and check it out.

But people, when they run an experiment, assume that language is like math. It's not like math at all; it's very different. And, in fact, culturally we have evolved--as you point out many times in the book, our brain has evolved--to read those cues very subtly. The two framings are not just like flipping a coin between whether it's 90 or 10. They're not the same.

Gerd Gigerenzer: Yeah, yes. The brain evolved to deal with uncertainty, not with situations of risk, not with lotteries. The brain is amazing at picking up subtle cues. And that, in the same way, is also a way the brain can be misled. But, in the first place, the brain would not work well if it just thought logically. And language understanding--language is ambiguous. Many terms have several meanings, and people immediately understand which one is meant. Another example--

Russ Roberts: So, context is so important as well. And the idea that people make decisions about money in some sort of laboratory setting is, of course, ridiculous. Markets provide information that isn't available in the lab. And Vernon Smith has, I think, been very eloquent and correct on this. But carry on. Sorry.

Gerd Gigerenzer: Yeah. I'll just give you another example of some of these studies. One of the so-called cognitive illusions is called the Conjunction Fallacy. The idea is that an event together with another event cannot be more likely than the single event alone. Take, for instance, the famous Linda problem. The story goes something like this. As a subject, you read that Linda is 31 years old, active in the feminist movement, and a lot of other things. Then you are asked: What is more probable? Linda is a bank teller--and you've got no cue that she could be a bank teller. Or, Linda is a bank teller and active in the feminist movement--now you've got some cues that she's active in the feminist movement. Most people say the latter is more probable. Now, Kahneman and Tversky said, 'Wrong,' because the probability of an event and another event together--she's a bank teller and she's active in the feminist movement--cannot be higher than the probability of the single event. That's the Conjunction Law.

What's being overlooked by my dear colleagues is that language is polysemous: 'and' has many meanings. And people are very smart at figuring out which one is meant.

Just to give you an example: when I say, 'This evening I invited friends and colleagues,' you understand what I mean. And you don't think it's the conjunction--the intersection, only those friends who are also colleagues. It's the logical 'or.'

And this kind of intelligence that we have intuitively--we can't even describe how we do it. This is one of the hallmarks of human intelligence. It's the most difficult thing to teach an algorithm or a computer: to think in this way, to make these inferences that people make. It's easy to teach a computer to think logically. But logical thinking is simple compared to the intelligence that people have.
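To make the logic concrete, here is a minimal sketch in Python. The counts are invented for a hypothetical population of 'Lindas'; they are not from the episode or from Kahneman and Tversky's study. The first comparison is the conjunction law Gigerenzer cites; the last line shows how the comparison flips if listeners read 'and' as an inclusive 'or':

```python
# Hypothetical counts, invented purely for illustration.
population = 1000
bank_tellers = 50              # Lindas who are bank tellers
feminist_bank_tellers = 40     # bank tellers who are ALSO feminists (a subset)
feminists = 600                # Lindas who are active feminists

p_teller = bank_tellers / population                        # P(T)       = 0.05
p_teller_and_feminist = feminist_bank_tellers / population  # P(T and F) = 0.04

# Conjunction law: an intersection can never be more probable than either event alone.
assert p_teller_and_feminist <= p_teller

# If 'and' is heard as an inclusive 'or' (the union), the ranking reverses,
# and choosing the second option is no longer a logical error.
p_teller_or_feminist = (bank_tellers + feminists - feminist_bank_tellers) / population
print(p_teller, p_teller_and_feminist, p_teller_or_feminist)  # 0.05 0.04 0.61
```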

1:02:16

Russ Roberts: So, let's close with a little bit of caution, maybe. When I say things like the things I've said on the program today about, say, regression analysis or the reliability of cost-benefit analysis or big data, I'm often accused of being anti-science. And I respond by saying, 'No, I'm anti bad science.' I'm not anti-science. I love science. Science is a beautiful thing. It's just that we have to understand its limitations. Otherwise, it becomes the equivalent of a religion.

And similarly in our conversation today, I think people, if they're not careful, can say, 'Yup, I just have to go with my gut.'

Now, we've talked a little bit about it before, but let's close with some thoughts on the fact that sometimes heuristics are dangerous. And sometimes it's hard to know where to find the heuristic, or what the first thing is to put in that fast-and-frugal tree.

I'll never forget a story I've told here before about the CEO [Chief Executive Officer] whose company went bankrupt. I asked him what went wrong. He said, 'Oh, I chose the wrong case study.' He was a Harvard MBA [Master of Business Administration]. He had been trained in the case-study method. And in some dimension, case studies are heuristics. They give you a taxonomy of how to make decisions rather than a formal analytical technique. And he'd fallen from grace because he had chosen the wrong rule of thumb. He'd picked the wrong case study.

So, give us some wisdom on how to be careful and not go too far in the other direction.

Gerd Gigerenzer: Yeah. I think the first step is to distinguish between situations of risk and situations of uncertainty. In situations of risk, do your calculations. That is also the world where big data is most promising, and the world of machine learning.

So this assumes a stable world. The more you have to do with situations of uncertainty, the more you need to simplify and to make things more robust because you cannot know how the future will be.

What you said is correct: heuristics can fail. But they can also be excellent. And the important question is to use them in the right situation--for instance, in situations of uncertainty.

And for most heuristics that we have investigated, we have studied in much detail: What is the ecological rationality of the heuristic? That's a term Vernon Smith also used in his Nobel speech. And ecological rationality is exactly this question: I have a tool--what is the situation where it is likely to work, and where is it not?

And we also should not forget that analytic methods, including the statistical methods that are used all the time, can lead to excellent results but also to total failures. There is nothing specific to heuristics about this.

Just recall the Value at Risk calculations before the Financial Crisis of 2008, or the models of the rating agencies. Most of these models are of the following type: they do, maybe, a Bayesian updating or some other updating based on maybe five years of data. What could they do before 2008? The market was going up all the time. And so the models could only predict that it would go up further.

That's called the Turkey Illusion: it's like the situation of a turkey that is fed every day, and every day the probability that it will be fed and not be killed increases--until day 100, which is the day before Thanksgiving. The turkey was not in a world of calculable risk.
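One simple way to put numbers on the Turkey Illusion--a sketch only; the episode does not specify which updating rule the turkey or the risk models use--is Laplace's rule of succession, where each uneventful day raises the predicted probability of another one:

```python
# A minimal sketch of the Turkey Illusion using Laplace's rule of succession
# as a stand-in for "some other updating based on maybe five years of data."

def prob_fed_tomorrow(days_fed: int) -> float:
    # Rule of succession: after n consecutive good days, P(another good day) = (n + 1) / (n + 2).
    return (days_fed + 1) / (days_fed + 2)

for day in (1, 10, 50, 99):
    print(day, round(prob_fed_tomorrow(day), 3))
# 1 0.667, 10 0.917, 50 0.981, 99 0.99 -- confidence keeps climbing right up to
# day 99, the day before Thanksgiving, because the rule can only extrapolate a
# stable past; it has no way to represent the regime change.
```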

So, the important point is: it's often said that heuristics can do well, but they can fail. But it's almost never said that Bayesian analysis can do well or fail, or that Value at Risk can do well or fail.

The point is: whatever tool we use--whether it's a heuristic, a fast-and-frugal tree, for example, or some analytical calculation--it's a tool that is good for a certain type of environment and bad for another one. A hammer needs a nail; a screwdriver does not.
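For listeners who have not met the term, a fast-and-frugal tree is just an ordered sequence of yes/no cues in which each cue can end the decision on its own. The sketch below is hypothetical: the coronary-care cues are loosely modeled on an example Gigerenzer has used elsewhere, not taken from this conversation:

```python
# A minimal, hypothetical fast-and-frugal tree: three ordered yes/no cues,
# each of which can exit with a final decision.

def admit_to_coronary_care(st_segment_elevated: bool,
                           chest_pain_is_chief_complaint: bool,
                           any_other_risk_factor: bool) -> str:
    if st_segment_elevated:                 # cue 1: exit on "yes"
        return "coronary care unit"
    if not chest_pain_is_chief_complaint:   # cue 2: exit on "no"
        return "regular nursing bed"
    if any_other_risk_factor:               # cue 3: last cue decides either way
        return "coronary care unit"
    return "regular nursing bed"

print(admit_to_coronary_care(False, True, False))  # -> "regular nursing bed"
```

No weighting, no adding, no regression coefficients--which is exactly what makes the rule easy to apply and, in unstable environments, often more robust.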

Russ Roberts: My guest today has been Gerd Gigerenzer. His book is Gut Feelings. We'll put up links to a lot of the papers he's mentioned and to some others. Gerd, thanks for being part of EconTalk.

Gerd Gigerenzer: Thank you. It was a pleasure.