Intro. [Recording date: November 19th, 2020.]
Russ Roberts: Today is November 19th, 2020 and my guest is author and broadcaster, Michael Blastland. He is the author of The Hidden Half: The Unseen Factors that Influence Everything. Michael, welcome to EconTalk.
Michael Blastland: Thank you.
Russ Roberts: What's the 'hidden half'? What do you mean by that in your title?
Michael Blastland: One of the great human endeavors is finding all those regularities that govern the way that people behave, the way that systems work, the way that one thing causes another. I mean, that's what we spend most of our time doing. It's a great part of our human instinct to look for the way that one thing leads to another. Even the way my brakes work on my bicycle, I'm pretty sure they're going to stop it, and I know that because they usually do, there's some kind of regularity there. And, also because I can look at that mechanism and I can say, 'Well, it looks like the kind of thing from previous experience that ought to slow my bike down.'
So, we're observing patterns all over the place. We're observing: if you have certain experiences in youth, does that make you more likely to become a criminal? If we put up the minimum wage, does it cause unemployment? This is what we do as intelligent human beings: we look for patterns and regularities.
The hidden half is where there's breakdown. And my contention is that actually there's a lot more breakdown than we would wish for, in some ways of looking at the problem; and there's a lot more breakdown than is admitted.
In other words, there is a huge reservoir of sources of discontinuity, irregularity, chance, luck, [inaudible 00:02:07]--whatever you want to call it--which disrupts our ability to know things.
It's a simple way, in fact, of thinking about knowledge, which is that it is a kind of regularity. You know: I know something from this experience of it, and therefore I predict that it will work again tomorrow. My alarm clock will go off tomorrow because it went off yesterday. You know, these very kind of simple things. I'll go home and I'll find my bedroom and my bed and I'll be able to sleep, with luck. They're kind of normal, everyday regularities. That's what knowledge is: it's the way that information about something travels to the next instance.
And, as I say, my contention is that it doesn't travel as well as we hope it will because of what I call this hidden half of countervailing factors--enormous number of detailed countervailing factors which disrupt it.
Russ Roberts: I think it was Alfred North Whitehead who said, actually, that knowledge and effectiveness advance when we don't think about things.
And, a bunch of things that we've figured out, like my alarm clock, will probably reliably go off tomorrow. Once I set that little green light on my iPhone, I stop thinking about it because I've seen it work--not just pretty reliably, it always works. Unless I have the volume off, and then I've messed up. So, it's actually a good example of how--well, that's the hidden half.
But, that's a feature, not a bug, most of the time: that we don't think about the things we think are true. But, of course, the bug part is that sometimes they're not true.
Michael Blastland: No, I think that's exactly right. It's very easy for these things to go under the radar, you know, because we settle into habitual patterns of behavior, which very much rely on these tacit pieces of regularity that we've grown used to, that we've come to assume.
And, there's a great quote of Daniel Kahneman's--as you'll be familiar with; probably you know Danny--who says that it's a fault of the academic mind, actually, that once you've applied some kind of model, some kind of theory, some kind of framework to the way that you think things work--very often an academic one--it's pretty hard not to do it in future. It's pretty hard just to clear out the mental furniture once it's in there. It's a tough job to remove it again. And we have these settled presumptions about how things are. And we need them, to some extent: you couldn't revisit the whole caboodle every day you get up in the morning. We just have to do it that way.
Russ Roberts: Yeah, I think of myself as kind of a specialist in confirmation bias. And I think about it a lot. And I'm quite aware that despite that focus, I still suffer from it. I see myself making leaps of presumption about the quality of this study or that study based on the worldview framework perspective I bring to it. I think it's very important.
Russ Roberts: Now, your book starts with a rather startling beginning. And, let's talk about crayfish. You wouldn't think that crayfish would be very interesting, and you wouldn't think they'd have much to do with what we're talking about, but it turns out they're a spectacularly thought-provoking example of the phenomenon you're mentioning. So, talk about crayfish.
Michael Blastland: Crayfish are just a wonderful story. It starts in an aquarium in Germany. You can tell this has got to be weird. There are these enthusiasts for aquaria in Germany, and amongst their collections, you know, some of them have a crayfish. And, this is a recognized crayfish: it's been imported actually from the United States, from Miami I think, that kind of area. And, they pop it in their aquarium; and, you know, it's just one crayfish. And it has offspring. And, this happens in a few places; and people say, 'Well, that's weird because there's no male. Okay, well, maybe it got pregnant--or whatever crayfish do, the equivalent term--in transit, or in its place of origin, or the shop where I bought it from, or something like this.' But, anyway, they observe this and they're a bit puzzled by this. But, gradually they begin to realize that all the crayfish, even the offspring, are also female; and they're still having offspring. And, they're saying to themselves, 'Where are the males? They are all females and there are no males here.' Some people's dream world, by the way--
Russ Roberts: I know--
Michael Blastland: you could have them without males involved.
Anyway, they gradually begin to suspect that these creatures somehow spontaneously have become parthenogenetic. So, they're reproducing asexually without any sexual contact. They've basically become--they've acquired the ability to clone themselves.
And, nobody quite knows how they've done this. It seems to have been one spontaneous genetic mutation in just one crayfish somewhere along the line. And, then of course once one can do it, all its offspring have the same genetic capability, so they can do it, too. I mean, this is--the consequences for this are fantastic, because you only need to release one of these things in the wild and suddenly you've got a whole population.
And, actually that's happened. There are areas of the world, Malaysia I think, where these things have got out into the wild and they've just totally overrun the place. They're highly fecund. They breed like crazy. They're robust creatures; they're not fragile[?]. You can put them on the barbecue, by the way. If you like a barbecued crayfish, maybe you should think about this variety.
Eventually they were given the name 'marmorkrebs'--marbled crayfish--because the scientists got to hear about these things. And, their eyes lit up. You can imagine: 'Hey, we have a new species, an entirely new species; it doesn't exist in the wild; it's popped up spontaneously in aquaria in Germany.'
And, we can use this, because it suggested to them that here was a way of understanding that old thorny question about the balance of forces between nature and nurture. You know, because we've got half the problem contained. It's absolutely nailed down: They are genetically identical.
So, if we see differences in these creatures, well, it's got to be the other one. Hasn't it? It's got to be the environmental cause in that case.
So, they got hold of a few of these things, and they started putting them into tanks in the lab. But, they also went a step further--and this is where it gets really interesting--because they also standardized their environments.
So, they made sure that the water every single creature was in was the same. They made sure that all their food was the same; they made sure that every single creature had more than enough to eat so there needn't be any competition for food. They put quite a lot of them on their own so there wasn't even any interaction. They had the same person examine them on every occasion using the same variety of rubber gloves. They tried basically to standardize everything they could think of and make their environments as boringly uniform as imaginable. And, this is Germany--they're good at that.
So, what did they look like, these creatures, as they developed? Because, now we have perfectly consistent genetics: they checked that, they didn't just assume it. We have, as far as humanly possible, a consistent environment. These are the two big causes, as far as we know, of everything.
So, okay. The marmorkrebs--you know where this is going--they're fantastically varied. You can take marmorkrebs from the same batch of eggs and one of them turns out 20 times the weight of another. The physical variety is just astonishing.
They're genetically identical, their environments are identical. They're all fed to excess--no competition for food. One is 20 times the weight of another. The carapace on every single one of them, the shell, has a different pattern of markings. They have these little feeding parks [?] around the front of the mouth: they have different numbers, different physical numbers. It's like having different numbers of teeth. So, they're physically different.
They're also behaviourally different. Some of them like a crowd; some of them are loners. Some of them are really gregarious. When you bring them together, some of them turn out to be dominant and some of them are kind of subservient. Some of them feed when they're laying, some of them don't. The point in life when they start laying eggs is quite radically different. Their lifespans vary by a factor of three.
And, imagine that in human terms: take triplets, genetically identical, standardized environments, and imagine their lifespans varying by a factor of three as a norm. I mean, just the variety in these creatures was absolutely dumbfounding.
And, you say, 'Okay,' once you've got over the shock, 'Why? What's going on in order to produce this kind of variety when everything we know is the same?' The same clearly isn't, in some way, the same, but what is it that's different?
And, this is where we come back to the original definition of the hidden half--you know, that all the time we're looking for the big regularities. And, here are two of the most Herculean regularities we've ever come across, genetics and environment. And, it's neither.
Now, how powerful is this effect? Well, from some of the descriptions of the variations, you can tell: it's a pretty strong effect, whatever it is--this third cause, this hidden half, this something. If we get around to talking about people, in some ways you can argue that really about a half of the way that things turn out is accounted for by factors which we simply can't define in normal terms.
So, if it's a half, that's the equal of the other two put together, which is one way of thinking about it.
You think about the human history of argument over genetics versus environment and the way we've slaughtered each other about that. And, the way that it's still such a bloody argument--I mean, I don't mean that just metaphorically. And you say, 'There's something else, which is the equal of them both?'
Where's the conversation about that? Where's the argument there? Where's the scientific recognition of this vast range of uncertainty? Because it's thoroughly unpredictable. We can talk in a minute about some of the potential causes. But for me, it just opens up a range of possibilities about the determinants of life, the causal factors that we began talking about--it's just irresistible. You can't hear that story without getting curious about the real influences on the way the world works.
Russ Roberts: Well, I thought of it as the dark matter of causation. It's the things that we can't--the hidden half is the dark matter. We know it's there because it has to be. What is it? Well, it's the dark matter, it's the hidden--we have a word for it actually, we call it often randomness. But, that's just a way to describe the fact--it sounds kind of scientific, actually. You can even put a Greek letter on it, like an η (eta). But, in fact, it's--and we sometimes call it 'noise,' which sounds a little less scientific. But it really raises some very deep philosophical questions.
And it reminds me of my favorite, one of my probably five favorite jokes which I heard from Joseph Telushkin. It's about the--I think I've told it once before on the program--it's about the kid, he's the seventh grader and he's looking at his report card as his father looks over his shoulder at it. And it's all Ds and Fs. It's a horror show. And, the father is getting more and more upset and the kid turns back to look at his dad and says, 'What do you think, dad? Nature or nurture?'
And, what you're suggesting, of course, is what--actually, that joke is quite appropriate because the dad's reaction is, 'No. It's neither. It's you. It's something distinctive that you've messed up.'
And, it does raise a question of free will, right? Because I think most of us believe that, as you say, nature or nurture describe everything. That's it: we're all the victims, or we were given these gifts or punished by our genetics and the way we were raised and the things we're exposed to. And this suggests there's something else.
Talk about Mike Tyson and his brother.
Michael Blastland: Okay. I mean, I think you're right, by the way. Those are all big questions raised there about genetics and nurture, and this very peculiar appetite I think some people have for reducing ourselves to machines. And, we have these two arguments, but they're both deterministic. Why do you want either of them, actually? Why would you wish for either of them to be true?
Russ Roberts: Well, if you're a researcher, you might.
Michael Blastland: And, you know[?]--that's--I think that's the thing. And, you know, there are researchers who talk about this element--you said 'dark matter'; I call it the 'hidden half.'
There's one who talks about 'the gloomy prospect.' So, the gloomy prospect is that in research terms, we'll never be able to nail some of this stuff down. And this is horrifying to them.
Okay, so Mike Tyson--well, you start to say about the marmorkrebs, 'Okay, these are weird animals. We can't really interrogate their lives too carefully. They can't tell us what's going on. So, maybe there are micro-environmental influences or something, which we're just not going to get out. People, on the other hand, are much more accessible. We can go in there with the surveys and the questionnaires and the life studies and all the rest of it. And we can get much more information about them and possibly pick out some of the causal influences.'
Now, the curious thing about Mike Tyson is that he has often spoken of himself, actually, as someone who is very much the product of his upbringing. He grew up in a world where violence was common, crime was the norm. I think he'd been arrested something like 38 times by the time he was 13, 14. He was in what has been referred to, I think, as one of the most notorious juvenile prisons. He grew up in a home where his father wasn't really his father and soon fled the coop. Anyway, his mother, he says, was violent towards him. I mean, the stories are horrifying if you read his autobiography. I think he says at one point, 'I did a lot of bad, bad stuff,' in so many words.
So, you say: Okay, Mike Tyson--he went on; the rap sheet just keeps on growing. And, is it any surprise? Having grown up in a culture of violence, he subsequently bit off another boxer's ear. He was in prison for rape. And, you kind of think, 'Yeah. That's about what you'd expect, in a way.'
Not a very sympathetic view in some ways. In others it kind of feels as if you might be making allowances for him. I don't want to get into that really, just to observe the fact that many people say this is kind of: History is destiny.
Mike Tyson has a brother. His brother is a specialist trauma assistant in a hospital in Los Angeles where he patches up the victims of violence. He had that same upbringing.
So, now you say, 'Okay, well, which is the causal influence now?'
And, pretty soon you start making excuses. You try to find reasons why it could affect one person differently to the other. And, you say, 'Well, Mike reacted one way; his brother reacted the other way.' But, they were both reacting to violence in one form or another.
The problem is once you've said that, how do you know which way this most extreme of causal influences is going to pan out? Because we've now accepted that it can take you in two diametrically opposed directions.
So, our predictive ability--you suddenly begin to realize, 'Well, which is it going to be?' And, that kind of, at the individual level, destroys quite a lot of our ability to define causal pathways.
As it turns out, you know, you can narrow--because some people will be saying, 'Okay, they're brothers, but maybe they're just different genetically. And, that explains why one went one way or the other.'
But, you can then do a similar sort of thing with genetically identical twins.
So, you say, 'Okay, like the marmorkrebs now, we're getting there. We've got identical genetics and we've got a very, very similar upbringing because they're in the same household, same parents, same school, same TV, pretty much same diet.'
And, you can say things like, 'Okay, if one of these twins has schizophrenia, what's the likelihood the other one will?' Well, it's not 100%. It's about 50%. Now, that's quite a lot of regularity. It's a lot more than you'd find amongst two strangers. But there's still a half of it that's missing.
You can go even further. You can take conjoined twins, and you can find examples where there are still astonishing differences between them. There were two conjoined twins in the Far East who actually undertook the most dangerous operation in order to be separated, because they felt they were so distinctive as individuals that they could not lead a life together. They had different interests. One of them wanted to live at home; the other one wanted to go to the big city. One wanted to be a lawyer; the other one wanted to be a writer. They had different hobbies. They were extraordinarily different. So, we're getting closer and closer to the point where we're saying, like the marmorkrebs, everything is the same.
I'll take you even one step further. You know, if you still think there could be room for causes in the normal sense, ones that we can understand and define--I'll take you down to one person. Even in one of us, even in me or you, Russ: just draw a line down the middle of the face, and the two sides are different. Now, that's nature trying to apply the genetic program identically to make a carbon copy, as we call them, carbon copies are--
Russ Roberts: Symmetric.
Michael Blastland: Yeah, symmetrical. And, it can't do it. I think some people would say, 'Okay, only in a trivial way, the difference is never that big; you can still see that it's the same person.'
So, let's take a more serious example. Let's take a case of breast cancer in one breast. We're going to put aside some of the big genetic causes of that for a moment, the BRCA genes 1 and 2 [BReast CAncer genes], which can cause a lot of breast cancer. And, we're just going to say, 'Okay, so it's a more ordinary case of breast cancer. But, it's just in one breast.' We're talking about one person. If you've had it in one breast, how raised is the probability that you'll have it in the other, compared with the norm amongst people who've never had it at all?
Well, it's hardly raised at all. Hardly at all.
Now, a striking way of putting that is to say that the other breast is more like the breast of a stranger than it is like its own twin.
So, the point here I'm making is the difference--difference--creeps in even where you think you've defined everything.
And, this difference, this kind of hidden half, this enigmatic variation--what good is it going to do you to define every lifetime exposure for that one woman? It's all going to be the same. To define the genetics? They're identical. To define the diet? It's identical. And, yet you still have difference.
Now, if you can't do it in one person where everything is identical, think how hard it's going to be, really, between two different individuals, where the factors that could be present are just indescribably complicated and numerous. So, that gives you a sense of the difficulty.
Russ Roberts: And, it really raises the question of how you should think about randomness, which I alluded to a minute ago. I think about Hayek's Nobel Prize speech, "The Pretence of Knowledge," which you reference, I think, in the book.
Russ Roberts: In that speech, he talks about the idea that, you know, if we had all the information, we could predict, say, the outcome of a sporting event. In other words, we have a lot of information in advance. A sporting event is very narrow; the rules are very prescribed. You can't say, 'I'm going to play this game in a different field than my opponent.' There are a lot of things you can't innovate on or have be random.
But, there are inevitably things that you can't measure. What the quarterback had for breakfast in the case of a football game. How the pitcher got along with his wife, talked to his wife. Maybe there's a conversation they had the morning of the game and that somehow disrupted his normal practice and he wasn't as effective that day. And so, you can't forecast his performance. Other athletes of course are as reliable as clockwork, or similarly--some kind of clock.
But, what Hayek is saying in there is a little different than what you're saying.
What Hayek is saying is that variation is so complex--there's so many causes to behavior--you'll never have data on everything you need to know.
And, what the causal economist says is that, 'Of course. But I've got enough, I've got most of it. I've got the important things. I have data on the things that are truly the levers of change.'
And, what you're saying is, I think, two things. One is to remind us that those other levers are out there--and maybe they're much larger than they appear to be.
But, the second, I think, is the subtler point--and we don't have to go into this in detail--but I think it's fascinating that there may be something more: you could say, true randomness. A lot of what we attribute to randomness is just a way of saying, 'I can't measure it.' Or, 'I can't observe it.' Or, 'I can't get data on it.'
But, there's also this weird micro-, micro-, micro-, quantum-level uncertainty in the world. And, you give the example of the woman who smokes cigarettes like a chimney and lives to be 103, because at the time that she would have gotten the cancer, her front door was open and she got a little chill and she coughed; it expelled a tiny, tiny cell--a cell that would otherwise have grown into cancer and killed her. Now, that's an example of something I just can't observe; I can't observe it down to that micro-, micro-, micro-level. But, there's also the possibility there's something even finer going on, something more mysterious that is beyond the calipers, beyond the survey questionnaire, etc. And, it's a more philosophical question. I don't know if you have anything to say about it. You're free to respond.
Michael Blastland: No, I think what you're describing with the possibility that, you know, we could discover the cough that saved somebody from cancer when you would expect them to get it, it's akin to the idea of Laplace's demon. Laplace was a statistician who said that if we had complete knowledge of all the causal factors in the universe, then everything would become predictable.
Russ Roberts: Of course.
Michael Blastland: And, the difference between that and the philosophical question about whether there is some kind of true randomness in the way that people sometimes talk about quantum randomness--I think from a practical point of view, it doesn't actually matter. If it's not ascertainable, then it might as well be true randomness. I mean, there's a vigorous argument about whether true randomness exists. Einstein famously didn't like the idea. I just don't mind because I don't think in the end it becomes that material, actually, to the problem, the practical problem of deciding what to do next. If we don't know, we don't know.
Russ Roberts: Well said. Yeah. It's veiled, it's veiled from us in a certain fundamental sense, I think.
What's more--everyone would agree with that. The reason your book is interesting, besides the way you talk about it--and it's a fantastically entertaining read. But the reason it's interesting is the magnitude of it. Right? It's one thing to say, 'I can't quite predict the exact score of the football game, but I'm pretty good at figuring out who's going to win.' Or, 'I can't get it within a certain number of points or touchdowns or whatever, but I can get close.'
You are suggesting a lot of times it's not close. And, I think that's really unpleasant to confront. But useful. And, we'll get to the usefulness in a minute.
Russ Roberts: Before we leave this section of the book, I want you to talk about Darwin's nose, because I just--and there aren't many times you get to talk about Darwin's nose. So, why is Darwin's nose in your book? This would be Charles Darwin, and nose meaning the thing on the front of your face.
Michael Blastland: Yeah. So, Charles Darwin says in his autobiography that the voyage of the Beagle--the ship he sailed to the Galapagos Islands on--'has been by far the most important event in my life and has determined my whole career. Yet it depended on so small a circumstance as my uncle offering to drive me 30 miles to Shrewsbury, which few uncles would have done, and on such a trifle as the shape of my nose.'
So, how do On the Origin of Species and Darwin's subsequent career depend on the shape of his nose? Well, I use this just as a narrative illustration of the point you made about the cough--that these trivial biographical events can be incredibly deterministic. It's because FitzRoy, the captain of the Beagle, believed that facial features revealed character. And, he was of his time: this was a very common belief in those years. And, Darwin's nose was really borderline. He was almost rejected for that voyage because of his nose.
Russ Roberts: Just made it.
Michael Blastland: And, I guess Fitzroy stared at his nose for a long time and judged the width of his nostrils and the bridge and thought, 'No, I think this guy is going to be okay.' And, eventually decided to admit him.
I mean, it's a slightly tongue-in-cheek example. But I don't think that's kind of unfamiliar to most people's experience when we look back on our lives--the serendipity, the chance moments. I think we all understand that these things can have disproportionately profound effects on the way things run for us. I'm hugely conscious of the element of luck in my life--that chance, happenstance, again, whatever words you want to use for these kinds of things.
And, I don't want to suggest that things like a privileged upbringing or an impoverished upbringing or great advantages in life of one kind or another are irrelevant.
Russ Roberts: Of course.
Michael Blastland: I call it the hidden half. I don't call it the hidden whole.
And, then this is important: I recognize regularity and I'm as fascinated by it as anybody else. I'm an absolute probability nerd. I'm completely captivated by the degree of regularity that we can find.
It's simply that I think we neglect the irregularity. We don't appreciate sufficiently its power.
You mentioned a moment ago the economists who say, 'Okay, we don't have all the data, but we have enough.' I just would like to give a little bit of illustration from a probabilistic point of view about how weak the data can be when it comes to trying to make it useful. And, I'll take this one from medicine.
So, let's take the top-10-selling medicines in the United States. They work: you know, we have enough data. We can say with quite a lot of confidence that if you take one large group of people and you give them these drugs and you take another large group and you give them a placebo, then the people who take the drugs, they're more likely to do better with whatever condition they have. We know that, this is solid [crosstalk 00:31:39]--
Russ Roberts: It's the gold standard, revealed in double-blind randomized controlled trials.
Michael Blastland: Gold standard. Absolutely, as robust as knowledge comes in this field.
So, okay, we know these things work. How likely are they to work for you when you next take them? Amongst those top 10, these are the top 10--I repeat: they have a lot of people's confidence. They've been through all the trials, the doctor who prescribes them, the physician who says, 'Take this,' they believe in them. Well, it ranges from about one in four--so, one in four of the people who take these things are actually going to show an improvement against the measure that we say marks improvement--all the way to about one in 25--meaning 24 out of 25 times these things that work, don't work. I mean, this is a little head-spinning to say that things that genuinely work most of the time do not work.
If your car didn't start 24 out of 25 times, you'd take it to the junk yard. But, these are drugs, which we can say work.
Now, I say this just to give a sense of the scale, the proportions that we're talking about when people say, 'We have enough data. We have enough regularity.' So, we can have a huge amount of regularity at the very big scale, when we're looking at the differences between huge populations, which can be almost useless at the level of individual predictability.
The same thing actually applies with a problem like cancer and smoking. If you take two countries, hypothetically, where in one of them everybody smokes and in the other nobody smokes, you're going to see a big difference in the amount of lung cancer and other cancers in the smoking country. And, you're going to be able to conclude from that, 'Well, it looks like smoking probably has a lot to do with this.'
On the other hand, if you focus just on the country where everybody smokes and you say, 'Okay. Now, how predictive is smoking of cancer?'--it's not predictive at all, because some people get it and some people don't. In other words, you're at a level of pure luck.
Now I say that and a lot of physicians would get really antsy. They're going to say, 'Yeah, you can't call cancer luck. There's a clear cause.' And, there is, until you try and understand it at the individual level; and then there is a large element of luck. These things are not one or the other, they are both. They are both luck and they are determined.
And, those things operate in tandem at different scales. We can observe them at different scales. We can observe one at the population level. We have to resort to the other at the individual level, because there's no other way of explaining it. You have a whole population where everyone smokes the same number of cigarettes, some people get cancer, some people don't. You have to acknowledge some element of luck.
Now, I get into trouble for saying this, but I can say that while at the same time firmly believing that smoking causes cancer. And, they say, 'You can't have it both ways.' But, you can. You can have regular, clear knowledge at the probabilistic scale--like you economists: we think we know enough--and you can have complete confusion at the individual level.
And so, that's your hidden half in those cases. It's: How reliably can it tell you what will happen in the next instance, when you next want to use this knowledge? Will it work? You might find actually that your data is less useful than you thought it would be.
Russ Roberts: And, of course, the flip side of that is the treatment that makes something better that probably would have gotten better on its own--but you don't know that, so you tend to attribute the improvement to the treatment. I had a skin tag on my lower eyelid, and careful viewers will be able to notice that in past YouTube episodes of EconTalk. Just a little white spot sitting on my lower eyelid. I went to the dermatologist; he said, 'Yeah, that's a little scary. It's on your lower eyelid. You should get it removed.' So, I went to an eye doctor. He said, 'Well, I can remove it but it's going to bleed a lot, and it's really not harmful. So, it's up to you. It's mainly cosmetic; it's not going to hurt you. You do what you want.'
So, I thought, 'I can leave it alone.' It's pretty much gone. It took two years. It either fell off or faded away--you can look now if you want--it's on this side; it's pretty much gone. But, if he had taken it off, I would have said, 'Thank goodness I got rid of that. It could have overwhelmed me.'
And so, many procedures, of course, are like that. People fall into post hoc ergo propter hoc--'after this, therefore because of this'--the classic fallacy that's so hard for us as human beings to resist.
Russ Roberts: The other thing I want to mention uses an example: the fact that you're on this program. And, I want to use that as a way to think about how I think 'luck' can be a little bit misleading as a guide to how to live. There is uncertainty in life; there's a lot of randomness, from our own human perspective, that we can't resolve--in our lives or in interventions of various kinds, policy interventions and so on.
But, you're on this program. Let's think about why you're on this program right now, why we're having this conversation. You're on this program: I had never heard of your book--which surprised me because it's right up my alley, as listeners will know--but I hadn't heard of it. And, I think--I didn't check beforehand, but I think Brent Orrell, who is at AEI [American Enterprise Institute] and who is a friend and an EconTalk listener, interviewed you about the book a while ago and recommended it to me. And, I wrote you, I think, and asked for a copy of it; and I probably forgot about it for a long time. And, 'as fate would have it'--just luck on your behalf. Unless this goes badly, which it could, in the remaining half an hour or so; it could end up being a major disappointment or worse.
But, assuming it goes okay, this bit of good fortune occurred because I was looking through some PDFs [Portable Document Format files] and I saw the title and it kind of grabbed me again; and I opened it again and I started reading it again; and I really liked it. And, I thought, 'Oh, maybe I'll do that.' And, then I forgot about it again.
And, you could have--at this point, prospective guests sometimes intervene. They write me an email and say, 'Hope you're still thinking about my book.' In which case I usually go, 'Yeah, I should look at that.' And, who knows what kind of mood I'm in that day or whether I've had eight guests about the hidden half on recently, in which case you're doomed--you got no chance. But, in this case, it kind of grabbed me, and I decided to invite you.
So, what's the lesson? The answer is there isn't one. Right? Because most of what occurred just kind of happened--and I'm going to push back against this and get your reaction. Because a lot of people believe there's a way you get on a program like this, where there are just too many books and not enough episodes: you've got to be persistent. And: what you should have done, Michael--you almost missed out; you just got lucky--what you should have done is take life by the reins. You should have grabbed your opportunity; you should have emailed me yourself two or three times and said, 'Brent really liked our conversation. You should listen to it. I think we'd make a--'
And, people do that to me all the time. And, sometimes it works and sometimes I go, 'Oh, I wish this person would leave me alone.' With one potential guest, I got four or five emails in one day from alleged strangers who said, 'You should read this book; I think you'd really like it.' I got four or five. And, I thought, 'Oh, this is a concerted effort by this person to get on the program.' Which sometimes I respect. It depends how well the email is crafted and what the message is.
But, I think often we think there's a strategy--which is either, 'Be really persistent; never give up,' or, 'Don't bug them too much because--'. And, the answer is: you never know enough about the person you're interacting with to know how they're going to respond to that kind of stimulus. And, you don't know what they had for breakfast, what kind of conversation they had with their wife, and what were the six books they read before. So, it is 'complicated,' which is part of the randomness.
But, the other thing I want to argue is that if you do it enough--if you interact with enough people in a thoughtful, kind, and helpful way--good things happen to you. Right? I can't tell you how many times I've said yes to a request that I wasn't sure was going to work out really well for me. Someone asked a favor of me and I said, 'Yes'; and I got some lovely thing as a result. And, I think when you grant a lot of favors--well, if you're not careful, you don't get any work done. So, again, not an iron rule. But, at the same time, you do enhance your luck, as Branch Rickey said, I think--what is it?--'99% of luck is persistence'? I can't remember the quote right now.
Michael Blastland: I think you're spot on there, Russ. I mean, it's interesting which attributes are going to bring you success--this idea that we can observe certain people and define these things.
One that I really like is risk-taking. So, if you talk to business leaders, entrepreneurs particularly, they'll say, 'Yeah, the key thing: you've got to be able to take a big risk.' The people you never hear from are all the people who took big risks and are not business leaders and successful entrepreneurs--
Russ Roberts: They went out of business--
Michael Blastland: They went out of business. So, should you take big risks? Well, you shouldn't just listen to the people for whom it worked out--there's a kind of huge selection bias, survivorship bias, going on in those kinds of reports of what works. As you say, it's probably harder to predict which of these strategies is going to work for a given individual. You can become a pain in the neck if you're persistent. Or, you can swing it. And, it's pretty hard to tell.
The point you make, I think, about sometimes just availing yourself of a lot of opportunities--that also pays off.
I think there's something in that, again, because in a way, the way I like to think about it is that you're making yourself open to coincidence.
So, thinking of coincidence as a benign kind of risk, or just as beneficent chance. You just kind of say, 'Oh man, a lot of this isn't going to work out. Or it's just going to come to nothing, one way or the other. But some of it will.' That's the nature of chance. But, once in a while, things roll for you. And, I think there is a good, reasonable human strategy where you say, 'Okay, I'm just going to try and give luck a lot of opportunities in my life.' Accepting that [crosstalk 00:42:19]--
Russ Roberts: Especially when you can avoid or choose not to implement the left-hand tail. I think this has something to do with convexity or concavity; and Nassim Taleb would tell me which one it is--I think it's convexity.
But, the idea that having lots of options that you don't have to take--it's a bad idea to throw yourself onto the randomness wheel, where everything that could happen to you might happen. But, if you can reject the worst things that can happen and avoid them--because you don't have to say yes to the request that follows after the request that you did--I do think good things can happen. I think it's not a bad piece of advice, so I like that.
Russ Roberts: Let's talk about something that I've heard recently and you talk about, which I find very provocative: the weakest link principle as an impediment to solving problems.
Michael Blastland: Yeah, I came across it in some work by a philosopher of science called Nancy Cartwright--very interested in causation. She tells a story about a project in a province in India, Tamil Nadu, which was backed by the World Bank. And, what they observed in Tamil Nadu was that there was a very high rate of infant mortality. Kids were being born underdeveloped, so the odds were against them from the beginning. And, quite a lot of them were dying--too many. And, they discovered that there was a phenomenon there whereby pregnant mothers were doing what they called 'eating down.' They didn't have much faith in the health system there; they didn't want to push out a big baby. So, they reduced the amount of food that they were taking as they became more pregnant. And, it was this that was contributing to the malnourishment of their babies and reducing the chances that their own children would survive. And, you could understand why they felt this way, but clearly the consequences were pretty catastrophic.
So, an aid team went in there and they tried to understand. They didn't just march in with a solution: they tried to understand the thinking, the fears. They tried to find ways of intervening that would be accepted and would be felt to be constructive. It was a sensitive program, and they did a couple of things, in principle. They talked to the mothers and said, 'This is why we think this is happening. This is the link to the death of your children. And, by the way, actually healthcare has improved a lot around here. Did you know there's a new clinic open, and their techniques are better, and it's less risky than maybe it was in your mother's day? And, also, if you don't have enough food, we're going to help you out there as well.'
They did this, and the results were fantastic. It was a really very, very effective program, recognized by the World Bank as meeting a high standard of evidence for an effective intervention. One of those rare things that really worked.
So, this is where it gets interesting, because Nancy tells the story about what happens next. And, everybody was very excited with this intervention and then they decided, 'Okay, there must be other parts of the world where we can use this.'
So, they went to Bangladesh, where the problems in some communities were very similar. And, they did the same thing. And, it failed. There was no meaningful improvement in infant mortality. So, at this point they all sit down and scratch their heads and say, 'How come something that worked so well for so many people--and again, we've done proper trials of this thing; we have good evidence--doesn't work here at all?'
Well, the difference, as the story goes, is that, whereas in Tamil Nadu, the original site of the experiment, family meals were controlled by the mother, in Bangladesh they were controlled by the mother-in-law. The domestic head of the household was not the mother, it was the mother-in-law, the son's mother. Now, maybe when she was handing out the food, she was less sensitive to the needs of the mother than she was to her own son. Possibly. I'm speculating now. Maybe the researchers had only talked to the mother about the improvements in healthcare; they hadn't talked to the mother-in-law. Maybe they hadn't made clear to the mother-in-law the link between eating down and malnourished children and infant mortality. Maybe they'd said that to the mother but not to the mother-in-law, so the mother-in-law wasn't quite persuaded that this was a legitimate argument.
But, one way or another, the mother-in-law turned out to be a central causative agent. And, could you have predicted that?
Yeah--and this is what Nancy means by the weakest link: You can have absolutely robust evidence--and, coming right back to the principle that we articulated right at the beginning, knowledge is a question of whether our information and understanding travel--you can take robust knowledge and understanding and ask: does it travel from one place to another? It might fail because of this one weak link. Everything else you got right, but you just failed to appreciate the role of the mother-in-law.
I'll give you another example. There was a project--again, I think it was in the Indian sub-continent--to kind of connect a community to a public water supply, rather than having to go to standpipes and things like this. And, they went to huge efforts to make sure the supply was reliable, and the water was good, and that everybody knew about it, and all the rest.
So, what's the one thing they forgot to do, which meant the project had an appallingly low sign-up rate? They forgot about the importance of the photocopier.
And, at this point you say, 'Well, what has a photocopier got to do with the water supply?' Well, in order to sign up, you needed a duplicated copy of the application form or whatever it was; and nobody could duplicate the form. You needed to go into an office with a photocopier.
And, this proved such a hurdle--I mean, it's preposterous: it's the final implementation stage, the last stage of design. This is an example from Esther Duflo, who recently won the Nobel Prize in economics, and her point was that these very small details--almost the last link in the chain, for her--these very small details of implementation can be absolutely critical to the success or failure of an idea, of a project.
So, there you can see this weak link principle. Everything else can be fine, but if you're missing one piece, it can be a 100% failure.
Russ Roberts: Yeah, the challenge there, of course, is that there's a tendency to then say, 'Well, just we've got to make sure we have a photocopier.' And, then when you get to this other village, it's the mother-in-law. Or, it's the soil.
Michael Blastland: Yeah, something else. Yeah.
Russ Roberts: We've talked about this issue before with deworming, which was this exciting finding that removing parasites from school children improved their ability to learn, and therefore improved their ability to get a job, and therefore improved their ability to avoid poverty: and, therefore, deworming is the most effective aid vehicle we can possibly have. It's 10 cents a pill. It's fabulous. And, a lot of those results didn't hold up; and they didn't generalize; and they didn't scale.
And, you start to think about why. And, one of them is the weakest link.
I mean, the obvious point, if you read William Easterly's work--he has been on the program--is that obviously education is the key to ending poverty in the poorest countries: you need a better education system. Well, if you have great education but you can't get out of your village to find a job, or the job market doesn't work very well--there are often these weakest links that make it hard for the intervention to be as successful as it was the first time you observed it.
And, the only thing I'm adding to yours, which I think is important, is that the weakest link isn't the same weakest link every time, which makes it hard--
Michael Blastland: Yeah. Deeply, deeply frustrating. Esther Duflo, who I mentioned a moment ago--on the question of education, the first thing she'd say is that we have had all these magic bullets in aid economics over the years which have, one after the other, seemed not to have the success that we expected. And, why is that? Her argument is that quite often it doesn't depend on what they call 'pressing the button on the machine'--just getting some whole grand theory started, because that's all it takes: 'Oh, well, let's just run the machine and it'll all be okay.' She says, 'No, the machine doesn't work if you don't have the right language for the school textbooks, if you don't get your fingernails dirty with the real grit of the implementation. The local contextual details are every bit as important as the grand theorizing and the big answers. And, without both of those, you're not going to get anywhere.'
And, I think she probably has a point. Her emphasis is very much on the low-level detail. Other people emphasize the higher-level theory--you know: 'If you cut the education budget by 30%, you're not going to do too many people much good, right?' But, similarly: you can increase the budget, but buy the wrong textbooks, and you don't do any good either. So, it seems pretty clear to me that you need to think at every level.
And, this endless, iterative problem of going back to the theory, testing through experiment, considering the details, trying to get an understanding from local people of what the local motivations are, and then doing the whole thing all over again; and repeating it and repeating it. Understanding that causes work in teams--to take your layer analogy earlier on--it's very seldom one thing. It's usually an interaction of many things. And, any one of them could go wrong and the whole thing falls down. So, understanding causality is, I agree, maddening. Infuriating. Endlessly frustrating. Can I tell you one story about toilets?
Russ Roberts: Yeah, tell us that. I know that story. It's in your book. It's a great story. Tell us.
Michael Blastland: I do like this one. Because open defecation around the world is an enormous problem. If you don't have the facilities, it can cause all kinds of health difficulties. It's pretty easy to see why: water courses become polluted; disease is easily carried.
To solve that problem in India--and they really want to solve it--would require building a new toilet or a new latrine about once every couple of seconds or something for, I don't know, 30 years. It's a monstrous problem. So, you say, 'Okay, well, the state is going to be pretty constrained in its ability to do this. Let's see if we can find some way of motivating individuals to do it.' Now, here's the causal problem: How do you do that?
So, a team went in; and they said, 'Okay, let's first try to understand it. Maybe if we just give people some money--a grant for a toilet--and they come back and they show us they built one, then fine, that's the way to do it. So, we don't have to build it; we just give out a bit of money, maybe a loan. Maybe they have to repay it.' Well, they did that, and they found that people spent it on something else. So, that doesn't work.
So, you say, 'Okay, well, maybe we have to teach them about the importance of building a latrine, and then maybe they'll do it.' Or, 'Maybe we don't give them the money until they've built the latrine, and that will work.' And so, you don't give them money, but you tell them they can have the money when they've done it. And they don't do it. So, that doesn't work.
So, you go through all these kinds of things, and you have to ask: when they say, as many of them did, 'We can't afford it,' what do they mean? Do they mean they're not convinced that it's the most productive expenditure they could undertake--they do have the cash, but they don't want to spend it on this? Do they mean they actually just don't have the cash? Do they mean they think it's just not worth it on any terms? What exactly is going on? So, you have to do quite a lot to understand what that means before you can get anywhere. Now, eventually they started doing an education program--'Here's why toilets really matter'--and, 'Here's some money up front.' So, you do those things at the same time, and they built the toilets.
Did that reduce the problems caused by infection and transmission of disease and so on, caused by bad water, polluted water? No.
And, there is huge evidence to suggest that good sanitation is one of the most constructive things you can do to improve human welfare; but in this instance, did it work where they tried all these pilot schemes? Actually, no.
So, they have to go back again and say, 'Well, why didn't it work? Maybe you need a threshold of 80%, 90% of people having these toilets? Maybe, even though they have the toilets, the custom is so ingrained that they don't actually use them? Maybe one person defecating in their water source is enough. Maybe it's animal waste which is now the problem, because they're cheek-by-jowl with the animals--so what do we do about that? Do we need another program to kind of isolate one from the other?'
And so, it goes on and on and on.
Now, the interesting thing about that is the imagination that you need, the curiosity that you need, just to keep on asking questions about the myriad potential influences that could play into the success or failure of this project. It's almost impossible to anticipate which way the thing is going to go given a particular set of parameters. You just have to keep trying it--and again, this iterative process, theory, thinking, imagining, asking, testing details over and over again.
Russ Roberts: Yeah. I'm a--I'm not a big fan of intervention generally, but I am a big fan of helping people. And, I think those two things kind of collide here for me; and, in some ways, I think that's the story of the development literature of the last 25 to 50 years, which is: people with good intentions intervene with imperfect knowledge. And, they hope to do something for someone--or worse, do something to someone.
The most classic example of this, I would say, is giving people bed nets for malaria, to reduce malaria, in hope of keeping mosquito bites down. And the people get the nets; and they use them for fishing.
Now, when you hear that story, you think, 'Oh yeah, well, that's just an excuse for not sending the nets, for not helping them. So, we've got to educate them, and maybe give them enough money so that they can afford to use them as bed nets instead of fishing nets, or give them fishing nets, along with the bed nets, that work better.'
And, in theory, it's always one level down to that next weakest link.
And, I have to say, as eager as I am to see philanthropy succeed, the whole idea of it is somewhat intellectually offensive, ethically offensive to me when I think about it. Which is that, 'We know what's best for you. You need to be using a toilet. Here's how we do it. Come on, get with the program. You'll be better off.'
And, often they've evolved all kinds of norms to deal with the fact that they don't have a toilet, or whatever the issue is. And, we're not just oblivious to those norms--because we're imperfectly informed--we don't respect them. We say, 'Oh, that's the wrong way to fix it. Don't do that. This is our way, and our way is better.'
And, I think there's an incredible hubris. Listeners can go back and hear my episodes with Nina Munk and Jeffrey Sachs, which are two unforgettable--sometimes unpleasant--episodes in the history of EconTalk. And, I think it was in the Sachs episode where--or maybe I heard it somewhere else; I can't remember. But, they--the outsiders, the economists, the wise people--decided, 'Everybody needs to grow corn,' or whatever it was--maize--'because it works really well in your soil and you're stupidly growing yams.' Or vice versa. Without understanding a thing: how did these people live this way for centuries?
And, sometimes of course, outsiders have insights that--they bring technology or useful things. But a lot of times those people on the inside are just going like, 'Wait, these guys are really pitiful. They don't think about farming. Like, what would the world look like if everybody in this piece of the subcontinent of Africa were growing maize? Maybe that'd be a disaster.'
The irony--sorry, I'll get off my soapbox in a second--the irony is that you said to the people who are defecating in the fields, 'Well, you don't realize you're imposing costs on others.'
So: 'Your natural personal incentive is to avoid a toilet, but you don't realize that if everybody used the toilet, we'd all be better off. There's a Tragedy of the Commons; there's this Externality problem.'
And, then the reverse of that is that the researchers do the same thing--they say, 'Everybody needs to grow maize.' And, then everybody grows maize; there's a glut on the market, and the price plummets; and they're starving. Because they don't understand the full richness of what goes on.
So, it's a--the world's complicated.
Michael Blastland: I mean, I don't agree with you entirely about imposing intervention, but I do agree with you entirely about the risks. I think they're huge. And I think we've plenty of examples where people have gone in overconfident about their ability to understand and analyze a problem and they've imposed solutions which have been counterproductive. And, I think the literature is full of those kinds of examples.
And, I think there's a very reasonable, ethical debate about what's the most constructive way for societies to advance.
I'm familiar with Angus Deaton--British economist based in the United States [Nobel Laureate, Economics, 2015--Econlib Ed.]--who takes a pretty dim view of aid. And, I think that's a strong argument. I don't go the whole way with it.
Russ Roberts: Neither do I. I'm just saying that the whole mindset--which should be, 'How do I help this person help themselves?'--has to be at the center, or it often does not work. Because you don't know the weakest link, and you don't even understand the stronger link.
Michael Blastland: Yeah. I think the point I would make is that Daniel Kahneman, asked about the worst of all human cognitive biases, says, 'Overconfidence.'
And, that's the kind of characteristic that I think The Hidden Half is aimed at--it's that belief that: 'We know. We know because we're smart types of people and we have a lot of data and we have sophisticated AI [Artificial Intelligence] or machine learning and we have all these other wonderful techniques for discovering causation.'
But, people have always felt like that, and they've always made spectacular mistakes. And, you don't have to look far to see examples where presumptions about having really robust knowledge have come back and kicked us pretty quickly. You know what I mean? Donald Trump--you read any analysis of the political conditions in the United States prior to Donald Trump, and people thought they had it pretty well understood. And, then he arrives. The same in Europe with the Brexit vote in the United Kingdom--that was considered by most people to be highly unlikely. And, then, apparently out of nowhere, it happened--to the point where quite a lot of people--there were a few who said, 'We need to start tearing up the political textbooks. We don't understand how things are anymore.' We had a leader--
Russ Roberts: The Financial Crisis of 2008.
Michael Blastland: We had the financial crisis. Yeah, exactly. What were all those phrases about the Goldilocks Economy? The Great Moderation? We had it nailed down, didn't we? Until we didn't.
Time and again you see this kind of hubris, I think, in people with expertise and deep understanding--I don't deny the depth of their understanding--but it gives them a kind of false assurance, I think. Because they know a lot, they assume they therefore know enough. And, I think that's very seldom the case.
Russ Roberts: So, you have some really eloquent passages in the book which I would read, but they're longish--but, I love them and I encourage readers to read them--where you basically say that, 'Don't misunderstand what I'm saying. I'm not anti-science. I don't think we should give up on understanding everything. We should just be aware that our understanding is imperfect.'
And, I am--like you, I make the same speech all the time. People say, 'Oh, you're anti-science because you're skeptical.' And, well: That's called science. And I'm anti bad science, is what I am. And, I'm pro good science.
Why do you think this is--why are we so lonely? Why are you and I so uncommon, or seemingly uncommon, in our willingness to embrace what I would call a more accurate level of confidence about what we know?
Michael Blastland: Well, I have some optimism about that. I mean, I'm taking part in a conference called Known Unknowns, which is about the current state of knowledge about COVID. We have something like 2,000 people coming along who are attracted by that title. And so, I think possibly there's been a bit of a learning moment around the pandemic, actually--a sense that the uncertainties attached are huge on every side. On the one hand, the bench science--the discovery of the vaccine--has been mind-blowing--
Russ Roberts: Incredible. So inspiring--
Michael Blastland: The sophistication, the genius involved, the accumulation of learning is just as absolutely astonishing.
On the other hand, some of the more social, economic, epidemiological understanding has been struggling.
And, I don't blame it. I don't think this is a fault. I think if there's a fault, it's been the claims of overconfidence about how things will pan out. I've been wrong about the pandemic. I thought the serological studies would show that there was a huge amount of unrecognized transmission. That there was a vast amount of asymptomatic infection that had already taken place. And they didn't show that.
Now, there's some argument about whether they're sensitive enough to pick it all up. But, you know, they didn't show what I expected them to show. I was wrong.
And, I think, you know, we should all try to be able to say those kinds of things. Because we need that kind of self-critical aspiration to humility. Now, I'm sure there are plenty in your audience, Russ, who don't find my comments humble; I'm resigned to that. But, I think there is an attitude of epistemic humility about which I'm quite confident--if I can offer you that paradox?
Russ Roberts: Yeah, I like that.
Michael Blastland: I think it's absolutely necessary. Otherwise, you know, you're just riding for a fall. Think of the number of occasions when people have bet the ranch on one thing or another and lost the ranch. And, as we mentioned earlier, it's only the people who kept the ranch who are still around to talk about it. The ones who failed have retreated; they're off stage now. We don't hear from them too much.
But, betting the ranch on our knowledge is a dangerous thing. And, by and large science does not succeed by becoming complacent about its knowledge.
But, if you watch the practice, too much of it, it seems to me, feels like sales. It doesn't feel like skeptical, humble inquiry. It feels more like people trying to tell you a story or win your vote--convince, persuade. These are the skills of rhetoric and marketing. They're not the foundations of science.
And, what goes for science, I think, goes for most human endeavors. It should apply to government--politics is a difficult one. It should apply in economics. It should apply to business. Without this kind of basic humility in our approach, we're suckers for a story. And, some stories are true, some stories aren't. But, you'll never know if you're one of those confirmation bias machines which just looks for proof that your story is the right one. You have to have that open mind, that skepticism, that humility to discover that you were wrong. And, then the willingness to say it and start all over again. Otherwise, don't come looking to me for investment.
Russ Roberts: So, the other part of your book, which I loved, is how you end it. And, we can end our conversation on this topic, which is--and I'm thinking about this a lot because it's a book I'm writing--which is: you know, uncertainty is just the way it is. There is this dream that we'll resolve all of it--if we just get enough data, big data, artificial intelligence improves, then I'll figure out who to marry because the algorithm will be able to predict it. I'll get all the information and, like Laplace's demon, I'll be able to be sure I'm happy: I don't have to have all this uncertainty and risk. And, it's going to be fine. And I'll reduce the probabilities. We'll have customized medicine, which you talk about in the book. So, yeah, right now the top 10 drugs don't work that well on every single person who takes them, but it's just a matter of time.
And, it's not a bad attitude to have, right? In theory, it's what makes the world better, this confidence--this overconfidence--that we can better understand the world; because without it, maybe we just give up and we wouldn't know half the things we know today that have been glorious, and saved lives, and morally meaningful lives, and longer lives, and so on.
But, you make the case that this--what I call irreducible uncertainty, this hidden half--is not so bad. It's okay. You can survive it. In fact, you can embrace it. So, talk about--make the case for why it's okay to be in the dark about certain things.
Michael Blastland: I mean, I couldn't agree more. There are clearly benefits to irrational exuberance, as I recall your central banker [term coined by Alan Greenspan, Chairman of the Fed, in a speech in 1996--Econlib Ed.] describing it. It motivates--
Russ Roberts: Don't call him 'mine,' Michael; but I knew what you meant. You meant, 'as an American.' Okay.
Michael Blastland: All right. Done. Their motivation is important, and people need motivating by the promise of better. And, if they create for themselves such an expectation, well, if it gets them out of bed in the morning, maybe that's not such a bad thing.
And, there are cases where you really want to know. If I go into hospital and I'm offered an operation, ideally I would like to know whether it worked. I'm just not going to compromise on the certainty there. I really would like certainty in that instance. So, I'm not going to say that certainty is--we should just reject it.
On the other hand, how about I tell you exactly the time and manner of your death? Do you want to know that? Maybe, maybe you're not so sure. I mean, there's an old joke that if you tell me where I'm going to die, I'll just make sure I'm never there at the time. But, that kind of certainty--do we really want that kind of certainty?
How about I tell you how all the films end? It could be arranged. I can do that for you, Russ. You tell me what you want to view this evening, I'll look it up, and I'll tell you how it finishes. Then no uncertainty, then we can get rid of that. Are you really sure you want to know who you're going to marry when you're five years old or something like that? And, all the interest--and you say, 'Oh, well, I'm just passing time now until we get to the big one because I know that's the one that's going to be.'
Do you want to strip out every piece of excitement from life? Because, quite a lot of excitement is because we simply don't know. Do you want to say to every business in the world, 'Look, if you just do the probabilities, your investment route will be safe and secure'? That's not going to work. The reason some successful businesses are more successful than others is that they take chances. And, they think they've seen an opportunity which others haven't. And, they're uncertain about the outcomes of these things. That's what makes business exciting and possible: that we have this huge experiment going on, about competing visions and understandings of what the world is really like and what's going to sell and what isn't. If you reduce it all to some kind of probabilistic calculus, everybody will take the same calculus. And, like everybody growing maize, it won't work for anybody.
So, you have to--this is life. You can't reject uncertainty without rejecting life. I just do not want to know everything. I want some of it to be an adventure. I want some of it to be mysterious. I want the possibility of discovery. I want the dance to unfold in its own way. I don't want to be told at the age of 59 years and three months, 'You will be in position x over here.' Does anybody? Now, let me roll back a little: certain degrees of confidence about the future, yes, they're good. I would like to know that I'm not going to be impoverished or I'm not going to be murdered. And, those would be helpful. But, beyond that: No, I want a lot of room for uncertainty. I really do. It feels to me that's what life is about.
Russ Roberts: My guest today has been Michael Blastland. His book is The Hidden Half. Michael, thanks for being part of EconTalk.
Michael Blastland: Thanks for inviting me, Russ. It's been a pleasure talking to you.