Russ Roberts

Nassim Nicholas Taleb on the Precautionary Principle and Genetically Modified Organisms

EconTalk Episode with Nassim Nicholas Taleb
Hosted by Russ Roberts

Nassim Nicholas Taleb, author of Antifragile, The Black Swan, and Fooled by Randomness, talks with EconTalk host Russ Roberts about a recent co-authored paper on the risks of genetically modified organisms (GMOs) and the use of the Precautionary Principle. Taleb contrasts harm with ruin and explains how the difference implies different rules of behavior for dealing with each kind of risk. Taleb argues that when considering the riskiness of GMOs, the right understanding of statistics is more valuable than expertise in biology or genetics. The central issue that pervades the conversation is how to cope with a small, non-negligible risk of catastrophe.



Highlights

0:33 Intro. [Recording date: January 8, 2015.] Russ: I want to remind listeners: please go to econtalk.org; in the upper left-hand corner you'll find a link to a survey where you can vote for your favorite episodes of 2014. Now for today's guest, Nassim Nicholas Taleb. [...] Today we are going to be talking about a recent paper of his, co-authored with Rupert Read, Raphael Douady, Joseph Norman, and Yaneer Bar-Yam, on "The Precautionary Principle (with Applications to the Genetic Modification of Organisms)," and other general issues related to risk and ruin. Nassim, welcome back to EconTalk. Guest: Hi. I'm always honored to be on your show, but I have to admit it's also a pleasure conversing with you. Perhaps we think too much alike--it may be a problem from a scientific standpoint, but it's always a pleasure. Russ: Well, it looks like two data points; it may only be one. That's correct. Let's start: what is the precautionary principle and why is it important? Guest: Okay. There's some water on the floor. Would you drink it? No. Why do you not drink water from the floor if you are thirsty? You are very careful. But you have no evidence that it's poisonous. So you are making a decision without evidence. This is the exercise of the precautionary principle in your daily life. In other words, for things for which you don't have evidence, you stay cautious until you accumulate the evidence; then you can take the risk. Russ: So, it's useful in situations that you call 'non-evidentiary problems.' Guest: So, technically, the precautionary principle is about decision making--what should be accepted or rejected in situations for which you do not have enough evidence, or do not have evidence yet. In other words, scientific knowledge has not been sufficient to establish a clear-cut answer, as with the things you exercise in daily life. 99% [?] 
are based on precautionary principles in our daily lives. But there is something much deeper there: as people get more and more into techniques of risk management, they tend to forget that most of the risks we take are of a non-evidentiary nature, in the sense that the evidence always comes too late. And this is what we're trying to avoid. This is a very general concept, one that people who know have always understood in decision making throughout history. And the problem of what we call 'scientism,' in the Hayekian sense--the Hayekian/Popperian sense--this idea of using mechanistic tools from science to make claims and techniques, has blinded people to this sort of reasoning. Reasoning that is effectively more rigorous than science, because you have an asymmetry: you may die if you are wrong; and if you are right, [?] very [?]. Russ: And you argue very thoughtfully in the paper that experts are important, but you have to pick the right kind. Guest: Very often, people in a given profession develop expertise about what they are doing. But in most domains they don't quite have a grasp of the risks, simply because professional knowledge that may help you do a lot of things--particularly if it's academic--is not going to help you understand the risks. This we've seen in many domains. Traders understand the risk because they are pretty much risk managers; they are there to be risk managers. But, say, people that we've encountered, [?] for example, understand return but don't understand the risk of something. What they don't understand, typically, is that the risk belongs to a completely different category. In other words, the tail risk, the risk of ruin, is very different from knowledge. So, for example, your risk can increase while your knowledge is increasing. 
And we have shown, in the paper and some derivations elsewhere, how sometimes you bring in something new--a new technique--for which you understand the benefits are going to be great. And what you do is increase both the benefits and the risk of ruin. So we end up worse off than we started, sometimes trading one problem for another. Is this clear enough, or should I-- Russ: I think-- Guest: Let me continue--yes. Go ahead.
5:46 Russ: I think we talked about this in a previous episode. You have to make a distinction between the process and the consequences of the process. Right? So I think-- Guest: Exactly. So, some people--[?]--they understand biology, okay? They understand it very well [?]. And science is not about making claims about risk. Science is about making verifiable and generalizable claims from a given process--claims that someone else can read and continue and improve on, a body of things. But it doesn't make claims about risk. So, we notice that neurobiologists--or biologists in general, but the study[?] was done on neurobiologists--quite generally, across that profession, across the broad field: they understand what they are doing, but more than 53% of the time[?] the claims of evidence in their papers are wrong, in the sense that they get things wrong in how they make the claim statistically. So, a statistician ranks one step higher than a neurobiologist in assessing a scientific claim. And the error is common. For example: testing whether a is better than b by testing the significance of a and the significance of b separately, without testing the significance of the difference between a and b. It may be technical for the common person, but it's well known in statistics. And yet more than half of papers in top journals in neurobiology make that mistake. Russ: Yeah, it's a great point. Guest: So, a statistician--the way these people operate is they know biology a lot; but there's a cop called a statistician on top of them, who studies the paper and puts a stamp on it. And typically runs the data himself, or lets them run the data on SPSS (Statistical Package for the Social Sciences) or something. And then gives his approval. So, knowing biology doesn't mean you understand the evidence. Okay? And this is quite good. Now, one step higher: understanding statistical evidence doesn't mean you understand statistical risk. 
And that's [?] how. Many people--we have discussed the [?] problem. I wanted a detailed [?] analysis, any kind of statement about some kind of technology that may [?] masses of people. Many of these people think that they have evidence; and then you read their papers and look at them, and no statistician would ever let you say 'I have evidence that'--this is again the Black Swan problem. A statistician would only let you say, 'I failed to reject the null at x% confidence.' This is what we brought up, which all of us are doing in our lives. So here you see that statistical evidence, or what we call the mechanism, doesn't say anything about the tail. Russ: Well that's the distinction-- Guest: The tail. Statistics is: what happened within that band, and do we have enough data to make the claim that this works? It doesn't say anything about what happens if that claim is wrong. They give you, they say, okay, there's a 1% probability or 2% probability or 5% probability of that claim being wrong. But what happens when it's wrong is usually a different business. And that's where risk measurement starts. And that's my profession. Russ: And of course therefore Nassim is the expert of experts. You have to be careful. It is a comforting thought for you. Maybe not for us. Guest: No, no, I'm not an expert on experts. Our job is the left tail, which is a sub-specialty. Russ: Yeah, that's true. Guest: But when it comes to the right tail, or benefits--stuff like understanding the process, the body of the distribution--we have no specialty; or we may understand some things, but we don't rank higher. So, now I've given you a hierarchy: I said neurobiologists; on top of neurobiologists you have statisticians saying whether their claim meets the statistical evidence or not; and then higher you have the left tail, and it's a completely different business, as we have discussed. 
Now, one simple analogy for why people in a profession are sometimes not qualified to talk about the risk of the profession is what we call the Carpenter Fallacy. Suppose you want to understand the risk of ruin of a sequence of bets. It's a standard result in probability. But who would you go to for that problem? Would you go to a carpenter who builds roulette tables? Or would you go to a probability person? The carpenter may claim, 'Hey, you know what, you are insulting me. I know very well how this is built,' and stuff like that. But his knowledge of the carpentry involved in building the roulette table doesn't allow him to make claims as to the probability distribution of what is going to happen. And even less so about claims concerning large deviations, the long sequences of tail events. You see my point. Russ: I do. Guest: So here we have [?]. This is where we are positioning the precautionary principle--with people who are in the business of that very left tail, which is completely different, a different science from yours. Science never really talks about left tails. Only journalists think science talks about that--or bad scientists. And then you need a cop for that. That's it.
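[Econlib Ed.: The a-versus-b mistake Taleb describes above can be sketched numerically. The sketch below uses made-up summary statistics--the effect sizes and standard errors are hypothetical, chosen only to exhibit the pattern: each treatment compared to a control, one looks "significant" and the other doesn't, yet the difference between the two treatments is nowhere near significant.]

```python
import math

def p_two_sided(z):
    # Two-sided p-value for a z statistic under the normal approximation
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical effect estimates and standard errors, each vs. a control group
mean_a, se_a = 1.0, 0.4   # treatment a
mean_b, se_b = 0.6, 0.4   # treatment b

p_a = p_two_sided(mean_a / se_a)   # ~0.012: a looks "significant"
p_b = p_two_sided(mean_b / se_b)   # ~0.134: b does not

# The fallacy: concluding from the two results above that a beats b.
# The correct procedure tests the difference a - b directly:
se_diff = math.sqrt(se_a**2 + se_b**2)
p_diff = p_two_sided((mean_a - mean_b) / se_diff)   # ~0.48: no evidence a > b

print(p_a, p_b, p_diff)
```

The two single-group p-values straddle the 0.05 threshold, but the test that actually bears on the claim--the one on the difference itself--is far from significant.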
12:12 Russ: So, the way I think about it--what I learned from your paper--is really a distinction between harm and ruin. In one world, you play poker every night; some nights you lose a dollar, some nights you make a dollar. Some nights you might lose $5. But if you are in a neighborhood poker game, you are not going to lose your entire wealth. You are not going to have ruin. But you are dealing with cases--you are making a crucial distinction between harm, which is 'some nights I might lose a little money,' versus being wiped out. In the case of the globe, you are talking about extinction. Guest: Exactly. So, to frame it with the discussion of the three layers of knowledge, from the biologist to the statistician to the risk analyst: the body of the distribution is particularly[?] the job of the statistician. Variations, all these things. It's not part of our job. Our job is ruin; completely different dynamics. And for many probability distributions, there is a complete decoupling between variation and ruin. You remember, when I published The Black Swan, it was in April 2007; if I had received a Mexican peso for every time someone mentioned the Great Moderation to me--that the world is becoming a lot safer because it's less volatile--I would probably own a big strip of land in northern Mexico. And then of course, sure enough[?], the crisis happened; and it was not a change of regime. It was nothing. Just that they were making claims concerning tail events from observations about the body of the distribution. And for the class of distributions that we used to work with, with fat tails, these claims cannot be made at all. So the risk can increase while at the same time variation gets smaller. And this is where Ben Bernanke went wrong, because he was not trained enough in statistics, in fat tails, to understand the risk. Another [?] problem. Russ: Why are-- Guest: Let me steal here--let me steal a metaphor[?] 
you gave me; actually I've used it before, and I gave you credit the first couple of times and then stopped giving you credit. So maybe I owe it to your listeners: I learned something from you. You remember when you were talking about the difference between a systemic, fat-tailed event and a small, contained calamity[?]--that if a plane crashes, it's a tragedy, because it will kill the people on the plane, and it's a great loss--very bad news. But a plane crash will not kill every single person who ever took a plane before. Whereas in some domains, such as finance, for example, banks can lose in a single quarter every single penny they ever made before. So in a fat-tailed domain you have to be very careful, because the tail is absorbing--it's a lot worse. But that's only money. It's a lot worse when we talk about ecology[?]. Vastly worse, because this is not renewable. Go ahead.
15:45 Russ: So, talk about the underlying processes. It's a little puzzling to an amateur why fat tails are so important. So, for example, if I have thin tails, it just means that ruin is very unlikely. It's still possible, though. So why are fat tails important? Guest: Fat tails are important because, number 1, you don't notice the variation--as I said, it's compressed--so you don't notice that the risk is present. In a thin-tailed domain, evidence can accumulate as to the riskiness of something. If you go to Las Vegas and are there for 3 days, you pretty much understand everything; you can predict anything that can happen. Because thin tails are so tractable, and the law of large numbers operates very quickly. For fat tails, you need a lot more data to know what's going on. And when an event happens, it can hit you big time. And the consequences of the event can be monstrous. Which is why we cannot be casual about fat-tailed domains. Now, we can figure out ex ante that something is fat-tailed: we know that ecology is rather fat-tailed, and that crises in an ecosystem are not systemic, because we have isolation. We do not have large-scale, generalized--or we did not have that before GMOs (genetically modified organisms), which is why I am worried about GMOs. Russ: Well, the example you give that I think makes that so clear is a forest fire. Forest fires are extremely destructive. But there are all these natural built-in barriers: there are oceans, there are rivers, there are mountains--natural firebreaks that keep a fire from being a catastrophic event. But what you're worried about is something that has the potential to cross those barriers. Guest: Exactly. So the way we see it, nature has not blown up: over the history of the process we have had zillions of variations, trillions and trillions of variations on mother Earth, and it did produce some tail events, but not pronounced enough to cause extinction. 
So even if we adjust by what we call survivorship bias or some similar principle, we can still make the claim that nature seems to have survived thanks to a mechanism by which calamities[?] stay relatively local. So things don't spread. In other words, a plane crashing doesn't kill every single passenger on other planes, or on every plane before. Things stay confined and isolated. We had that in economic life, of course, until globalization; so what happened? A crisis took place on the planet in 2008 and there was no place to hide. Or almost no place to hide. In ecology, it's going to be worse. We used to have island separation, the island barrier, which effectively produced diversity, because diversity is much higher per square meter on an island than it is on a continent. And we're losing it. And we're losing it through a lot of methods. But we'll come back to that in a minute. Now, I have one other element of fat tails I want to add, to inform the rest of the conversation, which is as follows. Many people understand that there is a risk of ruin, and it could be very small, and sometimes we've got to take it. Many people understand that. But few understand that the risk of ruin needs to be zero. Not small. Why? Because think of what happens in a sequence of risk-taking. If you take a risk, say with Russian roulette--a risk of ruin--and survive, what do you do next? You may take it again. So many risks that are very, very small, because you've survived them, lead to 100% risk of ruin. Russ: Right, because you get--well, it's a couple of things we've talked about before which I find extremely powerful. One is what you call the Turkey Problem, which you get from Bertrand Russell: every day the turkey is being taken care of by the farmer, and every day he gets additional and new evidence that it's safe. It's fine. He's got a good life. Until Thanksgiving comes and he's killed. 
And similarly, Value at Risk (VaR) in the financial crisis--it's working; it's fine; we're making profits every quarter; we're very prudent; we're very careful because we have this tool that we use. And--I may have mentioned this before: I have a friend who is skeptical of your work. I won't name him on the show. But he says to me, 'Oh, everybody knows Value at Risk is dangerous.' I say, 'Well, it's true--in theory.' But after a while, if you keep using it, you'll probably get lulled--if you are not careful and you don't have other feedback loops that make you wary, you are very likely to start thinking, 'I've got this licked.' So you fire the Russian roulette; the bullet doesn't kill you because the gun has a thousand chambers. Or maybe 100,000. But if you live for 40 years, you are in trouble. Guest: Yeah, exactly. So, this is what people fail to get: ruin is not a renewable resource. It's unlike[?] insurance. Russ: Explain. Guest: Let me explain. If I play Russian roulette, if I play things like that, the probabilities add up. So, mountain climbers have a very small probability of dying in any given episode. What happens? Hey, they survived. So they're going to attempt it again. So eventually their life expectancies are going to be much shorter, because they do a lot of it. On repetition, you end up with a 100% guarantee of ruin. So you lose--it's a resource that's not renewable. And people fail to look at risk that way: they look at the risk of one episode, not the succession of tail risks taken by the planet. So I have no problem with people taking risk so long as everything stays local--so long as it doesn't [?] the whole human race. Russ: And you mentioned insurance because it's like the cat having 9 lives? You get another-- Guest: [?] with insurance you have a cash flow. And they understand the problem very well, since Cramér [probably Harald Cramér--Econlib Ed.], the guy who studied insurance. 
They looked at a process that compensates for the risk you are taking, because you are making some money that accumulates in a reservoir; the reservoir gets depleted, but not 100%. So the idea is to calibrate the risk-taking to what you are putting into the reservoir. In insurance you can do that. In ecology, and many domains, you cannot, because the reservoir is not being filled. We are just wasting risk. You see? So what happens in the end: risk accumulates to a 100% probability of ruin.
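[Econlib Ed.: The arithmetic behind "risk accumulates to 100%" is the compounding of survival probabilities. A minimal sketch, with an arbitrary illustrative per-episode probability:]

```python
# If each episode carries an independent probability p of ruin, then surviving
# n episodes has probability (1 - p)**n, so cumulative ruin is 1 - (1 - p)**n.
p = 1 / 1000   # illustrative: a "very small" one-in-a-thousand risk per episode

for n in (1, 100, 1_000, 10_000):
    ruin = 1 - (1 - p) ** n
    print(f"{n:>6} episodes -> cumulative ruin probability {ruin:.4f}")
```

After 1,000 repetitions the "tiny" risk has already compounded to roughly a 63% chance of ruin, and it approaches certainty as repetitions grow--which is why Taleb insists the acceptable level of systemic ruin risk is zero, not small.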
23:19 Russ: So, let me ask one more general question and then we'll turn to GMOs and environmental issues more generally. In the article you talk about a contrast between bottom-up, local events leading to thin tails, whereas global, connected, top-down events are going to be fat-tailed. Talk about that. Guest: The best way in, before getting to the statistical taxonomy of these things, is through a comparison[?]--probably your next best economist. Who is your next best economist, after Adam Smith? Russ: Uh, that would be F. A. Hayek. Guest: There you go. So let's talk about Hayek. You see--by now I can read you. Russ: I got nervous; I got nervous there for a minute. But I got the right answer. I'm relieved. Guest: What was the idea of Hayek? Why did Hayek want distributed knowledge in society--no monopoly of knowledge by anyone? Because he wanted the errors to be distributed. He thought that the system knows more than any individual part of the system. And also because he thought we cannot forecast--the mind cannot foresee its own advance. That's another profundity: not just that we can't forecast--we can't forecast how we are going to forecast in the future. So really, let's call it Popper/Hayek, because they really worked on that together, and the two friends were brilliant in slightly different domains. So, Hayek was against--what? Against a top-down social planner who thinks he knows things in advance and can foresee results. Because the person, first of all, makes arrogant claims that may harm us, but also because of mistakes--he's not going to foresee his own mistakes; and the mistakes will be large. So you see where I'm coming from? Russ: Yep. It's Adam Smith's man of system, also--same problem. Guest: Let's continue with Hayekian thought. And this led Hayek to stand against what he calls 'scientism.' 
Scientism is an unscientific use of science--something I've encountered with pro-GMO people who keep attacking me with scientism, because they say, oh, I'm for science; risk management is science fiction. And there's no point; there's nothing wrong. Hayek solved that problem of scientism and false claims, what, 50, 60 years ago. And he is effectively a man who has been vindicated. There's something even more interesting than that about Hayekianism. You know, the opposite of Hayek--people who did exactly what he was against--were the Soviets. You know? Russ: Yeah. Guest: Now, it so happens that there's a branch of mathematics largely developed by the Soviets, in dynamical systems--one of them just got the Abel Prize, in that tradition started in the Soviet Union, in the heyday of Soviet science, about nonlinear dynamics. The last one was the billiard-ball fellow, Yakov Sinai, who got the Abel Prize. And he's probably the most crowned mathematician alive today. Now, what are these Soviet mathematicians saying? You know what, in a complex system you can't predict. That's sort of what they said. Financed by whom? A social planner. Russ: Yeah, it's ironic. Guest: But nobody saw the contradiction: that if they are right, then there should have been no Soviet system. It's ironic, but let's not laugh too early, because it looks like many people are making that mistake. Russ: Well, it's a common problem. Guest: [?] But the mistake here isn't the mistake of thinking that an environment is predictable when it's not. It's the mistake of not realizing that an idea developed in one domain can apply to another one, while accepting that these two domains have the same operating mechanisms. So, to continue: Hayek effectively looked at nature, directly or indirectly, and thought of the organic, directly or indirectly, as operating according to his principle of distributed knowledge. 
And technologies. And tinkering--away from that central-planning mode. Russ: Well, that's why the latest paper on macroeconomics that claims that such-and-such an intervention is good for the economy, or bad for the economy, is the same as the epidemiologist who claims that drinking coffee or wine or whatever is good or bad for you. And they find some data-- Guest: I would say--it's benign to say coffee is good or bad for you. It is a benign claim. And some such claims can be rigorous. But now, take a Soviet planner--one that comes to nature. Aha--GMOs. You see where I am coming from?
29:09 Russ: Well, let's talk about that. Guest: So, GMOs. If you look at evolution, at how things get from point A to point B, it's by small tinkering, where mistakes are kept small and local. And you cannot foresee interactions in a given complex system unless you experiment with things. And that's Hayek; that's the mathematics that we have behind us [?], and the entire class of [?]-- Russ: Schumpeter. Yeah. Guest: I don't know about Schumpeter, but I know about the real mathematicians who worked on these problems and dynamical systems. You cannot really forecast interactions in systems that are too complex. And you can explain it to someone--you can explain the limits with all kinds of incompleteness theorems that we have, or with the simple example of billiard balls. So, the problem is that natural systems--this is a universality of complex systems--have opacity if you look at them from the standpoint of a social planner. But they are very understandable if we look at them from the perspective of a complex system that has evolutionary attributes. So, what you do is--time counts a lot--you put things together, let them interact; there are some dynamics of interaction; and you see: if the system doesn't blow up, then it's a good system. If it blows up, then it's a bad system. And the system will anyway clean itself automatically using these mechanisms. And small tinkering. Russ: Feedback loops. Guest: Sorry? Yes, feedback loops [?] things. In Antifragile I presented it in terms of different layers. You have a fragile layer at the bottom, like your cells. And then you have a hierarchy above the cells: you have individuals, then your families, then society, and so on. And then humanity. And then--oh, species, and stuff like that. So you have hierarchies. And then you have, of course, evolutionary mechanisms at all levels of the hierarchies. So, this is how things work in nature. 
And I'm not saying anything that isn't accepted by evolutionary biologists[?]--that's how the process of tinkering is understood. And that was bricolage, actually--the word 'tinkering' I'm using comes from bricolage, from the famous Monod and Jacob papers, two Frenchmen who got the Nobel in the 1960s. Now, when we look at GMOs, what are we doing with GMOs? We are skipping steps. A tomato--okay, the claim is that a GMO tomato made according to the FDA (Food and Drug Administration) will be the same as a tomato obtained organically through natural mechanisms--or through human breeding, even. But those steps are not the same as skipping zillions of steps to get to a tomato. We don't know what it's going to do to other plants in the soil. We don't know what it's going to do to you. We have a lot of unknowns. So, when you have a lot of unknowns like that, you apply the precautionary principle until further notice. So that's where we're going.
32:00 Russ: I interviewed Greg Page, who is the former CEO (Chief Executive Officer) of Cargill. And he accepts the idea that there may be some risk. But he, as you would argue, doesn't think much about ruin. So his view--and I think the view of many people in the industry, and certainly many scientists, whether they are tainted by self-interest or not--they would say, 'Well, look. People are eating these new tomatoes that have, say, the gene of a fish in them--or whatever has been done to them. And they are not dying. And it's hard to understand why you would be worried that there's going to be, say, a mass extinction of human beings from eating a GMO-modified tomato.' So, what's the scientific evidence? Guest: No, no--that's exactly what we want to avoid: having to talk about scientific evidence when the burden of proof is on the GMO people to show us that they understand anything remotely about the tail risk. Which they don't. The tail risk is not someone dying from eating the tomato. That's not a big risk. That's not a systemic risk. The big risk is what can happen when you have two things going together: a combination, Soviet style, of the monopoly of some plants over others--it's too large a system--and of course the creation of other species that will themselves also be too powerful; and then you may kill the GMOs, or one may kill the other, and you may have huge imbalances in nature. And these imbalances in nature can produce large deviations. This is our point. And we haven't seen any paper looking at the risk from that standpoint. And when people look at risk--we looked at them; some are using 1960s [?]-error-type reasoning, which of course is too primitive to allow us to make any conclusion. And when people say, 'Where is the evidence?', tell them, 'Hey, you know, what was the evidence that smoking could cause cancer? What was the evidence that lobotomy was bad? 
What was the evidence that Teldane, Triludan, Seldane, Ecotrin, [...] were harmful?' The evidence showed up late. Sometimes--in one case--even across a generation. So you have a problem with the reasoning of people invoking evidence when they don't know what they are talking about as far as evidence goes. No statistician would put his stamp on[?] 'we have evidence that it is safe.' They tell you: failure to reject the null at this percentage. And so they sort of agree with us that the tail is not investigated. We haven't seen an investigation of the tail that's properly done. Russ: But as you point out in the paper, if you are not careful, you can invoke that for lots of things. Guest: Exactly. So we are not invoking--for nuclear power, we cannot invoke the precautionary principle. Why? Because nuclear risk stays local. It doesn't mean it's not risky. You may want to ban nuclear for risk purposes; but with nuclear, a Fukushima cannot lead to destruction in India. Or maybe in India, but not in Lebanon. Or maybe Lebanon, but not Cyprus. So you don't have that--whereas if you have the same crops invading the whole planet, it's too much. Having GMOs on an island is one thing; generalizing the same[?] to the planet in the name of science is another thing. And listen--earlier I said Mexican peso; if I had a Lebanese lira, or maybe a Turkish lira--now there is a lot of trading in the Turkish lira [?]--for every time I heard people saying, 'We scientists with a zillion Ph.D.s think that these securities are very safe'--say, people employed by Fannie Mae, Freddie Mac, or Morgan Stanley in 2007, before the crisis. And they would say, no, there is zero probability of a failure in that. And even if you saw the Stiglitz-Orszag report about Fannie Mae. Russ: Oh, yeah. 
Guest: So, the point is you have to default to skepticism in that corner of the probability distribution, unless you have some strong feeling, or really a very, very robust reasoning, showing that this is not going to harm beyond the local [?].
38:12 Russ: Well, let me ask it a different way. I understand--well, in 2007, if the so-called Ph.D. expert said it's been safe so far, you could have said, well, these things are all tied together. You could say, oh, you have insurance, but every firm has insurance with AIG, and therefore when everybody goes broke, AIG is going to have a problem honoring its promises. So you could say that, then. The question I have now is: Where is the evidence that this GMO process is a fat-tailed process rather than a thin-tailed process? Guest: The first thing you've got to do, when you ask whether we are in a fat-tailed or thin-tailed domain, is look the other way and ask, 'What is the evidence that we are in a thin-tailed domain?' Nature has produced some thin-tailed things, under a process that has to have some kind of balance and obeys central limits. For example, within species--humans--height follows a Gaussian distribution. But nature doesn't deliver thin tails between species. Look at an elephant versus a mouse. Hey, this is a very fat-tailed process. Or the difference in size between a mammoth and a bacterium. And they are all life--life can take a lot of forms. So, nature is effectively largely fat-tailed. And the way we define fat-tailed in the paper is rigorous: it is the class we call the sub-exponential class. So nature is fat-tailed. But why doesn't nature blow up, since it's fat-tailed? Because effectively you have circuit breakers that transform the fat tails into what we call 'modified thin tails.' You see what I mean? Russ: Well, you are talking about an example of--there could be an extinction on an island; there could be an extinction on a continent. Guest: Exactly. Exactly. Russ: Local--there could be ruin, but it's local ruin. Guest: Exactly. So it stays local. So it's much healthier for people to default to fat tails and [?] the evidence of thin-tailedness [?] the other one. 
And I wrote a paper, actually a paper, it's a chapter in Silent Risk, I think Chapter 4 now--you know what happens, we keep adding chapters-- Russ: Silent Risk is a manuscript you are working on. Guest: It's freely available on the web. Russ: Right. Guest: It has all the mathematical theorems and backup of these things, plus a lot of discussions of fat tails, how to calibrate fat tails. Russ: And people who are intimidated by the math can open just to the beginning and see a cartoon of Nassim Taleb in a boat going over an enormous waterfall as he's saying, 'Stop!' and the people in the boat saying, 'Oh, where's the evidence saying anything is wrong with the boat?' Etc., etc. So, we'll put a link up to it. It's good browsing even for the non-mathematical. Guest: Great. The book effectively is a mathematicization of "Incerto." "Incerto" is the four books I've written so far, philosophical essays on uncertainty. Russ: The fourth one being The Bed of Procrustes which I did not mention in my introduction. Guest: Exactly. So nothing in there that's not said verbally; and what I'm going to discuss now instead of talking about the mathematical version of Chapter 4 of Silent Risk, I can talk about what I called in The Black Swan the Masquerade Problem, in that you can always say the process is not thin-tailed; you cannot with any confidence say the process is not fat-tailed. Just from observation. Simply because fat-tailed processes can masquerade as thin tails. And that's a Turkey Problem. Russ: And I feel bad--we should have made it clear, because I forget that not everybody has been listening to EconTalk since 2006, but: Thin tails means that the probability of remote events is very, very, very vanishingly small. And fat tails means it's small, but not zero. Is that a good summary? Guest: Exactly. It's a good summary. In other words, view it as not in probabilities but in consequences. And if I gather a thousand people randomly and [?] 
through an EconTalk episode to watch and weigh them, and then evidently have the total weights, and then you add to that sample the largest human being on the planet, that person will not represent more than .3% of the total. You see? But if you do the same with wealth, you will have one of the total--you will be maybe because you have a lot of people living on a few dollars a day on the planet, 7 billion people total population, you have 3 or 4 billion very poor people. So, odds are you have maybe a million and a half net worth for your sample. And then you add to that the richest person on the planet, $75 billion, and look what happens. There will be a rounding error. So it means fat tailed is how much the rare event contributes to the total [?]. Russ: Although I'd like to think Warren Buffett is an EconTalk listener. I don't know that he isn't. So, you know. But go ahead. [more to come, 44:02]
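Taleb's weight-versus-wealth illustration can be sketched numerically. The following is a minimal simulation, not from the episode: the Gaussian parameters for body weight, the 635 kg figure for the heaviest recorded human, and the Pareto tail index for wealth are all illustrative assumptions (the $75 billion figure for the richest person is the one used in the conversation).

```python
import random

random.seed(0)

# Thin-tailed: adult body weight in kg, roughly Gaussian (illustrative parameters).
weights = [random.gauss(70, 15) for _ in range(1000)]
heaviest = 635.0  # roughly the heaviest human ever recorded, kg (assumption)
weight_share = heaviest / (sum(weights) + heaviest)

# Fat-tailed: net worth in dollars, Pareto-like (illustrative tail index).
wealth = [random.paretovariate(1.2) * 1_000 for _ in range(1000)]
richest = 75e9  # the $75 billion figure from the conversation
wealth_share = richest / (sum(wealth) + richest)

print(f"heaviest person's share of total weight: {weight_share:.2%}")  # well under 1-2%
print(f"richest person's share of total wealth:  {wealth_share:.2%}")  # close to 100%
```

Under the thin-tailed distribution the single largest observation barely moves the total; under the fat-tailed one it makes the rest of the sample a rounding error, which is exactly the distinction Taleb draws.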


COMMENTS (80 to date)
Dave writes:

I was really glad to hear this, because I feel less insane about my GMO stance. My basic argument is: nutrition and how we absorb nutrients is more complicated than adding up the macro and micro nutrients in natural foods, and I don't think we understand it well.

One example is the multivitamin, which appears to not be a substitute for a healthy natural diet:
Experts Decisive Against Multivitamins: 'Stop Wasting Money'.

I also think that a quick look at the last century of nutrition debate should make us very humble about our understanding of what's healthy to consume.

The counterargument you commonly hear is that farmers have been unnaturally selecting fruits and vegetables for years, so that what we eat today is not like what our ancestors ate. But at least they were using natural reproductive processes to accomplish this, rather than direct genetic engineering. But even then, often farmers try to produce fruits that are sweeter, which *are* less healthy than the original variants, at least in the sense that they contain more sugar.

Trent writes:

A very interesting discussion (as always) and an equally interesting paper (thanks for providing the link to the PDF). What made Prof. Taleb’s point ‘click’ for me is his illustration in his paper regarding the potato famine. Likewise, what risk of blight (or similar ruin) do we face if all our corn growers are using the same GMO? It’s analogous to the risk that we’d face if all Americans’ retirement funds were invested 100% in the same stock.

While I understand Prof. Taleb’s theory and how the PP is to be applied, I don’t see how it will ever work well in practice, given our governance.

• Yes, he rightly argues that it’s important to rely on the proper experts (e.g. a statistician rather than a carpenter when you’re seeking expertise in statistics). However, as we’ve seen in economics, you can seemingly find an ‘expert’ to argue whatever side you’re on, and you end up with a battle of ‘experts’ that hinders the emergence of a convincing, lasting consensus.

• He argues that the PP should only be applied in the instances where there is actual risk of global ruin (e.g. worldwide famine vs. an isolated nuclear incident affecting only a small portion of the world). Given the rhetorical hyperbole that politicians generate (and promoted further by the media), they can make people believe that isolated risks are instead worldwide risks. And they have an incentive to do so because the more issues that are labeled as such, the more power they grab to oversee/regulate/outlaw, and the more political donations they receive from all interested parties.

• I also fail to see how we stop non-compliance by other countries. Wouldn't there be an incentive for another country to 'cheat' and allow the use of GMOs by their farmers?

Always appreciate Prof. Taleb’s appearances on EconTalk and look forward to future appearances. This episode/paper fits in with what he told you a few years ago – that he’s interested in how people behave in situations of uncertainty…where there’s no proven path to take. I’m very interested to see where his thinking takes him next!

Daniel J. Winings writes:

I love it when Nassim Taleb is on the show. The line of questions I would like to ask Mr. Taleb revolve around soil erosion and GMOs.

I understand your concern about the unknown risk of GMO crops, but what about the known ecological cost of soil erosion? Erosion is just as much a threat to the well-being of the race, just one that plays out much more slowly than some theoretical, or dare I say imagined, GMO apocalypse?

If the soil erosion problem, spread out among however many landowners there are on the globe, is a risk that can be managed without GMOs, where a financial carrot induces landowners to act in a way to reduce erosion, I'd like to know how; apart from bureaucratic dictates from above.

Tell me what it is, so I can support it.

How long must GMOs be used on an island before we utilize them to reduce soil erosion allowing more time to search for longer term solutions? One hundred years? A thousand? We're not just talking tomatoes here but the calorie rich cereals which feed the world.

How long of a track record is enough? Who will pay for that experiment? Wouldn't the result be, in effect, an outright ban on the technology, hence we'd never develop a track record by which to understand the risk?

Justin P writes:

Re: Carpenter Fallacy

Last week (on the Page podcast) I was accused of committing the carpenter fallacy for my preemptive critique of the Taleb paper. I didn't want to derail those comments so I waited until this episode to comment on the carpenter fallacy.
So the carpenter fallacy that Taleb uses is basically a domain knowledge problem. You don't ask the carpenter about risk because all the carpenter knows about is wood and tools. You ask a statistician about risk. What Taleb has done is the opposite: he is assuming knowledge about carpentry without bothering to ask the carpenter if the base assumptions are correct. With GE crops, he is assuming a level of knowledge about biology, agronomy, and biotech that he doesn't have.

The most basic flaw in Taleb's argument is that he is assuming ruin without any attempt to develop a mechanism. No doubt, he'll claim that he doesn't have to know a mechanism. He'll be correct up to a point. His argument, up until he applies it to GE crops, does not require any domain-specific knowledge beyond what he already holds: statistics.

Once he starts to try and apply his domain knowledge to other areas, his attempted defense, crying out about the carpenter fallacy, no longer holds. Without any known mechanism specific to GE crops, his argument must hold for all types of agriculture. This also exposes his ignorance of agronomy and of all the other methods that farmers use to develop different traits. This is where a real expert in plant biology (domain-specific knowledge) can easily show holes in Taleb's argument.

Kevin Folta, professor of plant biology at the University of Florida (hopefully Dr. Roberts can have him on as a guest), has a post that he calls the "Frankenfood Paradox." It shows the different methods used to produce food, including chemical mutation breeding. If you like Star Ruby grapefruit, thank mutation breeding. The paradox is that no breeding product from any other method of plant breeding is subject to any safety oversight at all, only ones made with GE technology. If you look at the number of genes actually affected, if there is a probability of ruin, it is much, much, much higher for every other method of plant breeding.

Taleb's attacks on "monoculture" also show a profound ignorance about modern agriculture. Monoculture, as popularly defined, does not exist as much in GE crops as it does in non-GE crops. Now, for full disclosure, I work in agriculture, but not in a sector that develops GE crops. Seed variety is key to dispelling the modern myth of monoculture. Most people rely on the Irish Potato Famine as evidence of the ruin of monoculture. They'd be right, if they never bothered to look any further and learn that Ireland still exported more than enough food to feed itself, and the famine was a product of other forces.

The simple fact is that modern agriculture, by and large, does not practice monoculture. I know that might go against popular notions and Facebook memes, but so does a lot of reality. To claim that farmers all grow the exact same thing is about as absurd as saying all stocks are exactly the same.

Farmers buy, and seed companies sell, hundreds of different varieties of seed; GE traits such as BT in corn are only one of the many traits farmers buy and seed companies produce. There is not just one BT seed sold by Dow or Monsanto. BT is part of hundreds of different varieties. If there were some blight (this is where you need to know of some sort of mechanism) or virus that affected a variety, you'd still be left with literally hundreds of other varieties with different traits that might not be affected at all.

That is only for, say, corn. More domain-specific knowledge in agriculture would inform Taleb that not all farmers grow corn at the exact same time. This is a counter to the local vs. global ruin problem Taleb talks about. If BT corn, for some unknown and unforeseeable reason, were to lead to ruin, it would affect Iowa and not Arkansas, which would be growing soybeans and cotton. Nor would it affect any farmer not growing BT crops. (It should also be noted that non-GE farmers use BT as a method for reducing pest pressure. BT is a natural, evolutionary product that has been tested by "Mother Nature.") A complete loss of corn crops would definitely hurt and be a problem for many people, but it would not lead to global ruin. His criticism is only valid under a certain assumption which depends on specific domain knowledge that Taleb doesn't have and, judging from how he has responded to criticism, doesn't care to know anything about.

It's a shame because Taleb's paper is good up until he tries to use it as justification for whatever anti-GMO bias he has. Being anti-GMO is fine, if that's your thing. But trying to create evidence to support your biases is the cornerstone of pseudo-science. It's really a shame that Taleb feels that is what he has to do.

Steven Slezak writes:

Norman Borlaug, the Father of the Green Revolution, saw much promise in genetically engineered crops, not the destruction of humanity.

The Telegraph lists the top four threats to the world today as 1) state conflict, 2) water, 3) state collapse, and 4) unemployment. GMO doesn't make the cut. And there is good reason, as argued in this piece by David Zilberman of UC Berkeley and me.

There is nothing worse than the fear of imaginary swans.

Dallas Weaver Ph.D. writes:

The big issue with the precautionary principle is determining the difference between a real issue and paranoia. Paranoia supported by a lack of knowledge in a specific scientific area is still just paranoia. For example, the anti-vaccine activists' lack of understanding of basic biology and immunology doesn't make their "concerns" or use of the precautionary principle rational, as they are adding a real fat tail to their children's lives and to my grandchildren's.

In the case of increasing CO2, we do know that we are dealing with a non-linear dynamical system with possible tipping points (non-linear instabilities) that are related to both absolute levels of CO2 and the rate of change. In that case, the fat tail problem is obvious and very real.

In the case of GMOs, the situation is different in that proper inclusion of the scientific details makes the addition of GMO to conventional plant selective breeding look like a fat tail thinning addition.

Most people referencing the precautionary principle to attack GMOs don't know that conventional selective breeding practices have increased the natural toxins in some plants to the point of harming the workers or people who ate the plant, often not known until people got sick or workers' skin blistered. This was all done with the old-fashioned, 100% organic method of selecting the plants that looked good (that the insects wouldn't/didn't eat). To get more mutations to select from, radiation- and chemically-induced mutations were added to seeds to speed up the selection process. These are all random changes, and we had no clue about any changes beyond the observable phenotype. There are a lot of fat tails in these random processes, resulting in unknown unknowns, but we trust these methods after several thousand years of use.

Now that we can take apart genomes at a reasonable cost, we are finding that nature has moved a lot of genes around horizontally (one species to another), and what man is doing with GMOs is really not all that new to nature. Think about how mammals evolved from egg-laying animals with an immune system that should recognize a fetus as a foreign parasite. Apparently the genes that keep the mother's immune system from killing the fetus are related to a bacterial set of genes used by parasitic wasps to keep a caterpillar's immune system from responding to their parasitic larvae eating the caterpillar alive.

Nature has tried a zillion experiments and found most wanting (every organism without offspring is a failure of that genetic experiment). For example, nature has tried gigantism many times, where growth hormones stay on, and then selected against this change (including rare human giants).

Nature has probably produced the fast-growing GMO salmon millions of times and found that its demands for food in the winter were too high to support. The added genes for growth rate and freeze resistance are only relevant in aquaculture, where the fish have a year-round food supply and must stay near the surface (net depth) when the water becomes super-chilled, while wild fish go into deeper, warmer water. This GMO product has been subject to a paranoid version of the precautionary principle for a decade.

Actually knowing what genes we are adding, where we are adding them, and what secondary impacts are being created is tail-flattening relative to the unknown unknowns of present-day "organic agriculture".

As for man creating a super-organism that will out-compete nature in a real environment and take over (the blob from science fiction), that is more of a paranoid fictional fear than a realistic fat tail. One, nature hasn't done it with a zillion experiments. Two, if we did make a super-organism that started to take over large ecologies, it would become a significant food supply for a parasite, or a bacterial, fungal, or viral pathogen, to evolve to take advantage of. Nature kills the "winners" back to reasonable populations. Without man's intervention, nature would succeed in cutting the world's wheat, corn, etc. supplies down with an evolving rust or other pathogen. This is an ongoing war.

The real fat tail is nature looking at the human biomass as a food supply. Imagine a virus like HIV that could transmit like the flu with no symptoms for months. That virus would have a very big food supply.


Brian Vree writes:

Nassim Taleb seems to hold that there is unknown risk unique to GMO but couldn't his same logic be applied with byproducts of most modern energy and manufacturing systems, knowing the complexity of cause-effect that exists in our modern world? I suspect that his ideal system would ultimately cause mass starvation, having to rely on substantially less efficient agricultural processes.

I also suspect that he has a narrow view of the diversity of humanity and ecology. If an entire system of food is wiped out by an unforeseeable event there are lots of plants and animals not commonly consumed that men would shift to. Likewise the plants and animals that populate nature will evolve or adapt to changing conditions. Men could do a lot of harm to the environment but there is nothing that men can do to destroy all life.

At best his ideals suggest that there should be a diversity of approaches to the problem of food production and that different societies should consciously attempt to take different approaches for the sake of diversity.

Ling Q. writes:

The idea of the precautionary principle is an ideal that makes people think twice about the potential ramifications of their actions. But in practice, it is not a panacea.

Just like Russ said, innovation is human nature, and humans are part of the evolutionary system. What is considered as falling into the precautionary principle domain is a matter of judgment. And fat tails are a matter of judgment in the absence of 'negative' evidence (or perceived silent risk), or a short history, or relatively small sample sizes.

In the domain of computers and the internet, the world is increasingly connected. The systemic risk is known in that the internet could transmit 'viruses' around the globe and be hacked by criminals, impacting millions of people. One also worries that someone's retirement fund may be wiped out overnight due to hacking or digital storage failures. So it seems reasonable that the internet should also fall into the PP domain; but it would be unthinkable today for us not to digitize information and to stop using the internet.

GregS writes:

It really seems like Taleb has invented for himself a space where he can be completely immune from criticism. He appears to insist that experts in biological science can’t criticize his statistical argument, because he’s making a statistical point that biological facts have no bearing on. (?!) Nor even can ordinary statisticians criticize his meta-statistical argument. In his most recent book he invents two very strange arguments for dismissing his critics. One is that if his argument is wrong, all critics will offer the same criticism; if dozens of critics have dozens of completely different arguments, then his argument is basically right. He also says that a valid critique should be extremely short, especially compared to the original argument. I find this all incredibly off-putting. It’s as if he’s erected an imaginary barrier between himself and his critics. An extremely confused thinker might be contradicted by the actual subject matter experts on an important “non-statistical” point. His writings might be so confused and full of logical/factual/statistical/conceptual errors that he inspires a large number of different criticisms from a broad range of critics. Likewise, confused thinking may lead to a long rebuttal that explores and rebuts unspoken assumptions. I say this all as a former Talebophile who finds his more recent writings a bit pompous.
It’s kind of telling that he doesn’t feel the need to posit a mechanism for “ecocide” from GMO’s (I’m speaking to the interview; perhaps he does so in his paper or his other writings that I haven’t seen). I think that’s a bare minimum for having his argument taken seriously, but doing so would expose him to substantive criticism from experts on the actual subject matter. His argument seems to imply that we should ban anything new so long as somebody can argue that “something terrible might happen blah blah fat tails blah blah entire world.” Seriously, if nobody can name a mechanism that’s even remotely plausible, we should still ban GMO’s because of some unforeseen risk that we can’t imagine? If that’s a legitimate argument, it applies extremely broadly. It actually applies to anything and everything that’s new. These crops are feeding the world. No, we wouldn’t *all* starve to death without them. (I caught that comment, and a similar comment in last week’s interview.) But surely millions would. I think it’s facile to say, “We COULD feed the world without GMOs” when in reality we wouldn’t.
Russ, you really need to invite a guest who will give an informed response to Taleb’s anti-GMO argument. Maybe you can have Eric Falkenstein on to critique high volatility portfolios, too. :) Very much looking forward to it. Thanks to both of you.

Eric writes:

Great podcast as Always.

I have a few issues with Taleb's arguments. First, his invoking of Hayek's "Pretence of Knowledge" in this conversation. I do not understand the reference. How could GMO proponents be the "scientism crowd" in this case? The only way that would be true would be if all experts and decision-makers decided that GMOs should be implemented everywhere at once. From what I understand, no one is forcing farmers to switch to GMOs by claiming that GMOs are superior to conventional crops. I usually find myself in agreement with much of what Taleb says, so could anyone clarify what I am missing?

Also, I reacted to Taleb's Hayek "Pretence of Knowledge" reference. Hayek stated that no one expert could have all the diverse knowledge of the populace. Taleb's quote was: "Why did Hayek want distributed knowledge in society, nothing, no monopoly of knowledge by anyone? Because he wants the errors to be distributed."

Whenever I read the Pretence of Knowledge piece - or anything else by Hayek, for that matter - I never read him as wanting knowledge to be distributed; he merely stated it as a fact.

Wesley writes:

I am just a carpenter (molecular biologist) who is never going to understand the 'statisticism' (my word for the scientism of statistics) used to justify Taleb's anti-GMO stance. I do want to know: how can he say that GMO use is a fat-tailed risk, then proceed to say he does not need to prove that claim? Surely the burden of proof is on the person who makes the claim? I think this is a cop-out.

He seems to think GMOs are all the same and will make for a global monoculture agriculture. Each GMO, whether it's BT toxin or Golden Rice or Roundup resistance, targets completely different aspects of agriculture and has different biochemical mechanisms. If one technology fails, how does it cause global ruin, everywhere on the planet?

Keith Vertrees writes:
Dallas Weaver Ph.D. writes:

The real fat tail is from nature looking at the human biomass as a food supply. Imagine a virus like HIV that could transmit like the Flue with no symptoms for months. That virus would have a very big food supply.

I suppose that, because this risk can be imagined, the precautionary principle dictates a ban on air travel?
Joe Torben writes:

Very interesting as always, but this time you could and should have ripped some of the outrageous arguments to pieces, Russ.

Some of the "fat tail" stuff is quite incomprehensible. So there is a potential way in which GMOs lead to complete ruin (but NNT doesn't have to explain how, exactly). How is that any different from "ban nuclear power, because even if an accident is localised, it could lead to a mutation that has fat tails and is ruinous"? Ban all forms of radiation, when we think about it.

Likewise, as previously noted, all forms of travel could lead to the spreading of terrible, life-ending diseases. And using fossil fuel is of course a complete non-starter, with NNT explicitly acknowledging that climate change has the same properties as GMOs.

With enough imagination and just a little bit of handwaving, there is no end to all of the stuff that should be stopped according to "the precautionary principle". NNT completely fails to describe why GMOs are any different from the others.

Also, again as noted by others, he is shielding himself from criticism in a way that should raise all kinds of red flags. GMOs are fat-tailed with a certainty of ruin eventually, because he can imagine it, no model or description needed? Please...

Chad writes:

In the interview Taleb asserted a fat-tailed global "risk of ruin" from two activities: GMOs and carbon emissions (global warming). But he didn't give specificity to what "ruin" means in either of these cases. Neither is going to cause the earth to explode, so what does he expect a disaster to look like? What is the cause, the nature, and the magnitude of the "ruin"? Does this mean every single plant, animal, and human on earth dies? Does it mean one variety of food doesn't work and we use substitutions? It sounds like he's using "ruin" to mean something between these two, but it's not clear what, nor does he present what the mechanism of ruin is.

And this is where specific knowledge of the involved sciences seems to be needed... to present the "ruinous" causal chain and outcomes. Yet it sounded like Taleb was saying that specific scientific expertise is irrelevant to validly hypothesizing a vague "ruin" potential that someone else then has the burden to disprove. It sure sounds non-falsifiable. How can a science or technology overcome/disprove a "risk of ruin" that is based on reasoning that doesn't admit causes?

Mark K writes:

Out of all the comments, Dallas Weaver's is the only one that gives me pause in accepting Taleb's arguments.

To the rest: many point to the lack of a mechanism as a strike against Taleb's argument, or argue that because we have been genetically modifying plants for millennia, GMOs should be safe.

However, I think both of these points miss the fundamental shift from selective breeding, etc., to GMOs. In every case before, we've harnessed the natural evolutionary process, which slows things down (and localizes them for a time).
One of the posts above linked to an article with a wonderful graphic: http://kfolta.blogspot.com/2012/06/more-frankenfood-paradox.html?_sm_au_=iPVnN6QWv7HRSQN2
GMOs cut the time for introducing a genetic modification down from 5-30 years to under 5. This doesn't seem wise when much of biology is still a mystery - I'd be happy to be corrected, but I believe we're only starting to sort out the effects of something like hormone replacement therapy after decades of prescribing it.

Second, (again, happy to be corrected by a biologist) GMOs are fundamentally different than selective breeding. Horizontal gene transfer may occur in nature, but the timescale requires many many generations for a (successful) gene to become widespread.

Why isn't it possible we might add a gene into corn which would both become widespread (say, present in 80% of U.S. corn) and would have a serious negative health effect we might be unable to detect for decades?

To clarify two other points, widespread adoption of potential genes very quickly is what could create fat tails. Since fat tailed distributions can deceptively appear thin tailed, the burden of proof is on those who would claim a distribution is thin tailed - you have to be able to very thoroughly explain the process underlying the distribution.
The "pretense of knowledge" came in because some biologists believe they can easily alter very dynamic and complex systems in predictable ways.
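The point above, that fat-tailed distributions can deceptively appear thin-tailed, is what Taleb calls the turkey problem in the interview, and a minimal deterministic sketch shows why in-sample statistics are no defense. The payoff numbers below are made up purely for illustration:

```python
import statistics

# Turkey problem: a payoff series that looks perfectly benign for 999
# observations and is ruinous on the 1000th (illustrative numbers).
history = [1.0] * 999          # the turkey is fed, day after day
full = history + [-1000.0]     # ...until the day before Thanksgiving

mean_before = statistics.mean(history)   # 1.0: the track record looks safe
worst_before = min(history)              # 1.0: no drawdown ever observed
total_after = sum(full)                  # 999 - 1000 = -1.0: one event erases everything

print(mean_before, worst_before, total_after)
```

Every estimate computed before the crash (mean, worst case) suggests a thin-tailed process, which is why observing "safe so far" cannot, by itself, establish thin tails.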

Mark K writes:

I also wanted to give my perspective - I come to this as a student of dynamical systems and complexity in finance. Almost categorically, humans are horribly wrong when they try to predict outcomes in dynamic systems (The Logic of Failure is a good book for those interested).

Whether it's Soviet Russia, quants in finance, or those doing economic development, people have been utterly unable to make predictions about or alterations to complex systems. More than that, they often end up in ruin. Nobel prize winners in economics thought they understood risk in financial markets, but the hedge fund they started generated wonderful returns for a few years only to blow up and go broke.

Now, biologists say they have their interactions with even more dynamic and complex systems (humans and ecologies) well under control. Biology tends to have thinner tailed processes than finance, but the repeated mistakes in medicine leave me very skeptical about our knowledge of the effects of genetic alterations. Combined with the speed with which such alterations can now be introduced, I'm on the same page as Taleb.

I'd go so far as to say, all of the current science saying GMOs are safe so far is likely correct. I don't know enough to critique it and trust well-planned impartial studies. However, that doesn't translate into a belief that GMO creation is safe. People are playing with dynamic and complex systems with large numbers of people and the food supply at stake.

Martin Dertz writes:

Re: Justin P (Carpenter Fallacy)

Perhaps the carpenter metaphor could be elaborated, then, to show that trusted experts can help a statistician understand phenomena.

Taleb's use of the carpenter fallacy is naive. It could be elaborated on to show that if there is a lack of domain knowledge and/or trust on either end (carpenter or statistician), then predictions are unreliable. For example, a carpenter could design a roulette table to have some numbers imperceptibly larger, or the pinwheel to wobble slightly. The statistician's predictions, then, would be less useful than the carpenter's. Perhaps Taleb would address this by requiring the carpenter who constructed any non-uniformly distributed roulette wheel to be exposed to losses from it? But the underlying issue of lack of domain expertise still exists.

In the example of GMOs, it disappointed me that Taleb included in his PP paper no equivalent of a carpenter to make sure his assumptions about construction were correct. Yes, I'd rely on a statistician's explanation of and predictions about a roulette wheel, but if and only if he/she (at the least) talked to a trusted carpenter who inspected the actual table, not a theoretical one. Similarly, while I agree the burden of proof ought to be on the supporters of GMOs, Taleb's use of the PP would be bolstered by including trusted (not benefiting directly from GMOs) geneticists and/or biologists, etc., to ensure the underlying assumptions about the system are correct.

mtipton writes:

Mark K - I don't have much time to write. I'll just say that comparing the financial system, i.e. lending, stocks, and investing, to agricultural products like genetically engineered seeds seems a stretch.

Mark K writes:

I'm sure you have more to say, but I want to respond because this also feeds into why Taleb doesn't really need to provide the mechanism for why GMOs could be ruinous.

First, organisms are complex and dynamic. There are vast number of components (which we don't fully understand) and feedback loops (which we also don't fully understand).
When you alter a component of an organism, you can't tell in advance the full implications of the change - you often can't tell for decades (smoking, hormone therapy, Vioxx). Corn is less complicated than people, but it's still a complex system.

There's an economic analogy. Let's say an economic development director (or central planner) is going to do A, B, and C to create improvements X, Y, and Z in some country.
Almost all of us following EconTalk would agree they're almost certain to screw something up, even if they achieve X, Y, and Z.

The fundamental issue is that we can't predict 2nd, 3rd, 4th, etc. order effects. You test, find out, and live with the consequences.
Say you improve health outcomes, but now you have overpopulation. Overpopulation strains water resources, causing water depletion, and eventually famine. If you saw these things coming, of course you'd avoid them.

However, none of us has perfect information. People keep intervening in dynamic systems, and there continue to be unintended consequences. You can rarely tell what's going to go wrong; otherwise you'd design around it in the first place.

Why is genetic engineering more like economic planning than designing a toaster? Feedback loops and the number of components make it a lot more complex than toasters.

Justin P writes:

Martin Dertz -

Taleb's use of the PP would be bolstered by including trusted geneticists and/or biologists (who don't benefit directly from GMOs) to ensure the underlying assumptions about the system are correct.

I mentioned having Dr. Kevin Folta (And linked to his blog in case Russ wants to get ahold of him) on the program because he is exactly the kind of expert needed. He's a public university plant scientist and doesn't receive any funding from Biotech industry. (It is fun watching people play 6 degrees of separation to suggest Kevin is a shill.)

I feel that, instead of bolstering Taleb's argument, it would hurt it, as it would expose some serious flaws in base assumptions Taleb makes.

Another guest that would make a good counter would be Jayson Lusk, an agricultural economist at Oklahoma State, who wrote the Food Police.

The Carpenter Fallacy is only one flaw in Taleb's argument. He tried to make himself immune from criticism by preemptively claiming critics are using fallacious logic. It seems kinda high schoolish to say but...all Taleb has done is commit the biggest Fallacy Fallacy ever in a paper.


Mark K -

GMOs are fundamentally different than selective breeding.

Actually, they are not. They are just another tool to use. I think the question you should ask yourself is: compared to what? What I mean is, right now it takes about a decade of trials and tests before a GE trait can be deregulated for market. Contrast that with the zero years of trials, zero regulatory oversight, and zero years of controlled laboratory tests done on any other breeding method. All these tests help weed out any potential negative interactions. GE crops are the most heavily regulated and studied foodstuffs on the planet. Now again compare that to a chemical mutation that creates the Star Ruby grapefruit, which has no tests done before going to market.
With GE crops, there are (I know I'm repeating myself) years of row trials on small to large plots before they can even be submitted for deregulation for market. The next question, which was raised above, is: how much is enough? What is the sufficient time, and how many studies must be done, before you deem them okay? Because after that you have to think of costs and benefits. The benefits of GE crops are enormous, both monetary and environmental.

Now compare drought tolerance. Monsanto, in collaboration with other organizations, has been trying to develop GE drought tolerance in maize. Conventional breeding has done the same. Conventional breeding involved moving multiple genes into new hybrids, whereas GE techniques focus on just one or two at a time. The end result has been that conventional breeding outperformed at the task, but what does that mean from an NNT Precautionary Principle perspective? Both techniques came to the same end product. The difference is that one has to undergo testing and trials (GE traits); the other does not have to undergo any trials or tests of any kind. Which one has more risk? (Needless to say, Bt and glyphosate resistance were created by natural selection in the first place.)
Drought tolerance can be compared to papaya ringspot tolerance, where the GE crop actually saved the entire crop from near extinction, which it can also do for the American chestnut and the Florida orange crop. These are examples where conventional breeding hasn't been able to develop solutions to issues affecting farmers and consumers. These examples do more harm to Taleb's argument, because the Precautionary Principle is status quo bias, and going extinct, obviously, hurts the status quo.

Floccina writes:

That was a fear-inspiring discussion, and I am with Russ: there is no way to stop it. Still, I like our chances with biotech. (I am hoping for freeze-tolerant mango trees.)

Wouldn't his principle be better limited to cases where very small-chance events would likely surpass all the gains the tech produced, and then some?

Anton writes:

Great talk, but some comments here in support of GM lack logic.

I do not think Nicholas Taleb has to produce the mechanisms by which GMOs can be dangerous given that we never saw what hit us in the past in advance of the event (Hayek's statement about prediction), especially since he is positing that the problem is that the error rate goes out of control. So his point is whether circuit breakers are absent or not.

Sameer S writes:

Being a computational biologist, having experience with evolutionary biology/genetics as well as statistics, I feel I have some leeway to comment...

I was mostly struck by the off-handed justification for "fat-tails" in nature. Largely, the regulatory mechanisms that lead to a mouse being small and an elephant being large are the same! In fact, the accumulation of phenotypic changes can be described by Brownian motion (thin-tailed). These differences are attributed to natural selection but often are actually the result of neutral changes that accumulate and drift to fixation by chance alone.
See Phylogenetic comparative methods for some more details.
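Sameer's claim that accumulating many small independent changes stays thin-tailed can be checked with a quick simulation (an illustrative sketch only; the step counts and sample sizes are arbitrary choices, not from the comment). Sum many independent increments and measure excess kurtosis, which is near 0 for a Gaussian:

```python
import random
import statistics

random.seed(42)

# A discrete Brownian walk: many small independent increments.
# By the central limit theorem the endpoint is approximately
# Gaussian, i.e. thin-tailed.
def endpoint(steps=500):
    return sum(random.choice((-1.0, 1.0)) for _ in range(steps))

samples = [endpoint() for _ in range(5000)]
mu = statistics.fmean(samples)
sd = statistics.pstdev(samples)

# Excess kurtosis is ~0 for a Gaussian and large/unstable for fat tails.
excess_kurtosis = statistics.fmean(
    ((x - mu) / sd) ** 4 for x in samples
) - 3.0
print(round(excess_kurtosis, 2))  # close to 0
```

The estimate hovers near zero, which is the Brownian (thin-tailed) signature.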

Greg Linster writes:

Great episode! It reminded me of a popular quote from the Fatal Conceit that is mentioned often on the program. It goes something like this: "The curious task of economics is..."

The real challenge, in my opinion, is marketing the anti-GMO message to the voting public and to other nations. On the bright side, it does seem that many people find something intuitively wrong with GMOs already, as is evidenced by the growing body of legislation around aggressive GMO labeling. Furthermore, from what I've seen, Taleb and company have been fairly persuasive in the scientific community.

Anton writes:

@Sameer S. It is not true that a Brownian motion ends up thin-tailed: under summation and transformation (lack of independence) one gets power laws.
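One mechanism along the lines Anton gestures at can be sketched numerically (this is an illustration of the general point, not Anton's own derivation, and the lognormal mixing parameter is an arbitrary choice): once each shock is coupled to a random scale, the result has far fatter tails than the fixed-scale independent case, and with the right mixing distribution the tails become true power laws (e.g. Student-t).

```python
import random
import statistics

random.seed(7)

def excess_kurtosis(xs):
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return statistics.fmean(((x - mu) / sd) ** 4 for x in xs) - 3.0

N = 20000

# Independent, fixed-scale Gaussian draws: thin-tailed.
k_iid = excess_kurtosis([random.gauss(0.0, 1.0) for _ in range(N)])

# Couple each shock to a random scale (a simple stochastic-volatility
# scale mixture, breaking the "one fixed independent recipe" setup):
# the result has much fatter tails than the plain Gaussian.
k_mixed = excess_kurtosis(
    [random.gauss(0.0, 1.0) * random.lognormvariate(0.0, 0.5)
     for _ in range(N)]
)

print(round(k_iid, 2))    # near 0
print(round(k_mixed, 2))  # well above 0
```

The mixed series' excess kurtosis is several times the Gaussian's, even though every individual ingredient looks tame.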

Mort Dubois writes:

I just didn't buy the argument that GMOs will suddenly displace all natural life at some point in the future. First of all: animals or plants? I think that Taleb is worrying about plants, but I find it difficult to believe that GMO crops, which are presumably developed to enhance a certain subset of genes, are actually better at surviving the rough and tumble of the natural world than their natural counterparts. When you think about GMO animals, this is easier to see. Would Dolly the Sheep do better in the wild than a mountain goat? Would a Frank Perdue big-breasted chicken be able to escape from a hawk?

The modern economy rewards specialization. GMO products are highly specialized to optimize yields within a very particular, managed, human-controlled environment. It's very similar to what has happened to human jobs in the last few years. That specialization makes us more vulnerable to unexpected challenges, not less. The natural environment is chock-full of unexpected challenges - this is why human civilization is largely devoted to creating a different type of environment. For GMO crops to run amok, they would have to be able to suddenly break free of human control, and spread through a wild environment. It's hard to imagine that a variety of GMO crops could simultaneously perform this feat and wipe out the whole natural world.

I'm also surprised that Taleb, if he wants to worry about failure, isn't applying himself to global warming. The potential for disaster seems much larger to me, as there's an easily understood mechanism by which widespread harm can occur. So why food? It takes me back to the interview Russ did with Jonathan Haidt a few years ago about The Righteous Mind, and in particular the concept that one of the moral axes is Purity/Pollution. Taleb's focus on food seems like a variation on that issue - a modern version of Kosher laws. "Thou shalt not eat animals with cloven hooves, or man-made genome."

Barry writes:

I always enjoy listening to Nassim Taleb.

This episode made me think of another podcast I just listened to with Jesse Ausubel. He is a GMO supporter but suggests we don't really need GMOs - that we could reduce agricultural output by 50% and be fine. The amount of agriculture we use to make beef, whiskey, and ethanol might be made available for food.

The Long Now, my second favorite podcast, features guests who are mostly former interventionist environmentalists extolling the value of innovation. I often wish I could follow that show by hearing their guests interviewed on Econtalk.

Thanks,
Barry

Daniel S writes:

Great Comments overall. I think the thing that bothers me the most about the interview, and those defending Taleb's argument, is the idea that you don't need to provide a mechanism. Using that logic, I could just as easily argue that we should apply PP to nuclear energy because for all we know the event could go global and cause ruin. Of course that's silly because there's no contagion effect that could spread radiation globally and experts would be right to dismiss my statistical arguments to the contrary.

David McGrogan writes:

In response to Daniel S, I don't believe Taleb is using the PP as a trump to simply stop anything happening, ever. It's rather that a sufficient amount of evidence ought to be accumulated to prove that something is safe before it is done - but in a very small category of things no amount of evidence will really suffice when weighed against the risk, because there is one potential outcome (destruction of the global food supply) which is incredibly dangerous. Moreover, "sufficient evidence" is really impossible to come by in a fat-tailed domain like the environment.

With nuclear power we have sufficient evidence to speak with confidence about what the worst thing that could happen is, and sufficient evidence would indicate that it is "safe" (where "safe" means "may cause horrendous accidents, but is not a fundamental danger to life as we know it").

In other words, I think Taleb is putting agriculture or the global food supply or whatever you want to call it into a very small category of things where there is a 'systemic risk', as he puts it - a category which I think otherwise includes only the climate and the global financial system. Here, risks may be low but potential outcomes are ruinous, and these are 'fat-tailed domains', so the PP requirement for sufficient evidence will probably never be met.

Michael Byrnes writes:

Daniel S wrote:

"I think the thing that bothers me the most about the interview, and those defending Taleb's argument, is the idea that you don't need to provide a mechanism."

I think the point is that we aren't as smart as we think we are. There was, for example, a plausible mechanism behind the use of hormone replacement therapy to prevent breast cancer - plausible enough that this approach entered clinical practice... until a large randomized trial found that HRT actually increased the incidence of breast cancer. Oops. You don't have to look that hard to find numerous examples of people overstepping their knowledge, to bad effect.

I would have liked to hear Taleb compare and contrast GMOs with more "primitive" innovations such as selective breeding (which has been going on for as long as agriculture itself) and the introduction (intentional or otherwise) of various species into new regions. (Examples: zebra mussels causing problems in the Great Lakes, cane toads in Australia, gypsy moths.)

Mark Bomford writes:

Picking up on Mort's rhetorical question, "...would a perdue chicken be able to escape from a hawk?"

Clearly not, and this can be generalized to any life we've domesticated (including ourselves, but that's another matter). The traits that we breed into our domestic plants and animals, by any mechanism, are genetically costly for that species and only confer a competitive advantage in the presence of sustained (and also costly) human intervention. In the case of today's HT crops, this means that if nobody intervenes to apply a broad-spectrum herbicide at a specific time, the species stands as much chance as a Perdue chicken against a hawk.

In other words, domestication comes with a built-in dead man's switch.

If you're going to invoke a generalization like, "complex systems are beyond our understanding," it seems highly improbable that our set of "unknown unknowns" in the fat tail would contain exclusively positive feedback loops and no negative ones.

Chas writes:

@Justin P

You seem to have not read the paper or listened to the podcast, as your concerns are directly addressed by Taleb in both.

Re: carpenter fallacy - You have either misunderstood the argument or are intentionally creating a straw man here. The point is we don't need to know the type of wood or style of joints used to understand the mechanism of the roulette wheel; a person versed in probability can observe and understand the process without these details. Likewise with GMO- the specific organisms or genetic modifications themselves are not relevant to the risks Taleb worries about; observing nature (and its fat-tailed processes) is sufficient.

Re: difference between GMO and controlled breeding - From the paper: to claim that GMO is effectively equivalent to natural selection (includes breeding) misses the process by which things become "natural." Genetic modification via breeding requires many iterations with numerous small errors ("bottom-up" in Econtalk terms)- it is these errors that give nature its robustness. This is categorically different from "taking a gene from a fish and putting it into a tomato;" it is the process (and the exposure to numerous small errors that accompanies the natural process) that matters and is missing in the GMO case. This is what Taleb means when he refers to "skipping steps" in the interview ("top-down" in Econtalk terms).

Jayson Lusk writes:

Separating the probability theorist from the carpenter is not as easy as Taleb suggests. A probability theorist needs to understand the mechanism before accurate risk assessments can be made. There seems to be a lack of understanding of modern agriculture and modern plant breeding that makes me question the dire probabilistic forecast uniquely ascribed to GMOs. More thoughts here: http://jaysonlusk.com/blog/2015/1/21/taleb-on-gmos

Chas writes:

@Jayson Lusk

You are taking the carpenter fallacy quite a bit too literally and entirely miss the point for it. The probability theorist does not need to know the precise methods and tolerances to which the roulette wheel was constructed to understand the risk associated with the game of roulette. He can deduce enough to make an informed judgement by simply observing the game play. Likewise we can understand the mechanism of nature by observing it; we do not need to be versed in GMOs molecule by molecule to understand their potential effect on this mechanism.

As an aside, it is interesting that you make this argument, as it illustrates one of the naive notions that seems to permeate the biotech industry (or at least those that post on twitter). Suppose the roulette wheel is indeed slightly off balance. To determine the effect on probability of a given outcome in the game, one could take two approaches: he could take extremely precise measurements of the wheel, ball, and other materials used and combine these with physics equations to predict where the ball will be more or less likely to land, or he could simply observe the game play over several iterations. In the latter case, he will very quickly and reliably discover any biases in the wheel and adjust his assessment of risk accordingly. Contrast that with the former, where he is completely at the mercy of the precision and accuracy of his measurements, as small errors could meaningfully alter the outcome (and the tedium of his work creates many more opportunities for error). Bioengineers naively seem to think we ought to (or even can) determine risk using the former method. Even in the (thin-tailed) roulette wheel example this seems silly (unnecessarily tedious, although theoretically doable), but in the fat-tailed domain of nature, it is not even possible.
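Chas's "just watch the game" point can be made concrete with a toy simulation (my sketch, with made-up numbers; the pocket count matches an American wheel but the bias is hypothetical): give one pocket double weight and see how quickly plain counting exposes it.

```python
import random

random.seed(1)

POCKETS = 38  # an American wheel: 0, 00, 1..36
BIASED = 0    # hypothetical construction flaw favoring one pocket
weights = [2.0 if p == BIASED else 1.0 for p in range(POCKETS)]

# Observe the game: just count outcomes over many spins.
spins = random.choices(range(POCKETS), weights=weights, k=100_000)
observed = spins.count(BIASED) / len(spins)

fair = 1 / POCKETS  # ~0.026 on an unbiased wheel
# The flawed pocket's true frequency is 2/39 ~ 0.051, and plain
# counting exposes it - no measurements of the wood required.
print(round(observed, 3), ">", round(fair, 3))
```

No carpentry, physics, or precise measurement enters: the empirical frequency alone flags the bias after enough observed play, which is the observational approach Chas favors.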

Josiah writes:

Wouldn't Taleb's approach imply that we should ban international air travel? Humanity is potentially threatened by the outbreak of some pandemic, and air travel would allow the disease to spread throughout the world, whereas if people had to travel by ocean liner this wouldn't be an issue. Granted, banning air travel would be very inconvenient, but apparently that doesn't matter.

mtipton writes:

I have such a hard time understanding Taleb. Wish he could explain his ideas better. What is his main thesis? Some of the statistical references are flying over my head and the examples don't help clarify the point because they don't make a lot of sense. Did some wiki research to try and make sense:

Fat tails - a probability distribution for which the variance of the variable is not bounded. The fact that probability distributions are estimated from limited data can really distort your risk assessment for variables that have fat-tailed distributions, because you are under-representing the probability of a freak value, AND you actually don't know how "big" the freak value can be - how much it can deviate from the mean.

I think I kind of get this point about probability distributions......what does this have to do with GE seeds?

Would appreciate it if someone could explain his thesis in a concise fashion. Thank you.
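One way to see what the fat-tail point above has to do with limited data, in code (an illustrative sketch; the two distributions are arbitrary stand-ins, not anything from the episode): in a thin-tailed sample no single observation matters, while in a fat-tailed one a single freak draw can dominate everything seen so far.

```python
import random

random.seed(3)
N = 10000

def max_share(xs):
    # Share of the total (absolute) mass held by the single largest draw.
    mags = [abs(x) for x in xs]
    return max(mags) / sum(mags)

# Thin-tailed: Gaussian. No single observation matters much.
thin_share = max_share([random.gauss(0.0, 1.0) for _ in range(N)])

# Fat-tailed: Pareto with tail index ~1.2 (infinite variance),
# sampled by inverting the CDF: x = u**(-1/alpha), u in (0, 1].
alpha = 1.2
fat_share = max_share(
    [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(N)]
)

print(round(thin_share, 4))  # a sliver of the total
print(round(fat_share, 4))   # often a large chunk of the total
```

In the fat-tailed case the sample never "settles down": however much data you collect, the next draw can still swamp it, which is why Taleb says limited data understates the risk.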

Allen A writes:

I'm immediately skeptical of people who shift the blame to their interlocutors and feel no need to demonstrate that their side is correct. Taleb, in the episode:

"No, no, that's exactly what we want to avoid, having to talk about scientific evidence when the burden of the proof is on the GMO people to show us that they understand anything remotely about the tail risk. Which they don't."

In other words, I don't have to even argue that there is a threat; the other side has to prove there is no threat (and presumably, they shouldn't be allowed to sell GMO seeds commercially until Taleb is satisfied). I'm sorry, I don't trust anyone who starts with the presumption that the burden of proof is on the other side.

Taleb claims not to want to decrease risk-taking, but says it may take generations to see the risk:

"Evidence showed up late. Sometimes--even in one case across a generation."

So, on its face, Taleb's argument is that we need generations of testing before we can allow GMO food. Given the dollars and time scale involved, that threshold eliminates the very risk-taking he claims not to oppose, whether he wants it to or not (and I suspect he does). But I was willing to give him the benefit of the doubt until he ridiculed the threats of AI (at the very end, after the transcript ends as I write this) by saying, and I'm paraphrasing, that AI isn't a risk because we can always turn it off.

That seems to prove beyond a doubt that Taleb wants to pick and choose which risks he thinks we should prevent people from taking. The biggest risk that many AI people are concerned about is that we WON'T be able to turn it off, so Taleb just assumes away the risk. Further, Taleb seems to ignore the possibility that if GMO food becomes a problem, we can turn it off too (by no longer planting GMO seeds). We have seed banks that preserve non-GMO seeds, and it wouldn't take more than a growing season to revert to non-GMO corn.

After listening to some (not all) of the podcasts dealing with AI, I think over the next two generations, the possible economic dislocation of lower-skilled workers through AI poses a MUCH greater risk to our society and even our existence than GMO corn. I would take Taleb's concerns much more seriously if he didn't seem so quick to dismiss what other people perceive as existential threats.

Steven Slezak writes:

The BBC has a good infographic on apocalyptic threats facing the planet. GMOs are not among them. Nuclear winter is a more probable fate.

Historically, the BBC says there have been five extinction events in the last 450 million years. Taleb thinks GMOs will precipitate the next.

Probability theorists will tell you that if you flipped a coin 20,000 times and won $1 for each head (and lost $1 for each tail) that turned up, the expected value over the long run should be $0. In fact, if you run 500 simulations of 20,000 coin tosses, $0 is the actual final value only between 0% and 2% of the time. Mandelbrot pointed out that randomness is not well understood, even by probability theorists.
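The coin-toss figure is easy to sanity-check (a sketch using the commenter's parameters; not from the original comment): the chance that a 20,000-toss walk ends at exactly $0 is roughly sqrt(2 / (pi * 20000)), about 0.56% per run, so across 500 runs only a handful land exactly on zero - consistent with the "between 0% and 2%" claim.

```python
import math
import random

random.seed(11)

TOSSES = 20000
RUNS = 500

def final_winnings():
    # +$1 per head, -$1 per tail over 20,000 fair tosses.
    heads = bin(random.getrandbits(TOSSES)).count("1")
    return 2 * heads - TOSSES

endings = [final_winnings() for _ in range(RUNS)]
frac_zero = sum(1 for w in endings if w == 0) / RUNS

# Theory: P(ending at exactly $0) = C(20000, 10000) / 2**20000,
# approximately sqrt(2 / (pi * 20000)) ~ 0.56% per run.
approx = math.sqrt(2.0 / (math.pi * TOSSES))
print(round(frac_zero, 4), round(approx, 4))
```

Note this is the standard random-walk result, so "expected value $0 but almost never exactly $0" is unsurprising to probability theorists; the typical deviation from zero after 20,000 tosses is on the order of sqrt(20000) ~ $141.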

The criticism that Taleb fails even to attempt to describe a process or mechanism by which the GE apocalypse would unfold is valid and devastating to his argument. Such a mechanism may not be impossible, but it's hard to assert with certainty the existence of a mechanism one cannot describe. As others have pointed out, it is easier to make the assertion than it is to make a convincing case in support of it.

GMOs present a problem of uncertainty, which does not yield itself to probability calculations. Probability theory is not appropriate to problems of uncertainty. And Taleb is a probability theorist, not a carpenter.

Nathan Coles, PhD writes:

I am a long time listener of EconTalk, but this is the first time I have felt compelled to comment on an episode. I am a plant breeder, using both native and transgenic technologies to improve crop production and quality. I do NOT work for Monsanto, but I defend their right to use this technology. I found the information provided in this episode to be lacking. During the entire episode, Nassim was conflating his fear of large agribusiness with the technique of altering DNA through recombinant DNA technology. Ecological collapse and human extinction were among the fears of GMOs raised in this podcast. But none of these fears is founded in the technology that Nassim is decrying. Humanity has been selecting for favorable genetic mutations in crops for millennia without the systemic failure Nassim fears. Recombinant DNA technology is just the next logical step in the evolution of plant breeding.

I'm sure that I am just a foolish carpenter trying to tell a statistician the probability of his next roulette win. In this analogy, however, Nassim would not be the statistician telling me the odds of winning. That is the job of the government regulators, statisticians, peer reviewers, and independent boards who work with biologists every day to make sure that the transgenic crops we produce are as safe as any other crops sold. Instead, in this analogy Nassim would be the man outside the casino yelling to everyone the odds that the casino will collapse and kill all inside. While there is always a chance that this building could collapse, there is an almost equally likely chance that the one next door collapses, or the one next to it, or the one down the street. There is no logical reason to fear GMOs, other than that they are a relatively new technology and new things can be scary.

Russ, now that you have ventured into the field of GMOs, it would be responsible of you to bring on a GMO biologist or a regulator to discuss this technology in a future episode. I would find their opinions on this topic far more credible.

Anyone who would like more information about GMOs should go to http://gmoanswers.com

Mark K writes:

It seems like people are posting without reading prior comments (which there are a lot of).

To all pro-GMO people, please tell me why it isn't possible we might accidentally introduce a gene grouping that has seriously harmful consequences for humans but was undetectable for decades?

First, we don't have a thorough understanding of human biology. The problems with hormone replacement therapy, Vioxx, and our continuing ignorance of something as common as SSRIs clearly demonstrate that. So if some sort of harmful chemical were introduced, why is there any guarantee we would detect it before millions are affected?

Second, how can we possibly be sure we'll understand the long-term consequences of horizontally transplanting multiple genes? We have a huge amount of difficulty understanding drug interactions, let alone the interactions among multiple horizontal gene implantations.

Yes, we've been selectively breeding for thousands of years, but it's both slower and requires numerous iterations. If there were a problem with some gene variation, there would be much more time to detect it. As things are, how can we be sure we'll find out there's a problem with adding a group of genes before hundreds of millions eat products containing poisonous corn?

Combine speed (a particularly favorable group of genes could be added to 80% of corn in under a decade) with the inability to understand the interaction among multiple horizontal gene implants, and you have the potential for ruin.

No? Then why not?
I was actually pro-GMO before this podcast - Taleb made me realize genetic engineering is a complex, dynamic domain where we'll never be able to fully predict the consequences.

Martin Dertz writes:

@ Chas

Perhaps I'm missing something in my reading of the PP paper as well. Taleb et al. state that, in order to avoid paranoia and paralysis by decision makers, the PP should be invoked only for decisions which have a non-zero probability of causing ruin. Therefore, if it can be demonstrated there is a possibility of a course of action creating systemic risk, it ought to be avoided. The issue raised in the comments, which is very well articulated by Jayson Lusk, is that there are a large number of biologists/ecologists who disagree that there is a possibility of systemic impact b/c some of the assumptions made by the authors about the interconnectedness of the system's components are false; that a more nuanced view of GMOs needs to be considered before blanketing them all with the PP stamp. The actual mechanism which may cause ruin doesn't need to be demonstrated, but certainly the entanglement of the system does (and I don't think the argument 'b/c .

On a semi-related note - while reading the PP paper I was disappointed to see Taleb et al. cite the Ebola outbreak in 'east Africa' as a demonstration of global harm possibilities. Maybe he'd dismiss this criticism as nitpicking (he shouldn't, given some arguments I've seen him engage in wrt East vs. Near East vs. Middle East vs. Orient, etc.), but I think it's a microcosm of a larger issue: Taleb making claims in areas in which he has a serious lack of domain knowledge, which limits the impact of his (brilliant, I think) ideas. In the Ebola case, he has been citing the non-linear rate of spread as a reason for the world to be afraid of Ebola. But if you research Ebola (at least enough to know it is in West Africa!), you'd know the current epidemic is spreading uncontrollably because of a host of local issues in Sierra Leone, Liberia, and Guinea, including culture, lack of infrastructure, and lack of human capital, which just don't exist in other countries. A concrete example on infrastructure and context: in terms of travel times, it'd be the equivalent of someone suspected of Ebola in Dallas going to Fort Worth to draw blood, the blood going to Wichita for testing, and the patient going to Little Rock for treatment. That's just not the case in the US, for example, and is why (among a host of other local differences) there's no reason to fear it causing a global epidemic: it's ultra-dangerous in undeveloped urban environments; elsewhere, not so much.

Glenn Mercer writes:

This discussion needs more comments like the USA needs more reality shows on TV, but a few GMO points:

1. Do we think "natural" processes are more inherently benign? Volcanoes are natural, asteroids are natural. If we leave an infant out in the snow, in the natural environment, away from its artificial house with its furnace, it dies.

2. I know this verges on an ad hominem attack (gulp), but I think NT didn't like GMOs and then dreamed up this line of assault, rather than the other way around.

3. GMOs are not unitary. I would agree with NT if (sort of like in the movie Interstellar) we all depended on ONE crop globally. Then I would NOT want to monkey with it. (Hmm, bad word choice....) And if conditions were the same globally. But neither applies: we have a huge diversity of crops, and strains within them, and legacy seeds in vaults, etc. And just because we are joined globally doesn't mean everything flows globally. Smallpox did, damn straight, but just because I can fly from Melbourne to Murmansk doesn't mean I'll be growing the same plants in both places.

4. Okay, now I'll take a cheap shot: looks like the FDA works on this principle... and are we happy with how well that is going?

End of rant.

Glenn Mercer writes:

Darn, wish I remembered this before: using this principle, we'd never have turned on the Large Hadron Collider, right? Unknown risk of entire planet being sucked into a black hole.

Okay, I'm done.

Jon writes:

Russ,

What is with the apocalyptic themes: GMO, AI, global warming? It makes for interesting chatter but also plays to the mob...

80% of the public in one survey thinks that DNA shouldn't be in food.

My wife and I have created 3 GMOs. We fervently hope that on long time scales they are extremely invasive. I doubt that any intellectual Precautionary Principle could have prevented this event...

Would very much enjoy an antidote to this episode, perhaps a talk on the FDA (referred to at the recent American Epilepsy Society meeting as "Revenge of the C students") or mission creep at Institutional Review Boards...

Robert Swan writes:

I typed up this comment before reading any others so that I wouldn't be swayed by the crowd. Hard to know whether I would have been.

With other Econ Talks, when I come away thinking there wasn't much in it, I listen again; in fact I did that last week with Greg Page who was a bit standoffish at first listening, but I was more prepared for him next go. I won't be listening to the Nassim Taleb interview again. There really wasn't much in it.

As he says, the nature of the tails is that they cannot be explored. It therefore leaves me perplexed how he knows that nuclear power and AI don't have lurking "fat tail" problems, while genetic engineering and climate change do. "Because I say so" doesn't strike me as evidence.

He apparently believes that "Nature" is benign -- that if we do nothing, nothing will go wrong. As I understand it, it is widely accepted that a rather fat-tailed black swan fell from the sky and ended the reign of the dinosaurs. Who's to say one won't land on us?

His view on GMOs isn't compelling. Evolution relies on occasional random mutations, some of which die out, some of which lead to immediate advantage, but most of which lie dormant, only wakening if some new selective pressure makes them important. The upshot is that every cell division, many trillions per day, has a small chance of randomly mutating; and sometimes it will be into something bad. A benign bacterium can become malign. A minor virus may turn into a major virus. The game of Russian roulette is going on anyway, and utterly out of our control. Will a few thousand humans experimenting with DNA really add any extra uncertainty to the mix?
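Robert's point about trillions of daily cell divisions can be sketched numerically. A minimal illustration, with an invented per-division rate (not a measured biological value), of how tiny per-event risks compound over vast numbers of independent trials:

```python
import math

# Illustrative only: p_bad is an invented number, not a measured
# mutation rate. The point is how negligible per-event probabilities
# compound over trillions of independent trials.
p_bad = 1e-15       # assumed chance one cell division yields a harmful mutation
n = 10**13          # "many trillions" of divisions per day

# P(at least one) = 1 - (1 - p)^n, computed stably via log1p
# to avoid rounding 1 - p to exactly 1.
p_at_least_one = 1 - math.exp(n * math.log1p(-p_bad))

print(p_at_least_one)  # roughly 0.01: negligible per division, routine at scale
```

For tiny p the result is approximately n * p, which is why "Russian roulette is going on anyway": rare per event does not mean rare in aggregate.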

The real risk, which Taleb conflates with GMOs, is monoculture. That many farms grow identical crops is indeed a risk. But the Irish potato famine came well before GMOs.

Taleb's dismissive attitude to the possible dangers of AI struck me as particularly shallow -- not that it worries me, but if it amplifies human technology, does it not increase the ability to do "bad things" like creating GMOs?

When they were about to test the first atomic bomb, the physicists discussed what the risk might be that the atmosphere would ignite and pretty much sterilise the planet. We now know that it was not possible, but at the time they didn't. They thought the risk small and went ahead and tested the bomb anyway. It says something of human endeavour, courage and, yes, folly. They're all part of us.

As far as I am concerned:


  • Humans are part of nature, and whatever we do is also part of nature. A bear digs a den, ants build a nest, we build skyscrapers, computers and GMOs. Basically, nothing that happens is outside nature and natural/artificial is a useless distinction, just emotional claptrap.
  • We have reached this point largely by using our brains to overcome the disadvantages of our physical weaknesses and tilt the competition in our favour. The game is a complex one, and every individual has his own idea of the score. Urging us to stop using our brains is unlikely to be a winning strategy.
  • There is potential for a "fat tail" on the positive. If we refuse to play, we may miss out on something spectacularly good.

In Genesis there is the tree of knowledge of good and evil, but I rather like the somewhat similar Richard Feynman anecdote -- not sure which of his books I read it in, though I'm pretty sure it was in relation to the Manhattan Project -- the upshot being that the door to Heaven and the door to Hell are the same door.

Bill Boyd PhD writes:

The central problem with the paper is found here: "GMOs have the propensity to spread uncontrollably, and thus their risks cannot be localized. The cross-breeding of wild-type plants with genetically modified ones prevents their disentangling, leading to irreversible system-wide effects with unknown downsides." Ruin, that is, because modified plants can breed with wild-type plants. Not quite: they can breed with some plants -- corn with corn, cotton with cotton. They cannot breed with all plants, only those within the same species. Even within species this is problematic; for example, teosinte, from which corn originated, is immune.

How can I say it won't? This leads to a question concerning models of the natural world. That boils down to two specific models: DNA and quantum mechanics. These are hardly fat-tailed models. To put it in perspective one can ask, "Why won't the Large Hadron Collider create a black hole?" Clearly "ruin" would result. Is there a probability this could occur? Yes, but it would depend on the standard model of physics being wrong. Probability of that?

I raise this example for a reason. The authors use the carpenter fallacy oblivious to the role statistics plays in the natural sciences. Quantum mechanics, and by extension biochemistry, is almost entirely probabilistic, and this began with Heisenberg's uncertainty principle. Physicists actually had to answer that question about black holes. That the authors apparently do not know at least this demonstrates how flawed their article is. I can't wait to see what journal publishes it.

mphilip writes:

First, I think that Taleb is a great guest and that the risk of fat tails is both an important topic and one where we do not pay enough attention.

With that out of the way, let me sling some criticism. While stated clearly by many above, I feel the need to echo.

I don't want my statistician telling me how to design cabinets any more than my carpenter regaling me with his risk analysis.

Subject expertise is very important and often dismissed by Taleb.

I would really like to get a better understanding of how and when to apply the PP. After listening to the podcast, the only answer I have is "When Nassim says you should."

J. Austin writes:

Today, on a podcast that has, more ably than any other broadcast in recent memory, dedicated itself to extolling the merits of distributed thinking and decision-making and exercising the utmost caution with respect to top-down proclamations and edicts from so-called experts, we have heard from a guest who was not only admittedly and demonstrably out of his element with respect to the subject matter, but whose express purpose in appearing on the program was to persuade us listeners that elite modelers and statisticians such as himself can do better than hopelessly parochial "carpenters" plodding along in their narrow biological disciplines and that on that basis, we should enthrone him atop a command-and-control policy-setting hierarchy and categorically subordinate, if not outright silence, the dangerously ignorant "carpenters."

I imagine the ranks of the Politburo were lined with rhetoricians every bit as subtle and vainly self-confident as Taleb, no doubt abetted by peers and supporters every bit as blithely worshipful as our host was today. That these two should trade jabs about those very same central planners is a dark absurdity I suspect Orwell himself would marvel at. What is the precautionary principle? Taken to the extreme, it would prohibit the introduction of the Internet, the printing press, antibiotics, fire, and probably human language. Taken more reasonably, it's a moderate "look before you leap," or simply, "be careful," as if elaboration could serve any purpose but to obscure understanding.

What this episode lacked in educational content on GMOs or how/why they really are more dangerous than nukes, viruses, AI, etc., it certainly made up for in artful and elaborate twists and perversions of logic and reason, with respect to the formulation and evaluation of public policy. That being the case, while I will not be voting for it as "best episode of 2015," I may consider it for "best unintentional piece of statist propaganda."

That I use the term "unintentional" is a tribute to our normally outstanding host whose tireless work has been instrumental in building the online community whose thoughtful and articulate comments above are nothing short of delightful. Econtalk, you are still a national treasure, and you still have my undying devotion. I can't wait for next week. It won't be a hard act to follow.

Mark K writes:

@Bill
You say quantum mechanics, and by extension biochemistry, are entirely probabilistic. I assume you mean probabilistic in the sense that we can actually determine the probabilities with some confidence.

My issue (and Taleb's) comes in at the larger scale. How can you possibly determine the effects on an organism (a complex dynamic system where we can't get a handle on Nth-order effects) of multiple horizontal gene implantations? You can test and find out eventually, but how can you possibly know in advance?

If you don't know in advance, how can you possibly say GMOs will continue to be safe since there's no requirement for such extensive testing?

Mez writes:

I'm glad to see that at least a few people in these comments have pointed out the fallacy: just because GMO crops are grown all over the world doesn't mean the same failure will happen everywhere, and even if it did and an entire crop went extinct -- even corn -- that doesn't mean civilization as we know it would be in danger. He does a very bad job of explaining why his argument wouldn't hold for all scientific innovation that gains broad utilization, and therefore why it wouldn't imply slowing down all innovation, which would be absurd.

Anton writes:

@Jayson There seems to be a problem with the reasoning of those arguing against Prof. Taleb by saying he shows no evidence of eventual Black Swans. This is naive and, strangely, is the very topic of his paper.
He is saying there is no convincing material showing that GMOs cannot produce huge tail risks, and the arguments brought here show it.

GregS writes:

In the May 3, 2010 interview Taleb did for Econtalk, he discusses convexity. He explains why a probability (or more precisely someone’s estimate of a probability) can’t ever be zero, because there is always uncertainty around that estimate. He said it very well, and it was an extremely clear-headed explanation of a statistical principle. In this interview, he’s insisting that the risk needs to be zero. He actually insists, “But few understand that risk needs to be zero. Not small.” I guess this is my problem with his argument. You never get “zero risk,” you never get a “probability of zero,” at least according to an argument that Taleb himself believes. It’s not at all clear when his precautionary principle applies. A risk can be so small that it’s ignorable for all practical purposes. A risk can be much smaller than, say, the chance of a world-killing meteor strike or a super volcano or extreme solar activity, at which point we can probably ignore its marginal impact. An appropriate degree of caution is certainly justified, but magnitudes and costs must always be considered.
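GregS's point from the earlier interview can be made concrete with a toy calculation (all numbers invented for illustration): even if a model assigns an event a probability of exactly zero, any doubt about the model itself puts a floor under the effective probability.

```python
# Toy illustration (invented numbers): averaging over model uncertainty.
# Even if the model says p = 0, giving the model itself anything less
# than 100% credence makes the effective probability nonzero.

p_given_model_right = 0.0    # the model's own estimate of catastrophe
p_given_model_wrong = 0.5    # agnostic fallback if the model is wrong
p_model_wrong = 0.001        # your doubt about the model

p_effective = ((1 - p_model_wrong) * p_given_model_right
               + p_model_wrong * p_given_model_wrong)

print(p_effective)  # 0.0005 -- small, but not zero
```

This is the tension GregS identifies: if estimates can never reach zero, a precautionary principle that demands zero risk can never be satisfied.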

Bill Boyd writes:

@Mark
The problem with your and Taleb's model is precisely that it's a single-equation, non-dynamic model that tries to capture risk in a complex dynamic system, while apparently missing that there are models in existence that capture this.

Indeed in the paper you posit steps to ruin. Here's a simple model. Suppose the road to ruin is not a single coin flip, as in the fat-tail model in one part of your paper, but a multi-step process such as the one described in the GMO section.
You would have Probability A x Probability B x Probability C, i.e. 0.5 x 0.5 x 0.5 = 0.125, or 12.5 percent. Simplistic, but this somewhat captures the Fukushima disaster. What was the probability of an earthquake of a certain magnitude occurring; what is the probability it would generate a tsunami of a certain size; what is the probability that the plant would have a core meltdown given these events? There are probabilities attached to each of these, and even with a tsunami model that is complex, dynamic, etc., the engineering should take into account the probabilities of the previous two. (It didn't, so Fukushima was close to ruin.)
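A minimal sketch of the product-of-steps arithmetic above (the 0.5 values are Bill's illustrative coin flips, not real estimates):

```python
from math import prod

# Bill's coin-flip illustration: ruin requires every step in the chain
# to occur, and assuming the steps are independent the joint probability
# is just the product of the step probabilities.
step_probabilities = [0.5, 0.5, 0.5]   # earthquake, tsunami, meltdown (illustrative)

p_ruin = prod(step_probabilities)
print(p_ruin)  # 0.125, i.e. 12.5 percent
```

Note the independence assumption doing the work here: if the steps are correlated (a bigger earthquake makes both the tsunami and the meltdown more likely), the simple product understates the risk.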

In terms of GMOs: what is the probability that they will transgenically spread, that these transgenic plants' characteristics will be superior and in turn dominate plants without GMO characteristics, and finally that a blight eventually affects these now universally spread plants?

So you have similar but multiple possibilities. First, not just that a GMO will spread to another species but to the entire plant kingdom -- my point about DNA -- but you are also positing that geneticists will discover a universal gene and not notice. I assign the probability of transgenic spreading zero. You can disagree with that, but at least acknowledge the fact and explain how. Note too that mathematically a probability of zero also means no tail; after all, it's "moments about the mean."

The next two steps actually encompass your complex dynamical systems and deserve comment. Would current GMOs, which contain two qualities -- a weak pesticide and resistance to a herbicide -- spread "in the wild"? As to the first, lots of plants have strong pesticides already; after all, arsenic is a naturally derived substance. These have never been universal super-plants as described in the paper. The second, herbicide resistance, would offer no natural advantage. And herbicides such as Atrazine were developed and used on corn long before GMOs. Should this occur, broadleaf weeds would invade human spheres. No one has ever found a broadleaf weed that had taken on corn genes and thus resistance. In fact Atrazine, which has been in use since 1955, is still effective. Probability very low.

Finally, could a "blight" arise that successfully wipes out the entire, now genetically engineered, plant kingdom? Almost certainly one could arise, but once again the way nature works, say in predator-prey models or in epidemiology, is that successful blights must at least leave some of their victims alive. Of course there is an evolutionary history of blights arising that wipe out whole species, but that involves humans.

The truly odd thing is your introducing clones into the discussion. Bananas are clones; there is currently a blight affecting them and spreading. Surely this should affect your paper as more than an analogy.

So the problem with this model is that it does not actually capture risks in a complex dynamic system at all. It should at least mathematically tie the model in the first part to the multi-step process described in the GMO section. It is clearly a paper that needs a lot more work. Probably drop the GMO section and add a mathematical section that actually ties the fat tail to actual complex models of nature.

Dave N writes:

First a comment on the Precautionary Principle and nuclear energy. While it's true that a single power plant disaster does not threaten the world, the research into nuclear energy that brought us the Cold War certainly did. It's quite easy to imagine a scenario where most of mankind's progress over hundreds of years could be wiped out in the space of a few hours. Seems like a no-brainer for the PP and as a result much more care and consideration should have gone into the process. But of course it all took place in the shadow of WW2 and so no PP was ever going to apply.

As for air travel I think we should be taking the danger posed by rapid virus spread much more seriously and health screening should be a much higher priority for international travelers. It doesn't mean 'banning' air travel.

As for GMO's I would argue for a more measured approach there as well. They may be subject to a series of trials etc., but just the fact that such a high % of various staple crops are now GMO in such a short period of time seems to me a violation of the PP almost by definition. This is a far cry from natural selection taking place in regional areas over much longer time scales.

I would even go so far as to apply it to diet. We have made major changes to our Western diet over the last 20-30 years and it looks like we have really opened Pandora's box on that one.

Robert Swan writes:

Dave N, It sounds to me like you would like the Precautionary Principle applied to everything. My problem with that is that the Precautionary Principle is not defined. The words express a nice sentiment, but how does it translate to policy?

To take your example of nuclear research and the hazards of the Cold War, the arms race, MAD and all that, can itself be justified by the precautionary principle, with each side hoping not to need the weapons, but perceiving that they must build them as a discouragement to those loose cannons on the other side.

Likewise, do you not find it conceivable that denying future generations the benefits of genetic R&D could cost just as many human life-years as the worst-case scenario of genetic R&D gone wrong that you fear? How can this be weighed up?

Maybe we just accept Nassim Taleb's arbitrary rulings on what is or is not precautionary.

Without a clear definition the PP is a mere slogan.

Bill Boyd writes:

@Mark
In answer to your question, "How can you know...the effects of multiple horizontal gene implantations?" The answer again is the molecular structure of DNA. It has been mapped; that is what makes genetic engineering possible. We know, for instance, that Neanderthals probably had red hair. The "model" of DNA makes precise predictions possible. All engineering rests on the ability to take the order of the natural world and make precise predictions with it.

Mark K writes:

Bill,
Great comment, very well thought out rebuttal to Taleb.
I'm persuaded the chance of ruin is near 0, but we could certainly still destroy smaller ecosystems if not careful. Imagine we make some crop much more heat resistant (which I imagine is coming as global warming continues); isn't there then a non-zero chance it could end up entering and completely disrupting some new ecology? It wouldn't be ruinous or new, though... we've done it multiple times before genetic engineering.

As for engineering taking order of the natural world, there's a difference between engineering appliances and financial engineering.

I agree we have a very good mapping of DNA. I agree that we can probably predict the effects doing something like altering alleles to give someone blue eyes (or make peas smooth). However, adding entirely new genes from other species would (to my guess) bring us solidly out of the realm of the "engineering", at least out of the realm of the predictable.

If you're adding multiple genes from multiple other species, couldn't you generate an effect that's greater than the sum of the parts? In their native species, gene A does X, B does Y, and C does Z. However, when you add them to a new species, you have three completely new cellular alterations - couldn't they combine to add a completely new effect Q? My problem is that I think Q could be harmful to humans, and medicine won't necessarily be able to figure that out before hundreds of millions consume the plants with it.

I'm not against GMOs, I just think they should be regulated at least as heavily as medicine. Most pharmaceuticals have a much smaller market, but if we make a mistake with GMOs, it could be most of America that suffers.

Dave N writes:

Robert, I'm not sure how you took away that I would like the PP applied to everything. Perhaps my comment about diet at the end muddied the waters so I'll flesh that out at the end. As to how the PP "translates into policy" I've no idea. I don't expect politicians to have the faintest idea regarding most of the topics that get discussed here much less this one. But I have to say I was surprised at the dismissive backlash in the comments.

As it happens, I don't think we've done very well at risk management in general and the PP seems to be trying to address the most extreme, most dangerous manifestations of this. My take on Taleb's explanation of the PP is that it applies where the systemic risk is global in reach and potentially ruinous along the lines of doing more damage in a relatively short period of time than all the accrued benefit.

Hence my comment on nuclear power and the obvious dire consequences of even a limited nuclear exchange. Even without the pressures of WW2 I imagine we still would have rushed headlong into doing things like painting clock faces with glow in the dark radioactive material before we even understood the effects of radiation on the human body. But that's a side issue and not what the PP is about. Shouldn't a civilized species have been able to reach a solution that didn't involve building thousands and thousands of nuclear weapons with the potential to obliterate our species?

We suffer from survivorship bias since our current world has avoided the worst-case nuclear scenarios. But I imagine mankind would have a much different take on those past decisions if there were just a billion or two people left struggling in a post-apocalyptic world. And it would even be interesting to know how many of those making decisions on the Manhattan Project lost any sleep over unknown unknowns.

Similarly, if a particularly nasty virus is spread to a few dozen countries in a matter of days and ends up killing a billion people don't you think we're going to look back and say "Hmm, we should have been a bit more careful about the globalized movement of people"?

Yes there is a cost to being too cautious, but I'm not saying ban air travel. I'm saying be more cautious where these extreme scenarios are possible. And to that point I have to laugh at all the comments about how 'we understand the risks with GMO's, yada, yada" Really? Have we still learned nothing about the nature of risk? Remember Black Swans? There are always risks we don't even know are there. This in addition to understanding and managing the ones we actually can envisage.

By all means do GMO research, but maybe go a bit slower on the rollout? I don't even think the issue is with direct consequences on human health; I think he's talking about global ecosystem risks. I can think of a big one off the top of my head. How about bees? Just look at the concern over bee colony collapse. Let's speculate on a GM side effect that wipes out 95%+ of the bees. Or perhaps disturbs the soil ecology significantly? I'm guessing we understand the symbiosis of the microorganisms in the soil about as well as we do the ones in our own gut. Let me sum it up this way: Is our current agriculture system becoming more fragile or more anti-fragile?

Global warming and climate change are another obvious candidate for the PP from Taleb's own words. Maybe we get to 2100 and the temp's only gone up 1 degree and we have more time than we thought. But what if we get to 2100 and temperature is up more than 10 degrees? Once again, I imagine that the people alive then will be cursing us with every breath for failing to act more cautiously, while trying to adapt to the absolute chaos and destruction they've been left with. (Or perhaps it will be brought on by rushing to geo-engineer our way out)

I suspect Taleb would disagree with me regarding dietary changes, though I know he's said in the past that he's not a fan at all of these new 'foods' and prefers the Mediterranean diet that his ancestors ate. But I think it's becoming obvious that the Western diet in general is very bad for our health and that the most recent changes are particularly harmful. The current GMO crops may turn out to be safe for consumption, but all this sugar, seed oils, trans fats, etc. is clearly not.

The US and several other countries are in the midst of an unfolding health crisis that's resulting in a mind-bogglingly huge number of 'life years' lost, not to mention putting an enormous burden on the health system. But even that's probably not a case for the PP, just some sensible risk management.

Via negativa

Robert Swan writes:

Dave N, my "applied to everything" was a bit of hyperbole -- you having only cited things you saw as being candidates for the PP. I shouldn't have said it as it added nothing to my point.

By "policy" I wasn't intending to mean only public/government policy, but any decision making. Just what is this principle of precaution? How does it help Taleb decide that we shouldn't be planting GMOs? How does it help Taleb decide that nuclear energy is OK and should go ahead? How can this one "principle" bring you to a different position on nutrition from what you think Taleb might reach?

I think the old cliche of "err on the side of caution" probably captures what people understand the precautionary principle to be. That's easy (to the point of uselessness) when we are approaching a cliff: back off. But what do we do when we're on a ridge?

IMO, any contentious issue must be a ridge, not a cliff, and any people recruiting the PP to their side are doing little more than saying "I'm right because I am".

Here's a different example. On the one hand, a surgeon may remove the barest minimum amount of bowel to remove the cancer, minimising loss of bowel. On the other hand, he can take a good distance on either side of the cancer, to be confident that all cancerous tissue is removed. (Just as an aside, I have heard different surgeons refer to each of these diametrically opposed policies as "conservative".) There are risks with either approach. The PP can be used to advocate either extreme, but, should it ever come to it, I'd prefer my surgeon to choose neither extreme but the right blend.

Josh Marvel writes:

Many people don't know about the Southern Corn Leaf Blight epidemic that occurred in the 1970's. I loved this episode because of the localized vs. global perspective. Luckily, this epidemic was localized, but could you imagine this on a global scale? Here is a link that describes the event in further detail.

http://www2.nau.edu/~bio372-c/class/sex/cornbl.htm

Also, I liked that he touched on invasive species. One invasive species may not cause much trouble, but when you have Dutch Elm Disease killing all the elms, Chestnut Blight killing all the chestnuts, the Asian Longhorn Beetle killing our hardwoods, the Emerald Ash Borer killing the ashes, and kudzu dragging down forests, you can see how the ecology and ecosystem will change over 20, 30, 40 years. Protection from invasive species is badly underfunded in a world where transportation around the globe is increasing.

Great episode! Thanks!

Dave N writes:

Robert, "I think the old cliche of "err on the side of caution" probably captures what people understand the precautionary principle to be."

But this is not the PP at all. It does sound like the way you're interpreting it, along with quite a few others commenting here, but from the paper:

"The precautionary principle (PP) states that if an action or policy has a suspected risk of causing severe harm to the public domain (affecting general health or the environment globally), the action should not be taken in the absence of scientific near-certainty about its safety...placing it within the statistical and probabilistic structure of “ruin” problems, in which a system is at risk of total failure...Traditional cost-benefit analyses, which seek to quantitatively weigh outcomes to determine the best policy option, do not apply"

Hence my remark about public policy because the average individual is almost never in the position of making a decision that carries this kind of weight. The PP does not apply to bowel surgery or whether one should travel to the Middle East or swim with sharks etc.

IMO it does apply to the decision to create a nuclear stockpile (the paper talks about nuclear energy (i.e. power stations) and does leave the option open for inclusion even there depending on scope of implementation), a carbon tax or even the threat of an asteroid impact. In the last couple you have to include in the definition of 'policy' the failure to do something proactive to mitigate the risk.

My take is that it would not necessarily apply to research of GMO's per se, but more to the rapid, far reaching, dominant use we have seen in the last 2 decades. That is what is introducing the systemic risk, not simply the technique itself. Likewise it is not CO2 that is dangerous in and of itself. It is the emission of tens of gigatonnes per year that creates the systemic risk.

Finally, on nutrition, I believe we are rapidly approaching the point where general global health is being put at risk. Clearly the US, UK, Australia and some others are already there but it's not global yet which is where the room for disagreement comes into it.

Josh Marvel writes:

Dave,

I'm not sure that most people misunderstand the PP; rather, it's the definition of "ruin." For me, very small populations of humans left on the planet would be considered "ruin," but to others a change in our consumption-based economy would be considered "ruin." During the 2009 collapse, I'm sure Ben thought the US was facing "ruin," but I have a feeling many humans would have lived. My comments earlier might make it seem that I am paranoid over local events, but as our economies become more entangled and resources become stretched, doesn't it seem that small ripples in food supplies could change the paradigm of our global addiction to the status quo? After all, the big banks didn't learn to avoid risky bets, but rather to become more linked to each other to ensure the government will have to supply them funds. As resources become tighter and populations grow exponentially, small disruptions in the food supply could cause ripples that not only topple third-world governments but have greater ramifications throughout the world. The world economy seems like a very intricate engine with many moving parts, and as resources become stretched and demands grow, the oil in the machine might become more sensitive to smaller shifts. I know that's not the PP, and I'm not sure what that's called, but I'm sure some people would consider that "ruin."

Robert Swan writes:

Dave N, Thanks for quoting the paper -- lazy of me not to have read it myself.

My "err on the side of caution" paraphrase would appear to cover about half of Taleb's definition (which is similar to that in Wikipedia, now that I look). I missed out on the restriction that it should only apply to matters of severe risk to the global public domain. IOW, in matters where there is severe risk to the global public domain, decision makers should err on the side of caution. Anything wrong with that as an "in my own words" version of the Taleb's PP?

I utterly fail to understand why these matters are restricted to public domain and global scope. Doesn't the same approach apply to all "off a cliff" decisions? Is it not this exact precautionary thread that runs through (say) an air traffic controller's work (i.e. not global)? So what's wrong with in matters where there is severe risk, decision makers should err on the side of caution. Great. Can't disagree. Bit trite though. The hard part is knowing which matters these are. Taleb's definition squibs this, and "because I say so" is all he offered in the podcast.

Lastly, just to acknowledge your mention of nutrition, I agree that there are a number of areas of concern, but I don't see that there is a "cliff" to pull back from. In your earliest post you talk about supplements not being as effective as "real" foods, but I'm afraid I just don't see how any PP (of global scope) applies. In your latest post, I'm not sure where the "there" is that US, UK, Aus. all are. Is it a good place, or bad? I'm guessing bad (obesity, heart disease), but again, don't see where to apply the PP.

Justin Clark writes:

First, I'd like to comment on the commenters thus far. Many of you are very insightful and intelligent. All the comments I planned on making have been made, with more eloquence. I'm sure it has a bit to do with the filtering, but it's not common to see such intelligent discussion online.


The fallacy of the carpenter's fallacy: While it's true that the carpenter will not be able to speak to the probability of the roulette wheel better than the statistician after a particular number of observations, the carpenter is for a while the only person who knows how the game is supposed to work. Much the same with these single-play, ruinous events. Because we cannot observe the ruinous game of nuclear energy, GMOs, or (insert catastrophe du jour here), we cannot depend on the statistician or the risk manager. They simply do not speak the language of single-event systems. Instead, they will use their clever brains to divine a model that represents the behaviors of this system. To construct the model, they have to depend upon the carpenter to understand the system. What's worse, we don't have designers who can inform us of the details of this complex system. We have scientists who examine the system in, oftentimes, the same manner as the statistician. One thing is certain: you can count on the imagination to fabricate a domain in which absolute ruin is not only possible but inevitable. This illuminates a weakness in the precautionary principle: the human imagination. A system that depends on imagination and fear is an arbitrary and weak algorithm for action.

Not only can the imagination ascribe a non-zero probability to some catastrophic event, it can also assign any arbitrarily large cost to the ruinous event, thus making any improbable event significant in terms of its impact on the expected outcome. The PP suffers the same fundamental folly as all attempts to mathematically quantify human life and human existence: arbitrarily defined value, to the extent that one's philosophy, not a market, must price it.

Rick Camp writes:

I am not against GMOs, but there are alternatives to GMOs for no-till farming, most notably cover crops. There is also evidence that as we have bred for calories, we have reduced nutrition; so we reduce starvation but increase malnutrition. I think that any time you look at one variable (yield/calories) or a small subset of variables (yield, erosion, cost of inputs) in a complex system, there can be unintended consequences, whether it's agricultural yield or trying to increase home ownership.

See the research by Ross Welch of Cornell with the USDA. I think that he would be a very interesting guest on EconTalk. http://www.ars.usda.gov/Research/docs.htm?docid=9445

[broken html revised to be readable.--Econlib Ed.]

Kevin writes:

I came to say something, saw Justin P said it way better, and I add my virtual support to his comment and a little more.

The idea that GMO producers have to prove the negative, that GMOs will cause no harm, is not workable: proving a negative is not possible. The PP applied so broadly would kill any human innovation that someone somewhere could link to a possible worldwide calamity.

So, without a mechanism for why GMO is DIFFERENT from other current methods of agriculture, this just becomes very sophisticated GMO hysteria. The theory is still brilliant and relevant for many problems, but it takes more than "this new thing seems scary".

Ton writes:

I am a big fan of Mr. Taleb and his books are well worth reading. However, I feel that he steps too easily outside his field of expertise. It seems to me that his theories about risk and probability can be used in finance and economics, but whether they deserve a place in other fields seems very doubtful. I guess it is the common mistake people make: if you have a hammer, everything looks like a nail.

Greg Linster writes:

Ton: I don't think you understand what Taleb's field of expertise is, because it's certainly not finance and economics.

Few things are more ironic than watching a "carpenter" who thinks he understands the Carpenter fallacy publicly demonstrate that he doesn't.

I agree with others that there have been many thought-provoking comments thus far on both sides, but I think Rick Camp brought up a great point. Reducing starvation while increasing malnutrition is not a great human achievement.

We are often told that we need GMOs to feed the world, but that's just emotional rhetoric. As horrifying as it must be to see a child starving, our naive attempts at reducing starvation have created other nasty health problems too, e.g., rampant obesity and diabetes. Why are we not equally horrified with those problems?

Many of the proponents of modern food systems and medicine celebrate the improvements in life expectancy we have made, but make no mention about the quality of the years people are alive. As Taleb said in The Bed of Procrustes: "Modernity's double punishment is to make us both age prematurely and live longer."

Robert Swan writes:

Risking outing myself as a crashing bore here...

I was a bit embarrassed that I had posted here several times without actually reading the paper. I have made amends now, and made notes. Will condense them down to what I believe is the key point, and follow with a comment or two.

The main point: Ruin is a catastrophic failure state from which there is no return. If a choice is to be made where ruin, global in scope, is a possible result of deciding one way, the decision should go the other.

That's pretty close to my last approximation anyway, and my criticism stands that this is a truism. The real difficulty is in identifying which problems have the potential for ruin. Table 1, which is stated to "encapsulate the central idea", isn't very clear -- it might be missing a column -- but it's as near as the paper comes to listing the characteristics of PP vs. Non-PP problems. It doesn't do the trick for me though and, as many have said, it comes down to Taleb to rule on the various issues.

While the reasoning is given for not including nuclear energy as a PP candidate, I'm sure a suitably cataclysmic case could be concocted. We only have to go back a few weeks to find ruin through AI discussed on this podcast. Until there is an agreed mechanism to objectively identify whether or not the PP applies, the PP will be nothing more than a tool of activists.

Amusing was the tacking of "fallacy" on every argument against the paper's position. FWIW, I'll throw my hat in with the "paralysis fallacy", with a side bet on the "fallacy of misusing the naturalistic fallacy". Taleb is a virtuoso of the apt analogy. Black swans, turkeys at Christmas -- and I liked the collecting pennies (or $1000 notes) in front of the steamroller. It's quite a talent, but I think he's a little prone to the "Taleb is always right fallacy".
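The steamroller analogy is more than rhetoric, and it can be sketched numerically. In a minimal sketch (the per-play ruin probability below is a made-up illustrative number, not anything from Taleb's paper), a bet with a tiny chance of ruin on each play is almost certain to end in ruin if repeated long enough, because the survival probability (1 - p)^n decays toward zero no matter how good the expected payoff per play is:

```python
# Survival probability after n independent plays, each carrying a tiny ruin risk.
p_ruin = 1e-4  # hypothetical chance of total ruin on any single play

for n in (1_000, 10_000, 100_000):
    survival = (1 - p_ruin) ** n
    print(f"after {n:>7} plays, chance of having avoided ruin: {survival:.6f}")
```

With p = 1e-4 the survival chance is about 0.90 after a thousand plays, roughly 0.37 after ten thousand, and essentially zero after a hundred thousand; the expected gain per play never enters into it.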

I laughed out loud at the "skepticism about climate models should lead to more precautionary policies". Wondrous illogic!

I must point out that I'm neither pro- nor anti- GMO. It is possible to criticise Taleb's weak argument without wanting to hop into his chair as substitute ruler.

Lastly, I see the irony. In my first posting I said I wouldn't waste my time listening to the podcast again. I have now spent several hours on it. Particularly fun when my original statement was right -- there really isn't much in it.

emerich writes:

Hugely flawed arguments by the guest generated masses of great comments. The bottom line is that Taleb's position is the classic unfalsifiable proposition, hence unscientific. Anything can be declared fat-tailed because--well, because of the math! Or in mere words: I don't know what all the bad outcomes might be, or how really bad they might be, so the precautionary principle says don't do it!

He loses his way before he's through his first metaphor: we have no "evidence" that water on the floor is likely to be dangerous? No evidence except 9th-grade biology about pathogens, not to mention the evidence accumulated since Antonie van Leeuwenhoek discovered bacteria. He says we have no evidence about the consequences of climate change! How about 500 million years of geologic history?

You see?

crcarlin writes:

In case nobody else above brought this up (I didn't see it) Taleb's conversation was amazingly lacking in consideration of the benefits of taking the risks he's addressing.

It's as if in his mind we're just deciding arbitrarily to engage in risktaking.

If he spent more time considering the benefits that come from things like GMO, perhaps he'd realize that there is room for different risk appetites that seek to balance risk and reward, that the world isn't so black and white.

But then, the funniest part of the interview was where he criticized others for seeking out nails for their hammers... when he seems to be engaging in precisely that behavior.

All he has is the study of risks, so everything looks like a reward-free risk to avoid at all costs!

Ahmed writes:

It is remarkable how many of the commenters here are rehashing things that Taleb has responded to.

Ahmed writes:

Using the argument of "unfalsifiable" against Taleb in particular, and tail risk in general, is nonsense: we check people before boarding a plane without trying to falsify whether they are terrorists.
Why? Because we cannot afford to take the risk.

Simon writes:

I had the same observation as crcarlin. This was a very odd EconTalk episode, since the purpose of economics is to teach us about trade-offs, yet this episode was 99.9% about risk. Only in the 59th minute did Russ briefly bring in the concept of benefits, but even that discussion didn't go anywhere. I chuckled when Taleb talked about "global warming". There was no appreciation at all of the benefits of using fossil fuels, which surely must be weighed against any perceived risks. In that respect, I would recommend to those who haven't read it Alex Epstein's new book "The Moral Case for Fossil Fuels".

Stephen Reed writes:

My e-mail to Nassim:

"Hello Nassim,

I have enjoyed listening to your commentary about GMO crops and the possible risks of such. However, one thing that puzzles me is your relative lack of worry about seeds created using traditional techniques that are prevalent in our food supply but are not considered GMO varieties.

Such techniques include the following:

"mutation breeding, which is the process of exposing seeds to chemicals or radiation in order to generate mutants with desirable traits to be bred with other cultivars. Plants created using mutagenesis are sometimes called mutagenic plants or mutagenic seeds. From 1930–2014 more than 3200 mutagenic plant varietals have been released."

http://en.wikipedia.org/wiki/Mutation_breeding

Other techniques include plant tissue culture breeding:

"When distantly related species are crossed, plant breeders make use of a number of plant tissue culture techniques to produce progeny from otherwise fruitless mating. Interspecific and intergeneric hybrids are produced from a cross of related species or genera that do not normally sexually reproduce with each other. These crosses are referred to as Wide crosses. For example, the cereal triticale is a wheat and rye hybrid. The cells in the plants derived from the first generation created from the cross contained an uneven number of chromosomes and as a result was sterile. The cell division inhibitor colchicine was used to double the number of chromosomes in the cell and thus allow the production of a fertile line."

http://en.wikipedia.org/wiki/Plant_breeding

Both of these methods introduce changes to many unknown genes. Sometimes, as is the case of mutation breeding, a completely brand new gene that appears to have beneficial traits is formed.

Seeds created from these techniques are nowhere near as well studied for safety as GMOs. Seeds created from these techniques do not require FDA approval.

The safety and general wholesomeness of seeds created from GMO techniques have been supported by hundreds of studies. Every scientific and medical organization of prominence has attested to the general safety of GMO varieties:

http://2.bp.blogspot.com/-9gJwulXnO_o/Ue8hgkh4YkI/AAAAAAABCT8/1o4qQ40IjHQ/s1600/GMAuthoritiesnew1.jpg

http://gmopundit.blogspot.com/p/450-published-safety-assessments.html

Not only that, but GMO seeds introduce only a single, well-studied new gene to the seed.

The same can't be said of the hundreds or thousands of varieties of crops that have been introduced into our food supply using these other techniques.

Do you at least accept that there is far more risk to us with all these other varieties that have been introduced into our food supply that were created with other techniques?

If not, why not? What is your rational and scientific basis for being more worried about GMO varieties vs. varieties created with these other techniques?

Kind regards,
Stephen"

Myung S Kim writes:

This episode reminds me of a previous episode in 2013 ("Oster on Pregnancy, Causation and Expecting Better," Oct. 7, 2013). I had also written a comment at that time.

When viewed under the precautionary principle, the claim made during that episode, that because there is no hard evidence of harm from low amounts of alcohol in pregnancy it is not evidence-based for obstetricians to recommend against it, seems silly, thoughtless, and even reckless.

The host and guest make fun of how physicians are not able to discuss detailed study results with their patients. However, an obstetrician's role is to guide pregnant women in how to manage risk and strike a balance. When there is very clear evidence that high amounts of alcohol cause significant harm to the fetus, I would think it wise to advise pregnant women to stay away from alcohol altogether, even if there is no evidence that low amounts of alcohol are not harmful. The observational studies quoted by Ms. Oster should be distinguished from evidence that there is no harm.

Ron Crossland writes:

Listening to the podcast and reading 77 comments, many of which are as thought-provoking as the podcast, suggests that the PP is a useful tool AND that NNT didn't adequately address domain knowledge. Merely asserting that experts in risk statistics need little or no knowledge of the domain they are assessing is weak.

Moreover, when it comes to GMOs, the risk is not knowing what nature will do with any genetic modification, regardless of cause. The entire episode had a fairly strong anthropocentric tone: all the risks were risks to humans, without adequately considering that humans are just another factor in a large, very complex ecosystem.

With that in mind, the risk to humans due to climate change is likely a fatter tail than GMOs. While the failure of a single or even multiple crops during one season could be dramatic, our ability to correct for it, using other varietals and food substitution would likely prevent catastrophe as NNT describes it. CO2 changes are not easily reversible and some of the consequences would likely be great for some lifeforms and disastrous for others.

Ton writes:

Greg Linster:

I did not say that Taleb's field of expertise is economics or finance. I said that his "hammer" could be properly used in economics and finance, but not in the field of genetically modified crops.

Few things are more irritating than people commenting on something that they have not read carefully.

Hacky Dacky writes:

Russ, at about 44 minutes into the podcast, you said: "And I feel bad--we should have made it clear, because I forget that not everybody has been listening to EconTalk since 2006, but: Thin tails means that the probability of remote events is very, very, very vanishingly small. And fat tails means it's small, but not zero. Is that a good summary?"

Very true -- not everyone who listens is familiar with the econometric and statistical jargon. Although the term was used throughout the podcast, it took this listener 44 minutes to hear the explanation of what "fat tail" means.

But another term was never defined; viz., "left tail". At about 54 minutes Dr. Taleb says (my transcription), "The left tail reacts vastly more to the scale of the distribution (which for a Gaussian would be the sigma, the standard deviation, ...) so the left tail reacts more to the scale than to the mean of the distribution, and the more you go to the left tail the more it reacts to the scale and the less it reacts to the mean, you see?"

Russ, I wish you would have said, "Yes, but for the sake of our new listeners, would you please explain what you mean by 'left tail'? "

Thanks for your consideration of those of us who aren't economists or statisticians.
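For what it's worth, Taleb's left-tail claim can be checked numerically. In a rough sketch (the -6 threshold and the 0.1 perturbations are arbitrary illustrative choices, not numbers from the episode), the probability of a deep left-tail event under a Gaussian responds far more to a small increase in the scale (sigma) than to an equal shift of the mean toward the tail:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X < x) for a Gaussian with mean mu and standard deviation sigma."""
    return 0.5 * math.erfc((mu - x) / (sigma * math.sqrt(2)))

# Probability of landing below -6 (a deep left-tail event) in three cases:
base       = normal_cdf(-6.0, mu=0.0,  sigma=1.0)  # baseline standard normal
shift_mean = normal_cdf(-6.0, mu=-0.1, sigma=1.0)  # mean nudged 0.1 toward the tail
scale_up   = normal_cdf(-6.0, mu=0.0,  sigma=1.1)  # sigma raised by 0.1 (10%)

print(shift_mean / base)  # roughly a 2x increase in tail probability
print(scale_up / base)    # a far larger increase, around 25x
```

The deeper into the tail you look, the more lopsided this comparison becomes, which is the point of the quoted passage: remote-event probabilities are dominated by the scale of the distribution, not its mean.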

Comments for this podcast episode have been closed