Continuing Conversation... Nassim Nicholas Taleb on the Precautionary Principle and Genetically Modified Organisms

EconTalk Extra
by Amy Willis

Nassim Taleb returned to EconTalk this week to discuss his recent paper on the risks inherent in genetically modified organisms. Contra last week's guest, Greg Page, Taleb sees the potential for global ruin in GMOs.

Share your reactions to the following prompts with us, and let's continue the conversation.


Check Your Knowledge:

1. Because GMOs are a "fat-tail" phenomenon, the precautionary principle should be invoked, according to Taleb. What does he mean by this? How can we know that GMOs do not represent a "thin-tail" phenomenon, according to Taleb?

2. What is the difference between harm and ruin? How is this represented in the distribution of possible outcomes?

Go Deeper:

3. Taleb describes a hierarchy of experts capable of weighing in on GMOs. To what extent do the risk analysts at the top of his hierarchy face the same bias issues as the people at Monsanto?

4. Taleb invokes F.A. Hayek frequently during the conversation. Given his views on "the pretense of knowledge," would Hayek regard Taleb's invocation of the precautionary principle as "paranoia," or as justified? Explain.

5. In an article published this past spring, Scientific American claims that 1,424,000 life years have been lost since 2002 as a result of the opposition to genetically modified "golden rice." How does Taleb suggest responding to the problem of Vitamin A deficiency in this week's episode? How do you think Taleb would respond to the accusation made in this piece by Scientific American? Where does your sympathy lie? Explain.


COMMENTS (34 to date)
Brian writes:

I am skeptical of Taleb's position. One contradiction is that he claims one should assume a fat tail in the absence of deep knowledge, yet he dismisses warnings about AI out of hand.

I do not know Taleb's prescription, but I guess it entails force. So we stifle things we think might be ruinous, things politically unpopular, and some things merely expedient (people involved in coercing others have their own motivations). We don't squash some extremely dangerous things because their proponents are well connected or because they are pursued by those beyond our reach (a different set of thugs). Meanwhile, things we have no control over (HIV goes airborne, giant asteroid impact ...) continue. We simply have less wealth and knowledge with which to adapt.

William writes:

The funny thing is that if we didn't have GM food, we would not have 7 billion people on the planet, and that would probably slow down global warming.

Other than that, the fear of the unknown is totally ... un-human. We got where we are today because a very few people decided that they would not let the fear of progress stop them from introducing the steam engine, fossil fuels, electricity, nuclear power, the hadron collider, and stem cell research.

The future will not stop coming just because we want to prevent the fat-tail chance of extinction. Neanderthals ruled Europe for 300,000 years, and now they are gone. Homo sapiens has existed for a mere 100,000 years, and now we are on the verge of extinction due to our exponential growth over the past 200 years.

99.99% of the species that have ever existed on Earth are now extinct. Rather than fearing it, humanity should go out with a daring bang, and if GMOs or AI or an asteroid hitting the Earth is it, so be it. It is better than living for 1,000 years in medieval dark ages.

David L. Kendall writes:

One wonders how Taleb knows much of anything about the tails of some distribution concerning GMOs. Just for openers, for what variable is the presumed fat-tail distribution that he claims to know is fat-tailed?

The ultimate fat-tail distribution may be a continuous uniform distribution. Such a distribution may have a stable mean and variance, but really represents substantial risk and ignorance.

Is the distribution of which Taleb speaks generated by a stochastic process? If it is not, then one wonders why the notion of a probability distribution comes into play at all.

I find it difficult to evaluate Taleb's claims, based only on the EconTalk interview. Just what is the so-called "precautionary principle"? I can't recall a succinct statement of it from the interview, but perhaps I was not paying close enough attention.

We may also have a bit of a chicken-and-egg problem here (something Russ knows much about!). Risk of bad things happening often occurs because we have incomplete knowledge. But we gain knowledge by experimentation in science. How much will we learn about the advisability of GMOs without researching them?

Derek writes:

Taleb explicitly claims that the burden of proof is on those who claim the safety of GMOs, but he should know this position is unprovable--it is only falsifiable (by showing proof of harm). And likewise, his own position is unfalsifiable, yet could easily be proven. It would only take a single example of harm to falsify the pro-GMO position; meanwhile, he can perpetually maintain his position without evidence by claiming that we just haven't yet discovered the harm.

Taleb's inconsistency in applying the precautionary principle (PP) to nuclear ruin reveals the biases in his assumptions. If all nuclear warheads were launched simultaneously, or if all nuclear power plants went into meltdown, the ruinous effects would not remain local. So by claiming the PP does not apply to nuclear ruin, he must be assuming the probability of such an event is so far left-tailed that we don't need to consider it. Yet he also claims that the left-tail probability of GMO ruin is unpredictable, so how can he assume the relative probabilities of nuclear and GMO ruin? He must be assuming that GMO ruin is more likely. But based on what? He admits that scientific evidence need not be discussed--that it should even be avoided. He has built his own domain-specific ignorance into the necessary conditions of his argument. If he did have knowledge of GMO technology, I think he would see that a Frankenfood crop completely destroying Earth's ecology is just as unlikely as a terrorist organization (or Super A.I.) gaining access to all available nuclear weapons and destroying the world.

His description of the hierarchy of scientific authority at the beginning of the interview (biologists < statisticians < risk managers/Taleb) was almost cringeworthy in its arrogance.

[broken html fixed--Econlib Ed.]

Hudson Cashdan writes:

I appreciate Taleb’s general irreverence while challenging prevailing views in a logical manner but I remain unconvinced on this matter due to his failure to sufficiently address certain questions:

Is his issue with GMO as such, or with GMO when done by one or two large companies in a top-down manner?

Why does he assume that GMO will be done by large players- why won’t there be distributed tinkering with seeds from small biotechs challenging Syngenta and Monsanto, etc. for market share?

Why should the traits of a GMO seed manifest identically across varying ecosystems? Why assume that Golden Rice will dominate Brazil just because it may have dominated India? And if there is a major problem with North American corn derived from a certain GMO seed, why should that affect European corn (or wheat, soy)?

What might the fat tail look like: is it a total eradication of North American corn or would it be the eradication of all crops everywhere leading to the complete inability of the human race to meet nutritional needs? The former seems like something that can be adapted to, the latter just preposterously outlandish. Likewise, IF warming caused NYC to be like Venice and Siberia to be as fertile as Iowa, isn’t that the definition of anti-fragile? After all, there may be no net loss and humans would adapt to the change (Snowbirds might winter in North Carolina).

Margaret Aten writes:

I wonder if antibiotic use should not be added by Taleb et al., along with GMOs and climate change, to the list of fat-tailed risks. Could not a highly resistant strain like MRSA gain precedence and eventually face no defenders in the body? Instead we should perhaps let people die while fighting various infectious organisms, and the survivors will slowly develop the ability to resist the now highly virulent strains, which could become more common with the passage of time. Even now, in Norway, physicians do not use antibiotics for common ear infections in children, on the theory that the children's bodies will marshal the needed defenses on their own and that the common organisms killed by the common antibiotics will slow down in their resistance development. I am not suggesting this for myself, of course.

keatssycamore writes:

"Why does he assume that GMO will be done by large players- why won’t there be distributed tinkering with seeds from small biotechs challenging Syngenta and Monsanto, etc. for market share?"

In a word, money. In a few more, there's enough money to buy politicians (and continuing patent protection via those bought off legislators), to fight smaller competitors in courts with the intent to break them, and, if all that doesn't work, they can just buy them out and close them down (as they've already done for many small to mid-sized companies and their non-GMO seed lines).

Hannah W writes:

Totally don't buy this line of thinking! What about things like yoga or turmeric or coconut oil? Today they are considered good for us, using the precautionary principle we would have said there is no evidence so they are bad. No one could live like this. We would not cross a road till we had evidence it was safe. Complete malarkey as far as I am concerned.

Joe B writes:

There seemed to be a lack of acknowledgement of the massive potential gains from GMOs. If it were so easy to feed the world without them, what would be the point? I feel like Norman Borlaug would take issue with Mr. Taleb. Ruin has been the normal state of the world, and GMOs can help undo that fact.

Hudson Cashdan writes:

That is a political/economic issue separate from the argument being made by Taleb...or perhaps it is part of the argument and the entire thesis is invalidated if this concern is mitigated.

I haven't looked at big pharma co's in a while but many are forced into acquisitions due to lack of a pipeline...and many targets get away from them. Also, much of the value in big pharma is in navigating the costly and lengthy approval process as well as distribution- wouldn't the best way to mitigate this power be to address the reasons why the industry tends towards consolidation? Why should GMO be much different than pharma?

dan writes:

The concept seems sound, but are GMOs truly a "fat-tail" phenomenon? How about global warming?

It seems not. People adapt, and the planet is fine with or without us - just look at the lush natural park Chernobyl has become.

Thor Sigurdsson writes:

The most important take-away here is the distinction between systems whose maximum downsides are catastrophic vs non-catastrophic.

Whether or not you agree with Taleb's classification of a particular domain (nuclear energy, GMOs, global warming) into one of those two categories is less important.

His general methodology for how to treat domains within those two categories remains sound, as is his criticism of tail-risk blindness among expert operators in negative fat-tailed domains.

Neil writes:

Can anyone help me understand the following assertion by Taleb in the PP paper:

"When the impact of harm extends to all future times, i.e. forever, then the harm is infinite. When the harm is infinite, the product of any non-zero probability and the harm is also infinite, and it cannot be balanced against any potential gains, which are necessarily finite."

I fail to understand why the impact of the harms is summed across all future times but the impact of the benefits is not.
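The structure of the quoted claim, and the discount-factor objection that later comments raise against it, can be sketched with hypothetical numbers (this is an editorial illustration, not from the paper):

```python
# Sketch of the quoted claim: any non-zero probability times an infinite
# (perpetual, undiscounted) harm is infinite, so no finite gain can offset it.
p = 1e-9                          # some non-zero probability of ruin (made up)
undiscounted_harm = float("inf")  # harm summed over all future times
print(p * undiscounted_harm)      # inf

# With a discount factor d < 1, the same perpetual stream of harm h per
# period has a FINITE present value: sum over t of d**t * h = h / (1 - d),
# so it can once again be weighed against finite benefits.
h, d = 1.0, 0.97
present_value = h / (1 - d)
print(present_value)
```

The asymmetry Neil points to disappears only if harms and benefits are treated the same way: either both summed undiscounted over all time, or both discounted.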

Yunfeng writes:

I have the same issue with Taleb's argumentation as many of the posts above. Although his work in financial markets has been relevant, I can't help but feel that this effort is quite misguided and rests on a lot of subjective opinions.

- An over-belief in statistics is also harmful. Statisticians help subject experts make the right interpretation of data, but it is foolhardy to believe that they would be much use if they only had the data to look at. Nobody would be there to make the relevant experimental design and interpretation of the data.

- Arbitrary or subjective interpretations. It is hard for me to see how nuclear risk is not as dangerous as GMO risk. You could say that there has been long use of nuclear energy to infer from, but then you wouldn't be comparing apples to apples, since you don't have as much historical data on GMOs. I'm pretty sure that nuclear energy would also have been classified as fat-tailed if you just went 50 years back. In the same manner ...

- Fear mongering vs. choice. It is easy to drum up the risk evaluation, but it can be as dangerous when you don't evaluate the risk or cost of your other options. If the alternative is to do away with all genetic modification and set the clock back 150 years, we would be back to subsistence living, or unable to feed even larger parts of the 7 billion people we are today. The question then becomes: would the alternative be better, or have less risk?

- Disruptive innovations. GMO technology is certainly disruptive in how it gives mankind a greater ability to shape nature. It is not surprising that it is unsettling or even upsetting to some. This unease, or sometimes perhaps even fear, creates a lot of misguided rhetoric in the debate. If the alternative is to continue as now, using ever more fertilisers, ground water, and other non-replenishable resources to feed our world, I think GMO technology could be seen as a new technology that can potentially save us from ourselves.

Andrew McDowell writes:

Taleb makes a great deal of use of the notion of ruin, and of the notion that ruin is an infinite loss. It has long been noticed that it is difficult to make reliable quantitative decisions when infinity is allowed to enter into the calculations, and people have often reframed their calculations to remove it. One option would be to introduce a discount factor, if we did not feel that our 10,000-great-granddaughter was as valuable to us as our daughter.

Another would be to minimize the risk of extinction of Homo sapiens while discarding Taleb's apparent assumption that the only risks worth thinking about are the risks identified by Taleb and being taken in the present moment. If we expand the set of present risks, we might consider the risk that a lack of resources due to hidebound, inefficient agriculture with a higher ecological impact triggers an ecological collapse, or a war that leads to extinction. If we consider future risks, we may suppose that being richer today due to the use of GMOs allows us to build up resources that counter those risks or allow us not to take them, or allows us to put sufficient resources into science today to learn to identify them.

Even taking Taleb's model to its logical conclusion, for there to be an infinite loss due to taking a risk today there must be an infinite future existence if we avoid taking that risk. But under Taleb's model, unless the world suddenly gets more benign, those risks will still be out there, and in an infinite future sooner or later somebody will make a mistake and take a risk that kills the species, so an infinite future for humanity is not in fact possible, even if we make Taleb God-Emperor of the human race today and do everything he says.

LowcountryJoe writes:

I was disappointed in this podcast. Some issues that I took with Taleb in this one:

1) Why is it, again, that the burden of proof is on the GMO supporters?

2) I find his "managed risk" comment, used to sweep away any potential danger from nuclear power, a rather convenient cop-out on precautionary principle grounds. Personally, I favor generating energy through nuclear reaction, but I don't see how he can dismiss the risk there and still be consistent.

3) Human beings are a part of nature. By genetically modifying foods, humans are adapting to their environment. Are we not a part of the same natural ecosystem that Taleb does not want tinkered with?

4) Medical treatments and medications are themselves a form of affecting the ecosystem: we are improving our mortality. Would Taleb argue that this is a bad thing--that we are engineering and modifying organisms (ourselves) and the resulting outcomes could be catastrophic?

Atlien writes:

[Comment removed pending confirmation of email address. Email the to request restoring this comment. A valid email address is required to post comments on EconLog and EconTalk.--Econlib Ed.]

Hugh D'Andrade writes:

Taleb knows a great deal about the consequences of fat tails. But that does not mean that he is competent to distinguish a fat tail from a thin tail in every domain of human activity. See Matt Ridley, who does know about the GMO domain.

Donald Smith writes:

I argue for the judicious use of biotechnology, and whenever an opponent cites eating a tomato containing a fish gene, I suspect they suffer from confirmation bias.

Harvey Cody writes:

The interview became confused as to the definition of "Fat Tail" at ~40min. Russ summarized a “Thin Tail” as follows: “Thin tails means that the probability of remote events is very, very, very vanishingly small. And fat tails means it's small, but not zero.” When asked if this was a good summary of the distinction, Taleb unfortunately said “exactly,” even though that summary missed the points Taleb had raised before that summary, and, after saying “exactly,” Taleb proceeded to contradict the summary.

A “tail” refers to the right or left side of a bell curve which plots events or instances. For example, consider a chart of the height of adult humans with the y axis being numbers of humans of a specific height and the x axis being height. The middle of the curve will reflect the incidence of people of middling heights. As the curve moves to the left or right of the center, the gap between the zero line and the curve thins, eventually to the point that it becomes a “thin tail” of the curve. The probability of a randomly selected human being of a height at the right or left edge of the curve is “very, very, very vanishingly small.”

As Taleb uses the term, however, that quantitative “thinness” is not particularly important. In addition to the quantitative thinness, a curve of human heights (unlike the curves of events which concern Taleb) gets “thin” in a qualitative sense as well. The qualities of a human being either less than 2’ or greater than 9’ tall might be somewhat different from those of a person of average height, but such quality differences, if any, could not lead to devastating consequences to anything important. From Taleb’s perspective, when you are seeking a general understanding of what is going on with respect to human height, whether 2’ or 9’ humans exist can be safely ignored, because the general understanding of the situation will not change whether they exist or not. Because both the quantitative and qualitative aspects of what is possible at the tail ends of the chart can be safely ignored, a curve of human height has a “Thin Tail.”

A “Fat Tail,” as Taleb uses the term, refers to a tail of a curve which appears, from a quantitative/probabilistic/statistical standpoint, to be so thin that it can be safely ignored, but which, in reality, contains data points/possibilities so enormous and negative in quality that they cannot be safely ignored. Generally, statistics is silent with respect to the qualitative nature of the points in the tail, because they are “statistically insignificant.” Too many people confuse “statistically insignificant” with “insignificant.” Taleb, like no one else, correctly points out that doing so is a huge mistake. For example, if one or more of the points in what appears to be a thin tail represents a chance that all life on Earth will be destroyed should that highly unlikely event happen, it is very significant, even if it is statistically insignificant. Another example: there is a qualitative difference between the one hole in the chamber that contains the bullet and the other 5, 5,000, or 5,000,000 holes. A statistician would say that for a small amount of money you should be willing to spin the 5,000,001-holed chamber and pull the trigger; Taleb is saying, “Don’t play Russian Roulette.” The statistician would tell you that his answer is based on science. Taleb would disagree. In both examples, what looks like a thin tail based on statistics alone is in reality the opposite: a “Fat Tail.”
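The quantitative side of the thin/fat contrast above can be sketched numerically. The following is an editorial illustration; the Student-t with 3 degrees of freedom is picked only as a stock example of a fat-tailed curve, and its closed-form tail is used so no external libraries are needed:

```python
import math

def normal_tail(x):
    """P(X > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def student_t3_tail(x):
    """P(X > x) for a Student-t with 3 degrees of freedom (closed form)."""
    return 0.5 - (math.atan(x / math.sqrt(3)) + math.sqrt(3) * x / (x**2 + 3)) / math.pi

# Near the centre the two curves look alike; far out they do not.
for x in (2.0, 4.0, 8.0):
    print(x, normal_tail(x), student_t3_tail(x))
```

At x = 8 the normal tail is of order 10^-16 while the t(3) tail is of order 10^-3: the thin tail really is "very, very, very vanishingly small," while the fat tail remains large enough that, if the event there is ruinous, it cannot be waved away.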

Harvey Cody writes:

While Taleb’s approach could be great for eliminating candidates for the label “potentially catastrophic,” as evidenced by the comments about GMOs, identifying what is potentially catastrophic will in many, perhaps most, situations be less than clear cut. From what he said, presumably the 2007 financial collapse was a catastrophe. Perhaps so, but it was a far cry from world annihilation. In fact, a collapse of the US economy would be a cause for joy to many people around the world. An objective standard for what is potentially catastrophic is likely beyond human ken for all practical purposes.

Taleb glosses over the above problem by invoking the “precautionary principle.” Dreaming up theories as to how something might be catastrophic, without concern about its likelihood [something required by Taleb’s premise], is super easy. Once someone dreams up such a theory about a proposed something, then, according to the precautionary principle, the proponents of the something must prove a negative, i.e., that it cannot cause a catastrophe.

Proving such a negative as a precondition to the introduction of products could make the cost of the product so great (in terms of money and time: must one wait 10, 100, or 1,000 years to see if the product is really safe? Is even that long enough?) that many products which would ultimately have proved to be completely beneficial will not be introduced. This would be a huge loss to mankind. Moreover, Taleb is unclear about what the standard of proof is (beyond a reasonable doubt? more likely than not?). Given, however, that little or no data is available on the subject of future consequences according to his theory (you cannot gather data which will be generated only in the future), the standard may not be important, because, to be precautious, no amount of uncertainty can be permitted, and few things can be proved with certainty.

Taleb appears to be focusing only on the unknowable negative consequences of potentially catastrophic innovations. Theoretically, the knowable and unknowable benefits to be gained from the catastrophic innovation between its introduction and its catastrophic manifestation could be greater than the detriments of its catastrophe. If the knowable and unknowable benefits are greater than the knowable and unknowable detriments, it would be a mistake not to take the action, regardless of how large the detriments are. To illustrate the unreasonableness of the precautionary rule taken to the extreme Taleb proposes, consider drugs which will cause a person to die sooner, but which will make the time between now and death much more pleasant than would be the case without the drug. In many of those circumstances it is best to take the drug. Not crediting an eventual “catastrophe” with its benefits between now and the advent of the “catastrophe” is not rational.

An additional complication to the above calculus: when making such a calculation, how would one factor in the combined probability of worldwide nuclear war, a solar flare toasting Earth, or a solar EMP wiping out all electronics before the nascent catastrophe strikes? It makes no sense to eschew the benefits of a "potentially catastrophic" product if the catastrophe would be redundant.

Defining the value of a benefit or detriment (or in some cases its sign) of the consequences of an action is ultimately a subjective value judgment. For example, one person may believe the value of punishing Americans for all the harm America has done over its history and will continue to do if not destroyed is valuable, and improving the lot of Americans today and in the future would be unjust. Others, including myself, have a different take on the subject. Short of a universally accepted good (perhaps, the avoidance of a destruction of all life on the planet), Taleb will have no way to assign values in his models other than subjectively. Even the sign of each value could change from person to person.

The implementation of the precautionary principle for big, important issues would require government action. To apply it to matters of global scale would require a global government. In all instances, effective enforcement would be essential. Expecting top-down management of big items to be done efficiently, effectively, and without favoritism is inconceivable.

Despite my criticisms of prescriptions Taleb advances in the interview, the underlying insights and concepts for which prescriptions are needed are great extensions of Taleb’s previous groundbreaking work and are fantastic. His approach appears to be the worst approach except for all the others that have been tried.

JCS writes:

[Comment removed for supplying false email address. Email the to request restoring your comment privileges. A valid email address is required to post comments on EconLog and EconTalk.--Econlib Ed.]

Gary writes:

As I listened to the Taleb podcast and read the article he coauthored several questions arose. Hopefully, someone can help provide some answers. They are:

When Mr. Taleb talks about a probability distribution, to what does it apply? I ask this because the Taleb et al. article states, “The more uncertain or skeptical one is of ‘scientific’ models and projections, the higher the risk of ruin… (p. 8)” He shows this as a flattening out of the probability distribution with the tails getting fatter. Given my perception of the probability distribution, the likelihood of ruin would not change just because I know less about the models or projections. The probability of an event causing ruin would remain the same. Perhaps the authors are assuming more chances will be taken if there is skepticism.

Is the risk of ruin certain? Mr. Taleb asserted that the likelihood of ruin approaches one as more attempts are made. Doesn’t this fly in the face of the concept of the precautionary principle in that it is based on the idea that more information is needed before further work is allowed? If ruin is certain, what is the need for additional information?

Does implementation of the precautionary principle require stringent, non-market based control? That is to say, prohibition with the threat of government force. Taking the climate change example, I have faith that carbon emission taxes will work in spurring a hunt for market solutions, but I’m equally certain they will not completely stop CO2 emissions. Does this leave the possibility of ruin? Might dogged attempts to implement the precautionary principle push a country toward Hayek’s “Road to Serfdom”?

Is the level of harm/cost from implementing the precautionary principle of any concern? The Taleb et al. paper indicates that the cost of ruin is infinite, suggesting that any harm/cost from implementing the precautionary principle is immaterial. With a serious scaling back of CO2 emissions I can envision a significant burden for a large portion of the world’s population, and for the world’s poorest residents the burden might have life threatening consequences. They desperately need energy resources to move away from near subsistence levels of income.
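On Gary's first question (why less confidence in the model should fatten the tails), one standard reading is Bayesian: if a parameter such as the scale of the distribution is itself uncertain, the marginal distribution that mixes over that uncertainty has fatter tails than a single model with the same overall variance. A minimal editorial sketch, with made-up mixture weights and scales:

```python
import math

def normal_tail(x, sigma=1.0):
    """P(X > x) for a mean-zero normal with scale sigma."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2)))

# Suppose we are unsure of the true scale: sigma is 0.5 or 1.5 with equal
# probability. The mixture's variance is 0.5*0.25 + 0.5*2.25 = 1.25.
mix_var = 0.5 * 0.5**2 + 0.5 * 1.5**2

single  = normal_tail(4.0, math.sqrt(mix_var))                     # one normal, matched variance
mixture = 0.5 * normal_tail(4.0, 0.5) + 0.5 * normal_tail(4.0, 1.5)  # uncertainty averaged in

print(single, mixture)  # the mixture puts roughly 10x more mass beyond x = 4
```

On this reading, the event itself is unchanged, but the probability we should rationally assign to extreme outcomes rises once model uncertainty is folded in, which is one way to make sense of the paper's flattening picture.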

Harvey Cody writes:

Gary: Re: "Is the risk of ruin certain?"

Taking the Russian Roulette case as an example, it is true that the more times one pulls the trigger, the closer the likelihood of ruin approaches one.

The precautionary principle necessarily assumes doubt as to whether something is eventually ruinous - otherwise, if we already know for certain the something is ruinous, the something should be banned without further consideration.

Therefore, Taleb must think only things which are potentially ruinous should be subjected to the precautionary principle.

Surely, though, there are cases in which the cost of avoiding an unknown potential ruin is so low one shouldn't care whether the risk is certain or not, e.g., go ahead and wipe up the spilled milk and throw the paper towel away.

Andrew Fischer Lees writes:

What catastrophe are you imagining, Taleb? I really don't think that we COULD invent tomatoes that could cause our extinction. I mean, we could always burn them if they get too out of hand - I'm pretty sure we couldn't design a combustion-proof plant.

Don't just refuse to argue - that's childish. At least give a plausible example of what a catastrophe might look like, so that we can discuss whether it is a valid concern.

Gary writes:

Thanks Harvey for your comment which included, "Therefore, Taleb must think only things which are potentially ruinous should be subjected to the precautionary principle." This is what I thought, although his paper seems to say otherwise.

A problem I have with applying the precautionary principle is the implementation cost. Take GMOs and climate change as examples. Banning GMOs could be life threatening to many people and certainly banning fossil fuels will be life threatening to many people. So how do we determine which group to let die--many people now or potentially everyone at some future, unspecified date? Now throw in the possibility that future deaths might be less than current deaths (i.e., GMOs and climate change were viewed as potentially ruinous but turned out not to be). A biologist friend of mine guessed that more than a billion people would be at risk today if GMOs were banned. If that were the case, how would it impact the decision to implement the precautionary principle? My friend went on to ask what if we were guaranteed to be in the billion? See how difficult the problem can become? Russ Roberts has said in the past that a guiding principle of his is to "First, do no harm." Is this possible when the precautionary principle is involved?

By the way, here are two recent articles that relate to making choices in the GMO and climate change realms.

USA Today Millions of genetically modified mosquitoes could be released

Washington Post On Obama’s India visit, climate-change deal unlikely as Modi boosts coal production

Thomas A. Coss writes:

The Golden Rice solution to Vitamin A deficiency is a great example of the hubris of some who, supposedly "knowing better," prefer the complicated over the simple. Taleb addresses the vitamin A deficiency directly: offer free vitamin A pills rather than trying to "sneak" it into the native food chain through clandestine means which, by the way, don't work. It turns out people in some cultures don't like the look of the rice, no matter how well it might be spun as "golden."

Derek writes:


Dispensing vitamin A pills is not as "simple" as you seem to think. For starters, the people affected by this deficiency are often in sparsely populated rural areas, which makes distribution very difficult and expensive, especially since these populations have no means of producing the vitamins themselves and would have to be constantly resupplied. This type of solution is a band-aid that doesn't address the causes of vitamin A deficiency.

If Golden Rice were distributed to these populations, then the local people would be producing their own vitamin A and would not require constant intervention by aid workers. Unlike other GMOs, Golden Rice is not a transgenic mutation (i.e., a mixing of genes from different (trans) species). The gene that produces the beta-carotene comes from the same rice plant, where it is normally expressed only in the roots and stems. The modification causes the gene to also be expressed in the grain of the rice, which contains no other genes or proteins not already found in the plant.

The simplicity of the Golden Rice solution is that it's self-perpetuating. The rice saved from one year's crop is replanted the next year, and the farmers continue to produce their own beta-carotene.

Jason writes:

I find the potential large danger of GMOs to be limited because nature has not produced ruinous dangers. There are an estimated 500 gigatons of bacteria, numbering around 4-6x10^30 cells. These bacteria breed very fast, mutate very fast, and share genetic material extremely easily. This is not counting the large amount of genetic transfer that comes from viruses as well. These bacteria have been doing this for hundreds of millions of years. These are huge numbers, and if there were some combination of genes that could lead to ruin, you would think that this process could find it. Sure, once in a while we get a Spanish flu or something similar, but this does not seem to fit the definition of ruin as presented.
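The fat-tail vs. thin-tail distinction underlying this debate can be made concrete with a rough simulation (my own illustration, not from the episode): compare how often a thin-tailed distribution (normal) and a fat-tailed one (Pareto, with an arbitrarily chosen tail exponent of 1.5) produce extreme outcomes.

```python
import random

random.seed(42)
N = 100_000

# Thin-tailed: standard normal draws.
# Fat-tailed: Pareto draws; random.paretovariate(alpha) returns values >= 1
# with tail probability P(X > x) = x**(-alpha).
normal_draws = [random.gauss(0, 1) for _ in range(N)]
pareto_draws = [random.paretovariate(1.5) for _ in range(N)]

def exceedance(draws, threshold):
    """Fraction of draws that land above the threshold."""
    return sum(d > threshold for d in draws) / len(draws)

# Extreme outcomes effectively vanish for the normal but persist for the Pareto.
for t in (5, 10, 50):
    print(f"P(X > {t}): normal ~ {exceedance(normal_draws, t):.5f}, "
          f"pareto ~ {exceedance(pareto_draws, t):.5f}")
```

In 100,000 normal draws, values above 10 essentially never occur, while the Pareto sample still produces them at a rate near its theoretical 10^-1.5 (about 3%). This is the sense in which sample history from a fat-tailed process understates the chance of ruin.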

Joe Blow writes:

[Comment removed for supplying false email address. A valid email address is required to post comments on EconLog and EconTalk.--Econlib Ed.]

Andy Kaufman writes:

So, I'm thinking the Precautionary Principle might come into play with the UK's GMO equivalent for humans? [BBC link:] MPs say Yes to Three Person Babies

PaulSpring writes:

I have to commend Russ for having a guest whose conclusions run opposite to his. I am also surprised by the number of commenters here who seem to have missed the point. On one hand Taleb relies on mathematics to support his contentions, but on the other, common sense does a pretty good job on its own.

Russ seems to have an affinity and respect for Taleb. I wonder how that squares with his skeptical bias in the Christy/Emanuel climate change debate. The global warming denier side clearly places no value in the "fat tail" philosophy of moderate risk with extreme catastrophic consequences. Clearly, laissez-faire approaches are impotent in dealing with fat-tail situations: self-interest is deaf to the signals of a low-probability situation, regardless of consequences.

While Taleb dismisses AI as a fat-tail problem, that is more due to his ignorance than any fault with his theories. If AI is given control of our electrical grid, transportation systems, surgical procedures, food supply, etc, one can see that the consequences could be catastrophic.

odinbearded writes:

[Comment removed pending confirmation of email address and for policy violations. Please read our Comment Policies. A valid email address is required to post comments on EconLog and EconTalk.--Econlib Ed.]

Dallas Weaver Ph.D. writes:

Taleb's concerns regarding GMOs gave me pause to think. It took a while to conclude that man can't make a super-organism that will take over the world, like a virus that kills everything. We don't have a super fat tail on GMOs.

Considering that every living thing has viruses that will kill it, including all bacteria, that there are 10^23 viruses in the oceans alone, and that they outweigh all the elephants on this planet--even viruses have viruses--we can make some positive statements about what probably can't be done.

Nature, with those 10^23 experiments every few hours, gets to explore a lot of possible DNA combination space over a few billion years. During those billions of years, we have no indication of a drastic biological catastrophe caused by a very nasty piece of DNA. Nature did cause the oxygen catastrophe by dumping toxic oxygen into the atmosphere, but that made advanced life forms possible. If you are an obligate anaerobe, you are now restricted to swamps, bogs, and sewage treatment plants.

So I conclude that the probability of man making a genetic modification with truly catastrophic consequences is trivial relative to the fat tail from other factors, like a few-kilometer rock from space, a nearby supernova, a political idiot with his finger on the nuclear button, or political ideologies that crash the system for the self-interest of the leaders. Given the egos of the world's political leadership, it seems the last one may be the most significant.

Comments for this podcast episode have been closed
Return to top