Russ Roberts

Taleb on Skin in the Game

EconTalk Episode with Nassim Taleb
Hosted by Russ Roberts

Nassim Taleb of NYU-Poly talks with EconTalk host Russ Roberts about his recent paper (with Constantine Sandis) on the morality and effectiveness of "skin in the game." When decision makers have skin in the game--when they share in the costs and benefits of their decisions that might affect others--they are more likely to make prudent decisions than in cases where decision-makers can impose costs on others. Taleb sees skin in the game as not just a useful policy concept but a moral imperative. The conversation closes with some observations on the power of expected value for evaluating predictions along with Taleb's thoughts on economists who rarely have skin in the game when they make forecasts or take policy positions.



Highlights

0:33 Intro. [Recording date: August 1, 2013.] Russ: Our topic for today is skin in the game, and as a jumping-off point, we are going to use a recent paper Nassim has written with Constantine Sandis, titled "The Skin in the Game Heuristic for Protection Against Tail Events." Nassim, welcome back to EconTalk. Guest: Thanks for inviting me; this is becoming a habit. I am never invited by the same person twice. Russ: I'm afraid that says as much about me as it does about you, Nassim. But here we go. Let's start with this expression, which is familiar to some in America, but maybe not to non-native English speakers and even to some native English speakers. What do you mean by "skin in the game"? Guest: Meaning I cannot take risks that might harm others without being subjected to them myself. That's what I mean by skin in the game. In other words, you cannot possibly make a bet entailing a random variable that can harm others without yourself being somewhat harmed. The harm doesn't need to be equal; you need to incur some personal harm, enough to be a deterrent. Now, this is quite potent because it is probably the earliest idea that ever decimated a society. It's the first idea probably; it's definitely behind the first document--I mean the oldest document we have, Hammurabi's Code. So it is very potent. And in it, as I'll explain in a few minutes, there are so many things--I mean it allowed society to function for several thousand years. Particularly--maybe not between societies, but within societies. But unfortunately it's gone today from consciousness, because of modernity and all these fancy ideas. Russ: How does it show up in Hammurabi's Code? Guest: Hammurabi's is probably not the oldest mention, but probably the oldest one extant. It says the following: The architect, or the engineer--I can't translate the Babylonian--if he builds a house and the house collapses and kills the owner of the house, the architect shall be put to death.
And it also continues that if it kills the firstborn son of the owner, the firstborn son of the architect shall be put to death. Now the point is that Babylonia didn't have much against architects. They liked architects. They had, of course, their hanging gardens. But the thing is, they wanted it as a deterrent. And behind it lies this very powerful idea: that the architect can hide risks as delayed blowups, because the cheapest place to hide risk is by cutting corners in the foundation. And of course he is not going to be there when the thing collapses. The Babylonians detected it; they understood that no inspector, no regulation, will ever outperform that simple rule. And behind that rule, of course, Constantine Sandis and I went back and did some cultural archeology, and we found that that rule was directly behind the next [?], namely, you know, the eye for eye, that of course the Semites spread: 'an eye for an eye, a tooth for a tooth.' In Exodus, we see it. And of course we have 'love your neighbor as yourself,' Leviticus, and of course it kept going. The Silver Rule, by Hillel--Rabbi Hillel was asked to explain the Torah standing on one leg; he said: Don't do to others what you would not have done unto you; and the rest is commentary. And Isocrates, at about the same time, came up with his own version of the Golden Rule, but with a difference--and this is almost never mentioned: he applied it to states. He said: a strong state should not do to a weaker state what it wouldn't want a stronger state to do to it. Russ: Who applied it to states? Guest: Isocrates. Russ: Isocrates. Yeah, yeah. I say 'yeah, yeah' like I know Isocrates. But I remember from your paper. Keep going. Guest: Isocrates--we are talking 5th, 6th century already.
And of course we have the Golden Rule that we see in the Old Testament, which is a positive rule--up till then it was a negative rule: 'Don't do unto others what you don't want them to do to you.' And then the Golden Rule: 'Do to others what you want them to do to you,' and so on. Up to then we had the Silver Rule. What you see behind this is the foundation of moral philosophy, as a foundation of ethics and a foundation of civil society. But in it we saw something much more potent--we saw the foundation of risk management. This is why you cannot disentangle risk management from ethics; you cannot disentangle economics from moral philosophy. Some people try it; they can't. So, this is our paper; and of course we put in some probability theory, and I'll talk about it I'm sure a little bit later, about how tail risks are impossible to figure out unless you are the one carrying them.
6:43 Russ: I want to go back to Hammurabi and the builder, the architect. And before we do, I just want to mention, because it's a pet peeve of mine, that in the Book of Exodus, when it says 'An eye for an eye and a tooth for a tooth,' in its application in Jewish law as elaborated in the Talmud, it substitutes the economic value of the eye for the physical eye. So if by accident you poke out somebody's eye, a Jewish court would not poke your eye out. They would force you to pay damages equal to the loss of an eye. And a fairly sophisticated way of measuring that is elaborated in the Talmud. But I just want to mention that--that 'an eye for an eye, a tooth for a tooth' was not implemented in Jewish law by the literal description of the text. But I want to go back to the builder-- Guest: Yeah, sorry, one thing I want to mention: I said Isocrates and Hillel were about the same time--it was not the same time; there were still several centuries separating Isocrates and Hillel. I just realized. Russ: They probably didn't know a lot about each other's work. Hillel came after Isocrates. He may have known. I don't know. Guest: There was a strand, what people call the 'Great Transformation,' when the thing started spreading. And then finally, of course, when you look at Kant, you have the culmination of this idea. Kant's formulation, of course, is: act only in accordance with that maxim which you could at the same time will to become a universal law. Russ: Yeah: What would the world be like if everybody behaved as you did? Guest: Exactly. A universal law. So it's a generalization of all these rules, culminating in Kant; and we're still stuck in Kant. Russ: And it's the categorical imperative. I see that, by the way, as a way to get people to morally overcome the free-riding problem--the idea that you can get away with something. Which you often can.
But what Kant is telling you is that even though it might be rational in some narrow sense, even though it might be in your self-interest, it's immoral to act that way. And Kant is reminding you of that. And just as a side note, Adam Smith's concept of the impartial spectator is another way to think about some of these moral rules. Smith wanted you to think about your behavior as if someone were watching you. He said that's the way we actually do behave. So it's a positive theory; it's not so much a normative theory. It has normative elements. He's trying to describe the way the world works, and he says we act as if someone were watching us. We step outside ourselves and put ourselves in the shoes of an impartial spectator. And there's an interesting question about whether that influenced Kant or not. They were contemporaries, unlike Isocrates and Hillel. But I'll put a link up to a paper on that as well. I want to go back to the builder, though. So, the builder: Explain why the incentives for the builder are so powerful and how they relate to what we are trying to avoid. Basically, you are talking about opacity--that is, nontransparency--and information asymmetry. These are two ideas I think are important to generalize. Guest: There's a third one involved here, which is what is behind our paper. Because our paper doesn't bring anything new except for this concept behind it. And that's as follows. I mean, I've spent all my life dealing with tail events. And there's a very simple property to probability distributions that generate tail events, particularly ones that are asymmetric. You know: you make a penny, you make a penny, you make a penny, and then you lose dollars. And the property is very, very simple. As I mention in the mathematical parts of the paper: If you have a process such as a Pareto 80:20--you know that 80-20 that people talk about? Okay. Russ: No. What do you mean?
Guest: 80-20 means 80% of the companies reap 20% of the profits, and 20% of the companies reap 80% of the profits. Russ: Or 80% of your inventory accounts for 20% of your--20% of your inventory accounts for 80% of your sales. Okay. Guest: Exactly. So you look at something where 20% of the days[?] reap 80% of the returns. And stuff like that. Okay. Which is what we effectively observe in many domains. Well, the point is that in any given year, you will have 90% of the observations above the true mean. Russ: Because it's asymmetric. Guest: When it's asymmetric, you don't observe the mean. You typically observe profits--profits, profits, profits, profits. And in fact the mean can be negative. Take the banks, for example. The banks, you know--someone boasting that they didn't have a single losing day the whole quarter--it's totally irrelevant. Banks lost $4.7 trillion during the Crisis. So, if you make small gains and large losses, you cannot observe the statistical properties from a small sample. You have small-sample effects. What we are talking about here relates to Antifragile. Okay? This ethical rule is about antifragility. It's behind what I call the fragility of society: someone who gains antifragility at the expense of someone else. Someone has an option against someone else. But the ideas that I have been working on now are the ideas of fat tails. Trying to mathematize them. And I have shown the following things. The most misunderstood thing in society is the law of large numbers. You take, you know, observations of many things and try to figure out the mean just from observing what's going on. In reality, it's not that way. Most distributions obey a slow law of large numbers, if you want. In other words, the mean doesn't reveal itself very easily. So people put themselves in a situation where the frequency of profits is very high. So, I manage money for someone, say, a fund or whatever it is.
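Taleb's point--that a strategy with frequent small gains and rare large losses can have a negative true mean that a small sample almost never reveals--can be illustrated with a quick simulation. This is a sketch, not from the paper; the payoff numbers (a daily gain of 1 with probability 0.9996, a rare loss of 4,000) are illustrative assumptions chosen so that roughly 9 years in 10 look profitable.

```python
import random

random.seed(42)

def yearly_pnl(days=250, p_loss=0.0004, gain=1.0, loss=-4000.0):
    """One trading year of a strategy that earns a little almost every
    day but occasionally takes a huge loss (illustrative parameters)."""
    return sum(gain if random.random() > p_loss else loss
               for _ in range(days))

# True daily expectation: 0.9996 * 1.0 + 0.0004 * (-4000) = -0.6004
true_daily_mean = (1 - 0.0004) * 1.0 + 0.0004 * (-4000.0)

years = 10_000
profitable_years = sum(yearly_pnl() > 0 for _ in range(years))

print(true_daily_mean)             # negative: the strategy loses on average
print(profitable_years / years)    # yet the large majority of years end profitable
```

Most individual years show nothing but profits, so the sample mean sits well above the (negative) true mean--the "slow law of large numbers" in miniature.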
So I don't have skin in the game. My optimal strategy is to shoot for small, steady gains and rare losses. Russ: This is picking up nickels in front of the bulldozer. Guest: Exactly. Russ: So, most of the time, the bulldozer is moving slowly; there's plenty of time to pick up the nickels; you look like you are really smart because you've got all these nickels. And then one day you get run over. You say: That was a bad day. It was just bad luck. As you point out in the paper, you say things like: Based on the information, I did the right thing. Guest: Exactly. And the [?], the law of large numbers, is quite shocking. Because if you look at fat tails--and that's what, as I said, my new work is about--in a fat-tailed series, eventually the maximum you will have will be equal to the total. Russ: Explain. Guest: In other words, let me explain: If you have an option trader, someone involved in the fat-tailed game, say you trade for 10,000 days, your maximum, your extremum, will correspond to the total. So in other words everything will be dominated by a single observation. Russ: Right. So basically in that case, the case we are talking about of the nickels in front of the bulldozer, the outcome is you are wiped out. Guest: The outcome is the bulldozer, not the nickels. Russ: Right. The nickels are irrelevant. Guest: In other words you don't observe--in the process we observe commonly, as I said, 90% of the observations will be above the mean, which means that 9 years out of 10 will not reveal the mean. So you have at least 9 years of profits ahead of you. So what happens is, you catch a bonus in year 1, a bonus in year 2, a bonus in year 3, ... a bonus in year 9, and then in year 10 you lose all the money-- Russ: You're a genius, too. Guest: And you are a genius, of course. And as I said, if you make money 4 years in a row, in New York people start saying hello to you. They eventually laugh at your jokes.
So you make money 9 years, and then in the 10th year everything is gone, and you claim: well, there was this adverse event, everybody lost money, it was a difficult year, whatever it is. Like the banks in 1982: Walter Wriston of Citibank said, oh, it's only one bad quarter, what is everyone talking about? Yeah--the one bad quarter that wiped out everything Citibank had ever made.
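Taleb's claim that in a fat-tailed series a single observation comes to dominate the total can also be checked numerically. The sketch below compares a heavy-tailed Pareto sample (tail index near 1, an illustrative assumption) with a thin-tailed exponential benchmark, using the ratio of the maximum to the sum as the diagnostic.

```python
import random

random.seed(7)

def pareto(alpha):
    """Draw from a Pareto distribution with minimum 1 and tail index alpha,
    via inverse-transform sampling (1 - U avoids a zero base)."""
    return (1.0 - random.random()) ** (-1.0 / alpha)

def max_over_sum(draws):
    """Share of the total contributed by the single largest observation."""
    return max(draws) / sum(draws)

n = 10_000
fat = [pareto(1.1) for _ in range(n)]               # fat-tailed: alpha near 1
thin = [random.expovariate(1.0) for _ in range(n)]  # thin-tailed benchmark

print(max_over_sum(fat))   # one draw carries a large share of the whole total
print(max_over_sum(thin))  # the largest of 10,000 draws is negligible
```

For the thin-tailed sample the biggest observation is a rounding error on the sum; for the fat-tailed one, the extremum is a sizable fraction of everything accumulated over 10,000 "days"--which is why 9 profitable years tell you almost nothing.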
15:32 Russ: Nassim, I thought of you last night. We had a friend over for dinner and she was talking about how scary the roller coaster is at the Santa Cruz boardwalk. It's very rickety; and it was built, I think, in 1924. This example has nothing to do with the actual Santa Cruz roller coaster, because for all I know it's beautifully maintained and extremely safe. But when she pointed out that there hadn't been an accident in maybe the whole life of the roller coaster, I naturally thought of you. Because, you know, you could have 365 days a year for 90 years with no accidents, and it looks obviously very safe; and then one day, it's possible for 150 people to be killed; and you say, well, look how good my safety record is; it's 99.9 out of 100. But that one day is devastating. Guest: That's true. But that's mild, because it's not as big a problem for the rest of society. Russ: That's because you don't kill all the people who rode it on the other days. That would be the analogy. Guest: Exactly. The analogy is it killed everyone who ever rode on it. That, essentially, would be a fat-tailed process. So, what is happening in today's modern society is you have invisibility of the risks. Someone is shooting, of course, for high-probability, low-impact benefits and low-probability, high-impact losses. If he doesn't bear the losses, that's the optimal strategy. Plus, think about the metaphor given earlier: if you're given money to manage and you are going to make money 9 years in a row--if you make money for 2 or 3 years you are going to get a lot more capital. So you are going to get bigger and bigger and bigger until you blow up. So when you blow up, you'll be at your maximum. Russ: But that's when you have no skin in the game. Guest: Exactly. Russ: Your point is that if all of your money is in that fund, that picking-up-nickels strategy is not going to be your natural incentive. Guest: Exactly.
Your incentive will be very different, and you won't really care about perception. You care about your own money. So what happens is you won't hear people saying, look, he is losing money, or he is making money. You don't really care; it's your money in it. So reputation doesn't matter. Reputations are very easy to game; you just shoot for high-probability, low-impact events. So, what you require if you invest with someone is: if you lose money, I want you to be harmed, a lot more than me. And by harm I don't mean harm to the reputation; I mean financial harm. That would be enough of a deterrent.
18:18 Russ: I'm going to raise a problem now with that claim. And it goes back to the builder claim as well. You can have too much skin in the game. And I don't think you deal with this in the paper. It seems like an issue you should deal with. Let's start with the builder case because it's so easy to understand. If I'm going to execute a builder every time he builds a house that collapses and kills someone, there are going to be a lot of costs to being a builder. And it's possible--and it is--that sometimes houses collapse that aren't the fault of the builder. There's just bad luck. There are random events. And there's a continuum on which we would judge irresponsibility. There's a continuum on which we would judge safety. So, if we say to the builder: You have to build the house safe enough so that if someone dies you'll be killed, you have to build a very, very safe house--it could be that that house is so expensive I would really rather take a little more risk than that. Or if I could [?] a different standard of the value of a life. And similarly, if you tell me-- Guest: There's got to be some equilibrium. People will ask the builder to cut corners, otherwise they can't afford the house. Russ: Correct. Guest: And they end up eventually with the right kind, the right balance. Russ: So if I say to you-- Guest: But we're not in Hammurabi's days. So as I said at the very beginning, 'skin in the game' doesn't mean matching the exposure of others. In other words, you don't need to kill the owner of the airplane company every time there is a plane crash. But on the other hand there has to be a painful disincentive beyond the cosmetic. This is what I was talking about. There has to be some--I mean, look at capitalism.
Capitalism has a built-in asymmetry, in the sense that bankruptcy is bounded at zero; there's no negative for a company. But you can still have skin in the game by forcing people to lose a little more money. It doesn't have to be unlimited. So, you have unlimited profits and limited losses, but still maintain skin in the game. And I think we are reaching that equilibrium in economic life--outside, of course, of government intervention, banking, and bailouts. You have that equilibrium. In other words, the builder isn't put to death. There are financial penalties. When you go to a doctor, if the doctor amputates the wrong leg, you don't take the doctor and amputate his leg in return. But there is a penalty, you see. So, we're not worried about places in which this equilibrium has been discovered heuristically, bottom up. I am worried about modernity. I am worried about bureaucrats causing hyperinflation, affecting savers and citizens but not being harmed at all themselves. I'm worried about that kind of stuff. We're not worried about contracts between individuals that can find equilibrium in some way or another.
21:17 Russ: So we've been mixing together a couple of things here and I want to pull them apart a little bit. Skin in the game has positive effects, in the technical sense of economics: it creates natural incentives for the people making decisions on behalf of others. What you are pushing in this paper--you are pushing a lot of things--but one of the things you are pushing is that it's a moral concept: that it is immoral to make decisions without skin in the game; that I should insist on having skin in the game. So there are sort of three levels here. One is: Regulation should ideally put skin in the game rather than take it away. And it often takes it away. Guest: No, no, let me say one thing about regulation, and we will hopefully talk about it. Skin in the game, to me, is a heuristic. That heuristic can be enforced between two individuals engaging in a contract. Regulation requires a state. This is the main difference here: we are bypassing the state, in the contract. I can eat your food if you taste it in front of me. That is a skin-in-the-game heuristic between two individuals, enforced without having recourse to the state. Russ: But you are saying something more than that. You are saying that if I'm the cook, even if my customers don't insist that I eat my own food, I should eat my own food. Guest: Exactly. So here we have it. The first level is the contract between individuals--forget regulation and the state. The second level is a moral concept: I should not make a forecast unless I am harmed by it. Like, people ask me on television or something-- Russ: If it's incorrect. Guest: If it's incorrect I should be harmed by it. So, if I make a forecast, if someone asks me for my opinion, it is immoral for me to say, well, the market is going up or the market is going down or this will happen, unless I stand to lose on that advice. Because people take risks based on other people's advice.
This is where it's immoral. This is why skin in the game generalizes to daily life. I cannot tell you, well, this is good, unless I've tasted it. If there is risk. If there is no harm, then who cares? Russ: We all understand it's immoral to review a book you haven't read. Guest: Completely. Russ: And so to make a prediction about an event that you have no stake in has a similar immorality to it. Guest: It depends on how much harm you can cause. I am talking about the risk domain. I am confining it to moral philosophy in the negative-tail domain. Russ: Yes. Guest: So if I lied to you and told you I predict tomorrow is going to be sunny, just because it may uplift your mood or something like that, we are not in a Kantian domain. It's okay: I'm lying to you, and I'm not going to be here to enjoy the sun or something, but I am making a calculated statement that will make you feel better. There's no harm. If there's no harm, that's not my space. Russ: Yeah, I understand. Guest: So we have two levels: the first is the contract with the individual; the second is the contract with yourself, or with humanity, which is the moral domain. Russ: So, it seems to me skin in the game emerges naturally in the dealings between two people. And it should emerge, you are suggesting, morally: if it doesn't emerge through the choices of the acting individuals, we should be eager to impose it on ourselves as a moral heuristic, a moral code. But it seems to me that one of the challenges of politics, political life, public policy, is that what government's really good at is getting rid of skin in the game. That's kind of its specialty. Guest: Not all governments. Let's look at it. What you and I tend to call government, because we have the same political colors[?], is the centralized government. And government can be a local, neighborhood union. Russ: True.
Guest: And then let's figure it out from the history of countries that have been very successful, like Switzerland or Sweden, places like that. The people making the decisions are usually embedded in a community. And their skin in the game is typically shame, because they are socialized by the community. Their skin in the game is shame. A government official in Washington can make a mistake, and it's a spreadsheet looking at him. It's not someone in church on Sunday looking at him and making him feel shame. And that's where the main difference is. Russ: You could argue, though, that there's an offsetting effect at the national level, which is history. Ben Bernanke--it's true that I don't spend a lot of time with him in church, or synagogue. But his name will go down in history, to some extent, for his decisions. Now, the real problem for me is that, as you point out, a lot of times there's opacity. It's opaque. People tell me, well, he did such a great job. You can't even evaluate it; we don't have the data to evaluate whether he's done a good job. So the shame part of a blemished historical legacy is very limited in economic policy actions. Guest: Exactly. Well, it's very simple: the law of large numbers works much faster at the micro level than at the macro level. At the macro level you need thousands of years to figure out whether, for a repeated event, someone made the right decision. At the micro level, you know, we have thousands of experiments every day. So it's very different. And this is probably the big distinction: [?] is a micro concept, whereas Washington is a macro concept. So my whole idea in Antifragile was to build a system where mistakes can occur but do not threaten the system. Russ: So let me go back to the point about government. It's true, at the local level, there are some natural incentives.
But at the national level, say in the United States, a lot of what government does is remove skin in the game--bailouts, insurance policies, do-overs, ad hoc interventions. Is there any hope that government policy might take your idea and think of it as a way to guide regulation? Guest: I think that the only hope is to rely less on public policy. Because [?] systems, when they are localized, tend to enforce skin in the game very naturally. When systems are small, you can identify cause and effect very easily, and then you can force the cook to eat his own cooking. And you see it in small communities. Like the army, for example. In almost every country I looked at, people who repair helicopters are sort of forced to take rides on them. And almost everywhere in parachute jumping, folding has that sort of skin in the game involved: if you fold someone's parachute, you may be asked to jump using it. So there's a natural tendency for systems, when they are small, to produce a skin-in-the-game rule and function that way. But it looks like when things get large there is a size effect. Large is ugly. Large doesn't have skin in the game; large has these warped incentives. And large, of course, likes regulation rather than simple heuristics. As a libertarian, I like regulation as the last resort, not the first. It doesn't mean no government at all; it means whatever is minimal. So when I think of what would be left to government, these things are related. Of course, law enforcement. And things where we have either market failures or a systemic inability to protect individuals against a larger threat. And there, maybe, you need regulation. Like, for example, I'm very concerned about genetically modified foods, because the person producing them may threaten my backyard. So I want protection. And I can't really protect my backyard, because I can't fence it against the strangeness[?]
in ecology that can happen to it. So maybe regulation is something to consider in these exceptional cases. But it's exactly like medicine: emergency-room medicine is completely different from cosmetic surgery. Russ: Agreed. But my point is only that if government stopped bailing out banks, there would be natural skin in the game. And we seem to find it difficult to avoid--there's no 'we' here--the politicians find it difficult; the idea of it doesn't bother me, but somehow politicians can't seem to bear to let these people have skin in the game. Guest: Yeah, that's true. This is where we have problems: metastatic government. The larger and more convoluted government is, the less skin in the game; and the more you have people with an interest in the game--people who have everything to gain from causing you harm, namely the lobbyists.
31:12 Russ: I want to go back to the philosophy, to the moral principles you talked about. You talked a little bit about the role of asymmetry, the negative asymmetry--because I want to talk about positive asymmetry, as you do at the end. So talk about what skin in the game has in common with an eye for an eye, with the Silver Rule, the Golden Rule, etc. Guest: What it has in common is establishing symmetry. The way I approached it in Antifragile--and people are starting to get there, 9 months after its publication, because it's [?]--where I start talking about skin in the game, is in terms of fragility. If I have more upside than downside--namely a positive asymmetry--I tend to benefit in the long run from random events. I am what's called 'antifragile.' So I benefit more than I lose. Now, if I have more downside than upside, then I am fragile. And one of the wonders is that this asymmetry is what everything that's fragile on earth shares. It also shows up as disliking random events, disliking volatility, disliking stressors--the same equation explains them all. And it's the equation we use in option trading when you short volatility. So basically people are either short an option or long an option. The problem is the class of people who are long an option, namely bankers: with volatility they either break even or make money; they are not harmed. And society is short an option: basically, if the bankers make money, we don't get anything; if they lose money, we end up paying for it via taxes, more regulation, the burden of having people understand 2,500 pages of code, stuff like that. So this is where the asymmetry is: very, very similar to being short an option.
And of course people don't understand options. At the core there is this misunderstanding--because economics is still in 20th-century, not even 20th-century, 19th-century statistics, where people don't really understand probability distributions. They think you can figure out performance from a short period of time and that distributions are sort of symmetric, that kind of stuff. So there's a mismatch between compensation periods--whatever you want to call compensation--and the time it takes, because of the law of large numbers, for the variable to show its properties. Russ: Yes? Guest: So this is a big problem: you have an asymmetry and it's undetected. The undetected part is what we are insisting[?] on. When we worked on the paper we realized that while economists understand the agency problem, and they understand moral hazard, they don't understand moral hazard with respect to fat tails--because they never really gave much thought to fat tails, other than thinking: well, it's a peso problem, and we can't talk about it because we don't understand it; thanks, bye. Instead of saying: this thing they call the outlier is a central part of life, so let's deal with it as a component of life; because if we don't deal with it, it's so large in effect that it's going to end up eating us. So this is sort of the background. So, as I say, the notion of optionality is quite general. You can look at any individual and say: this guy is long optionality at the expense of others, or this guy is short optionality, being used by others. Russ: Explain that last case. That would be the taxpayer. Guest: The taxpayer, or sometimes the investor--the investor in a company. A lot of the returns are going to go to the industry, to Wall Street. Because if the fund manager makes money, it's his money; and if he loses money, you pay for the losses. So you get a smaller share of profits than you do of losses.
In the long run you are going to lose money. And this is an effect people don't realize about hedge funds: they think hedge funds are the best thing since sliced bread. Some are very good. But in general, hedge funds have a high level of fees and a lot of optionality going to the hedge fund manager--though this has been corrected somewhat, thankfully, by people forcing hedge fund managers to have a lot of their own money in the funds.
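The fee asymmetry described here can be put in numbers. Below is a minimal sketch with toy figures of my own (a hypothetical "2-and-20" fee schedule; none of these numbers come from the episode), showing a fund whose gross returns average zero but whose investors still lose:

```python
import random

def investor_outcome(gross_return, mgmt_fee=0.02, perf_fee=0.20):
    """Investor's net return under a hypothetical 2-and-20 fee schedule.

    The manager takes a share of gains (the performance fee) but bears
    none of the losses -- the payoff asymmetry Taleb describes.
    """
    net = gross_return - mgmt_fee
    if net > 0:
        net -= perf_fee * net  # manager's cut applies only to profits
    return net

random.seed(42)
# A fund whose gross returns average zero: small gains most years,
# an occasional large loss (a fat-ish left tail).
returns = [0.05 if random.random() < 0.9 else -0.45 for _ in range(10_000)]
avg_gross = sum(returns) / len(returns)
avg_net = sum(investor_outcome(r) for r in returns) / len(returns)
print(f"average gross return: {avg_gross:+.4f}")
print(f"average net to investor: {avg_net:+.4f}")
```

Even though the fund breaks even before fees, the investor's average return comes out negative: the performance fee is effectively an option the investor has written to the manager.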
36:10Russ: But you also talk about positive asymmetry, where people assume risks for others--altruistically, to help them, to help society. Not so much in the financial sector but elsewhere. Right? Guest: Anyway, in a lot of places people voluntarily take risks for the sake of others. And this is what economists are ill-equipped to understand. And honor. When you take the notion of honor: honor belongs to someone who has courage--a person, he or she, with a lot of courage--and who uses it to save others or help others. So people realize, when you see Solon's statement that you are only happy if you have a glorious death--that's the mentality. In the ancient world, you are only as good as the risks you are taking for the sake of others--the city, or even yourself. In other words, you are not transferring risks to others. You are the one who is taking the risks. And of course the two most famous figures of today are people who died for their ideas. One died not just fighting for others but for his ideas: Socrates. And a few hundred miles to the southeast, another fellow who was crucified. When you think about it, the prestige people have gotten has almost always been proportional to the risks they take for the sake of others. So, heroes in war and stuff like that. That's the notion of hero. And of course the lords in medieval England and medieval Europe were people who were obliged to protect the peasants. They had the high rank, but it came with obligations. You took more risks; you were the first person to die. And of course Hannibal was first in battle. Same with Julius Caesar. Same with all the big warriors. With almost no exception. Now let's talk about our generation. Who do you have? George W. Bush, right? He escaped war, although in his father's generation there were war heroes. Russ: And it was dishonorable to find a way out. It wasn't considered clever or smart. It was dishonorable. Guest: So your rank--
And I studied the Mafia a lot in Antifragile--the same kind of behavior. People take risks. You are not a risk-lover; but if someone has to take the risk, you are the one who takes it. And that's sort of the idea: not to die in a nursing home with tubes coming out of your nose. The idea is to die in battle. That's what you are made to do. And so prominence came with that--that was the lord concept. I think that prevailed in society. But today--you know, the gentleman who was head of the CIA was a big military person; when he was busted--was his name Petraeus?--I looked at his Wikipedia page and there were all these decorations. Hundreds of decorations. I saw that the fellow jumped from helicopters at night, climbed walls, threw grenades through small windows. It turned out the fellow had never been in battle. He had never been in battle. So here we have a generation of people who have never had to take risks for the sake of others. And society cannot function when you have an imbalance between, in the first column, people who make others take risks for them, and, in the right column, people who take risks for the sake of others. It can't function that way. It cannot. You cannot have too many of the traders--and George W. Bushes--who would never take personal risks but engage others in a war. You can't have too many of these. We need the reverse. And we had plenty of these, as I [?] earlier. Russ: I wonder why that changed. Guest: Technology. The problem is technology; modernity is causing disruptions in the entire system. Russ: I don't know. I think part of it is how wealthy we are. Staying alive is definitely what we call in economics a 'normal good,' meaning we want more of it as we get wealthier. So I think we value our lives and our health a little higher than we used to, and so sacrificing them is a lot harder.
Guest: I don't think so--I think this is probably the culture. Because the change is very recent. It took place very abruptly. Russ: Right, but why did that culture change? Guest: [?] look for an explanation. The thing is, you have to look at the world today. It's highly technological. We have dangers, but not the same kind we had before. And modernity put the bureaucrat in the place of the risk-taker[?]. Now, risk-taking isn't just physical. Risk-taking is entrepreneurship. Entrepreneurs really take risks for the sake of others, because their probability of success is much lower than that of, say, a venture capitalist; they take risks for society, and they fail a lot, but collectively we need them because otherwise we can't advance. So instead of having all these people glorified and put on a pedestal, you put on a pedestal Harvard grads. That's not how a society can evolve. Because instead of Harvard grads who-- Russ: Did you say 'Harvard grads'? Is that what you said, Nassim? Guest: I used 'Harvard graduates' as a metaphor. England was built and America was built by adventurers, in the economic sense--what Adam Smith called adventurers. Not by bureaucrats. And afterward the benefits are reaped by the class of bureaucrats who come and try to control the process. Russ: Yeah. Your remark about entrepreneurs reminds me of when Amazon started, and they were losing money every year. And I remember being so grateful to the people who had invested in them, because at the time most people thought they weren't going to make it. But I was building a beautiful library of inexpensive books that arrived at my door very quickly, and I had a wonderful way to shop for them. And they were bearing all the costs and I was getting all the gains. Guest: Look at restaurants. Opening a restaurant in New York City is like suicide. But yet without these people, we wouldn't be where we are.
What I'm saying is that we no longer give the respect due to these people who take risks for the sake of others, whether in the military domain or the economic domain--or in other domains as well. In the political domain: people who have the courage to voice their opinion. But we can protect ourselves, just like the ancients, by finding a very simple heuristic. Like, Ralph Nader had a heuristic for war. He said that if you are going to vote for war, you should have a member of your family--a descendant, a son or grandson--eligible for the draft. Then you can vote for war. And in a way it's liberating. On both sides. When I managed money, of course I lost money very frequently--I lose many battles but not the war. So you lose money every day. A client would call me; you pick up the phone, and you are not uncomfortable, because I lost at least 10 times--up to 50 times--more as a share of my net worth that day than he did from the loss. So it's liberating on both sides. You don't feel guilt if you have skin in the game, if you are sharing the losses. You don't have any guilt. Without that, people start having these conflicts. Russ: Right. So the investor sleeps well at night knowing that the manager is sleeping with the same portfolio as he is. And you don't have the guilt. It's good. Guest: Right, and the manager doesn't have the guilt because he has 50 times the relative exposure. You have half your net worth in your fund--that's the rule of thumb today, half your net worth in a fund--and these people, the clients, are very diversified across funds.
45:03Russ: So, I want to get to a paper you wrote with Philip Tetlock on prediction, because I think it's full of insights that tie back into these ideas we've been talking about. You suggested we talk about this as well. In that paper you draw a distinction between binary events and vanilla events. Explain the difference. Guest: If I tell you the stock market is going up or the stock market is going down tomorrow, I'll be right or wrong. That's a binary event. There will be a war or there won't be a war--that's a binary event. Most predictions are binary, in the sense that a binary can take a 0 or a 1. The event can happen or not happen. In real life we have a third dimension, the magnitude of the event. So, for example, you can predict the market is going to go down, but it can go down 20% or 0.1%. And the problem is that people who forecast are judged on how frequently they are right, when in fact how frequently they are right doesn't matter. It's the magnitude when you are right or wrong that matters. So that third dimension makes a huge difference, and of course there are a lot of pathologies. In Fooled by Randomness, I identify the problem, because one day I was asked by someone: Hey, what do you think, is the market going up or down? I said: Of course it's going to go up, with high probability. And then the person saw my trading account and realized that I was exposed to the market going down big time. Russ: You were betting on the market--when you say exposed-- Guest: Yeah, my exposure--what I call exposure; bets aren't binary, though people may mistake them for binary propositions. My exposure, in other words: I had options that paid off big if the market went down. So he said: What's going on here? You told me the market is going up. I said: I think the probability of the market going up is very high; but should it go down, it could go down a lot. The real world is about expectation, not probability.
In other words, probability times how much it goes down. So the rational thing is to be short the market, even though it's more likely to go up: if it goes up, it will go up a little; if it goes down, it will go down big. He was involved in trading, and he said: Oh, you are right, but I didn't think about it this way. I said: Well, you'd better think about it this way, because that's the way the world works. So when, for example, we talk about the First World War, some people predicted that the war would happen. But they imagined it would last 2 weeks. Russ: The First World War. WWI. Guest: Exactly. So in the fat-tailed domain an event is not defined. You can't just say up or down. You have to say how much up or how much down. Russ: It's very deep. I had the same thought about the Civil War. The Civil War, I think, people thought was going to last 2 weeks. They knew a war was probably going to happen. If they had a little bit of imagination and thought about the possibility that it would last 5 years and kill millions of people, they might have had some second thoughts. Guest: Exactly. So what happens is a war can kill 5 people or kill 5 billion people. So, in fat-tailed domains, the probability of the event matters much less than the payoff, if it matters at all. Which is why, when we talk about prediction markets, people think that prediction markets hedge you. They don't hedge you. A prediction market is binary; an exposure is open-ended. Russ: And life is open-ended. Guest: So natural variables are what I call 'vanilla.' I wrote a book on the mathematical difference 20 years ago, published 18 years ago, called Dynamic Hedging: Managing Vanilla and Exotic Options. I used the word 'vanilla' at the time because I was a trader. That was my language. I was not a scientist yet--or was not involved in active science. So, when I wrote that paper with Phil--Phil is involved with prediction--I said: a guy can have a perfect track record and it doesn't mean anything, if the payoff is small.
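Taleb's point that "the real world is about expectation, not probability" reduces to one line of arithmetic. A worked example with illustrative numbers of my own (not figures from the episode):

```python
# Illustrative (made-up) distribution for the market over some period:
# it usually drifts up a little, but occasionally drops a lot.
p_up, move_up = 0.90, +0.01      # 90% chance of a 1% gain
p_down, move_down = 0.10, -0.20  # 10% chance of a 20% drop

expected_value = p_up * move_up + p_down * move_down
print(f"P(up) = {p_up:.0%}")                       # the binary call is 'up'
print(f"expected return = {expected_value:+.3f}")  # but the expectation is negative
```

So the correct binary forecast ("up") and the correct exposure (short) point in opposite directions, which is exactly the trap Taleb describes.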
In a binary space, things are very different from the way they are in the natural or 'vanilla' space. So we called that natural variable 'vanilla,' of course. And it did effectively change things--this paper was great because we sent it to an agency that's involved in forecasting and immediately they realized: Hey, forecasting is not done properly. We should not ask people whether this event will happen. We should have buckets: What's the probability of having an earthquake of 3 on the Richter scale? Or 4? Or 1? You should have different buckets for different sizes of exposure, or sizes of the event. Russ: Yeah, it's like you say. If the odds of an earthquake tomorrow--right now I'm in the Bay Area, on the campus of Stanford--the odds of an earthquake tomorrow are close to 0; it's very unlikely. So let's make a prediction: I'm going to predict no earthquake. But if there is an earthquake, what really matters is whether it's a little tremor or whether it's giant. That's what matters. Guest: Exactly. So when you look at economic variables that are fat-tailed, the probability doesn't matter much--it's the event associated with the probability that matters more. You can have a very small probability of a large event. Now, there are some pathologies. People didn't understand this when they created prediction markets. I was opposed to the idea of prediction markets then; people didn't understand me. So I went to Phil and I said: Listen, let's write this paper so we have grounds to discuss prediction and to separate predictions into binary and vanilla--or binary and natural, or ecological, as I also call it. So it comes back to skin in the game: if you have skin in the game, it's never binary. Russ: Because what matters is the amount. Guest: Yes. If you don't have skin in the game, you are going to game the reputation system, and the reputation system is based on binaries.
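The way a binary reputation system can be gamed can be shown with a toy scoring exercise (all names and payoffs invented, not from the paper): a forecaster judged purely on binary hit rate can look excellent while producing the worst possible exposure.

```python
# Toy illustration: 'A' maximizes binary accuracy by always calling
# "no event"; 'B' always hedges the rare event.
outcomes = [0] * 99 + [1]  # the rare event occurs once in 100 periods

def hit_rate(calls):
    """Fraction of binary calls that were 'right'."""
    return sum(c == o for c, o in zip(calls, outcomes)) / len(outcomes)

def pnl(calls):
    """Open-ended payoff: hedging costs 1 per period; an unhedged event costs 500."""
    return sum(-1 if c else (-500 if o else 0) for c, o in zip(calls, outcomes))

a_calls = [0] * 100  # never predicts (or hedges) the event
b_calls = [1] * 100  # always predicts (and hedges) the event
print(hit_rate(a_calls), pnl(a_calls))  # 0.99 accuracy, -500 payoff
print(hit_rate(b_calls), pnl(b_calls))  # 0.01 accuracy, -100 payoff
```

Ranked by hit rate, A looks like the better forecaster; ranked by payoff, which is what skin in the game forces you to care about, B is.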
The perception of others is more binary. So this is where we connect the two papers. Russ: I guess you could turn skin in the game into something binary with execution, in some sense, right? The reason it doesn't work very well without skin in the game is if you say: if you lose money, I'm going to slap you on the wrist; and if you make money, I'm going to give you an ice cream cone. That's not a very good system because it-- Guest: No, exactly. Because then you make 5000 ice cream cones; if you buy an option that has a very, very small probability of being in the[?] money, then you get slapped on the wrist once. It reminds me of your essay where, if the event happens, it kills all the people who took that roller coaster. So this is where we have to be very careful when we deal with fat tails. In a fat-tailed domain, as you go out[?] on the tails of the distribution, the probability of a given event drops, but the magnitude increases. So I was very, very annoyed at the interpretation of black swans: people thought that I was saying that black swans are more frequent. No. They are not more frequent; they are less frequent. But when they happen, they are deeper. So they have a lower probability than you think, but a deeper effect. Which is what makes them vicious. Russ: A dangerous, evil, crazy man with a pistol can do some damage. A dangerous, evil, crazy man with a nuclear bomb is not just dangerous. Guest: Exactly. Horrible. And the problem with the nuclear bomb story is that you don't have enough of a record to figure out the probability of having such a fellow. And you never have an idea of the danger until it's too late. That's the problem.
53:25Guest: Let me tell you why it's important to talk about binary in the context of skin in the game. Because people who talk without harm--who talk about events with impunity, write papers and so on, people who are not active--don't really understand the real texture of reality. When you have skin in the game you are, in a way, much more scientific about things. You are much more rigorous about things, much more rigorous about risk. And there is a class of people trying to promote the idea that small probabilities are over-estimated[?] by the system. And they find experiments that are in fact binary by nature, or where the variable is bounded, and say: people tend to overpay for lottery tickets, overpay in artificial setups. Russ: They are overly cautious about events that are unlikely. Guest: People are cautious--but that pathology doesn't extend to options, financial options, for example. People don't understand the following: if you have an option and there is a stock market crash, like the one we had in 1987, the option can cover 30, 40, 50, 100 years of P&L--profit and loss--in some cases 300 years of P&L. Because it costs a very small, tiny premium. So, true, we may not have an idea of that probability. But we definitely shouldn't make statements about things that have a very, very strong payoff like that. And people naturally, in a natural setting, tend to be careful about it. We overpay for insurance. We overpay for lottery tickets. So there's something called the 'long shot bias.' And people think that it's a pathology. Whereas I think it's only a pathology in things that are modern. That pathology doesn't extend to vanilla variables. Russ: Let me see if I understand what you are saying. I think you are saying that in a lot of laboratory experiments that psychologists and some economists do, people overestimate the probability of very unlikely events. Guest: Exactly. Russ: They so-called overreact.
So they worry a lot about a plane crashing; in fact, a plane is very safe--compare the odds of a crash to a car's. But they overreact. But what you are really saying is they don't overreact, because the costs of a plane crash are very bad. Guest: No, no. For a plane crash maybe you do overreact, but it's not a big deal. The point is that you cannot generalize from an experiment that is not natural to natural settings, and you've identified exactly the mistake people make: you are making a statement derived from a variable that has a bounded payoff and generalizing to things that have open-ended payoffs. Russ: Yeah. My example of the airplane is a bad example. Guest: People overpay for lottery tickets, for example. Which is true: they overpay for lottery tickets. Russ: Meaning the expected value is negative. Guest: Hence, the argument goes, people overpay for financial options; hence let's sell remote probabilities in finance. Well, anybody who has a brain would realize that banks were engaged in the business of selling small probabilities in finance--and they lost $5 trillion in 2008. More money than they ever made in the history of banking. So therefore that statement, that people overpay for protection in finance, is false. Russ: When you said the number $5 trillion, for a minute I just thought you meant a lot of money, like a zillion. But it actually is close to $5 trillion, right? Guest: Yeah. That's what they lost before the government bailed them out. And $5 trillion is a lot of money. So what I'm saying is: instead of doing a lot of experiments, look at the variables themselves. Insurance companies hadn't really made money till recently. All it takes sometimes is one event. And insurance companies are involved in very complex fat-tailed things, typically--except for reinsurance. So look at the banks: bets on small-probability events by financial firms have proven disastrous in history. And wealth came from bets on open-ended remote-probability events, namely entrepreneurship.
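The 1987-style arithmetic--a tiny premium collected for years against a rare, huge payout--can be sketched with invented numbers. Nothing here reproduces actual option prices; it only shows the shape of the seller's P&L:

```python
import random

random.seed(1)
# Invented figures: selling a far out-of-the-money option collects a small
# premium almost every year; in a rare crash year it pays out 100 years' worth.
premium, crash_loss, p_crash = 1.0, 100.0, 0.02

def seller_pnl(years):
    """Cumulative P&L of the option seller over a span of years."""
    total = 0.0
    for _ in range(years):
        total += premium               # small, steady income
        if random.random() < p_crash:  # rare crash
            total -= crash_loss        # one event erases ~100 premiums
    return total

annual_ev = premium - p_crash * crash_loss
print(f"expected P&L per year: {annual_ev:+.1f}")
print(f"simulated P&L over 1000 years: {seller_pnl(1000):+.1f}")
```

Under these toy numbers the seller profits in roughly 98% of years yet has a negative expected P&L, so the "long shot bias" inference runs exactly backwards for open-ended variables.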
58:09Russ: It's beautiful. I want to close with one last topic, something I thought about when I was reading your paper: parenting. It seems to me that when our children are young, we don't want them to have skin in the game. Literally. We don't let them get near the stove--that's hot--because they'll burn their hand. And as they get older, good parenting, it seems to me--which is hard to do--means letting our children have their own skin in the game rather than the skin of the parent. Do you think that's right? Guest: I think that's right. I think traditional parenting has some merits, in the sense that you protect--there is an expression in Lebanon that the first 7 years you play with them; the second 7 years you let them get in trouble; and the third 7 years you advise[?] them on how they got in trouble. Russ: What's the last one? Guest: Advise them on how they got in trouble. Russ: Oh, you explain it to them. Guest: Exactly. And that makes 21 years. That's a Lebanese expression. The first 7 years you protect them, because they are fragile. In the second 7 years they are antifragile; they need to get in trouble, because they never learn unless they have skin in the game. Now, before finishing, I'd like to talk about something--we talk about skin in the game, which is quite relevant here--and that is academia. Because we are both academics. Russ: Sort of. Guest: We're academics. The only way you can reform economics is by installing some kind of skin-in-the-game mechanism. Because it looks like the system on its own now allows them to be wrong with total immunity. Russ: Agreed. But doesn't that come back to the problem we talked about in the first few minutes about the Chair of the Fed? You're an economist; one economist says, in 2008, we need the government to spend $2 trillion, and it doesn't matter what we spend it on; the other group says, no, we should spend $0.
Then we spent about $1 trillion--roughly $800 billion--and things didn't come out so well. But it's possible that's not proof that the people who said to spend a lot of money were wrong. There could be a thousand reasons. I think we just have to accept the fact that economists can't have skin in the game, and therefore we should discount what they say. Guest: Exactly. The point is, we need to lower our dependence on people who don't have skin in the game. Russ: Yeah, or ignore them. Guest: But you cannot just ignore them. You have to build a system. Because people can take over through prestige; a lot of pathologies can take hold that way. The best way to do it is to build a society in which mistakes made by economists stay on campus. That's the idea. The idea is that if Larry Summers wants to make mistakes--more mistakes--let him make them at Harvard, where we are insulated from them. That's the point of the ivory tower: we are protected from them, not because they have to protect themselves from us. It works both ways, you see. Russ: It's a great slogan: What happens on campus stays on campus. Guest: That's exactly it. Keep the mistakes local, on campus. And that way everybody will be happy.

Comments and Sharing




COMMENTS (44 to date)
honeyoak writes:

Here is a SSRN link to this Paper:

On the Difference between Binary Prediction and True Exposure with Implications for Forecasting Tournaments and Decision Making Research,


http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2284964

Stuart writes:

Just a comment on the "politicians starting wars with no skin-in-the-game" topic:

I think the idea attributed to Ralph Nader of only allowing congressmen to vote for war if they have family members in the military is extremely unrealistic. But I have read the idea that every war resolution should require an across-the-board increase in taxes, something that would be unpopular with many constituents. In a way it would put the politicians' skin in the game, as well as that of the population as a whole.

SaveyourSelf writes:

After listening to this podcast, I bought Taleb's book "Antifragile". In the book's prologue, he comes across as a raving-mad, antisocial genius. It is a fun read so far, but I can see why he doesn't get invited to Harvard--or any other university--very often. It is interesting that he comes across so differently in this interview with Russ.

I think Stuart's idea [above] about requiring a flat tax increase with every war resolution is a wonderful one--provided the increase is only temporary. But then again, his idea is just a restatement of "Pay-Go," so the same benefits would apply to EVERY piece of legislation with a price tag attached to it.

Gerry writes:

[Comment removed for ad hominem argument and pending confirmation of email address. Email the webmaster@econlib.org to request restoring your comment privileges. A valid email address is required to post comments on EconLog and EconTalk.--Econlib Ed.]

Ramakrishna Shanker writes:

I have read all of Taleb's books and believe I understand his thinking at a deep level.
I have some thoughts on this "skin in the game" business.
1. The older societies--e.g., in Asia--subscribe to this idea. Indeed, "shame" is a major consequence in societies like Japan. This leads to the population at large being extremely conservative.
2. I think the new world--the US--has succeeded in large part because people have much less skin in the game. Bold new innovations by definition have unforeseeable future consequences. You invent the internal combustion engine, or the computer chip--you have no idea what effects it will have on society--but society adopts it anyway. This would not happen if the inventor were mortally scared of his future, or if society were going to impose capital punishment on inventors should unfortunate consequences occur.
3. In short, there is a tradeoff between innovation and Taleb's "SITG".
4. I strongly believe in the overall benefits of innovation. Actually, that is weak--I believe that innovation is the Primary Driver of human progress. And therefore we cannot have people mortally scared to try new things--because, after all, no one can know with any certainty what the consequences will be. I would therefore argue that the primary American virtue is in fact an ability to enable people to take risks (some of which may indeed wind up hurting others).
5. Asian societies--in particular the younger generation--have recognized that their weakness is in fact a fear of failure, leading to a lack of innovation. They are working to correct this!
6. Having said all that, I don't disagree that the current crony capitalism--with a small group of people effectively above the law, or better yet with the ability to modify the laws on the fly to suit themselves--is disastrous.

Steve Sedio writes:

This podcast focuses on making the stick harsh enough to temper the carrot, and I fully agree with this opinion.

The problem, as Russ identified, is that government has excelled at eliminating the stick while making the carrot ever more attractive. The reason politicians give is to "protect the public" (the typical apple-pie-and-motherhood crap from government--after all, who paid the price in un- and underemployment, and who will pay off the debt? The public).

How do we implement these controls with government going the opposite direction?

Michael writes:

I like the concept of "skin in the game" as a moral imperative.

However, there would seem to be some situations where skin in the game is not enough.

Jimmy Cayne (Bear Stearns) and Dick Fuld (Lehman) lost hundreds of millions of dollars... but they are still very rich. However much of their own personal wealth they had tied up in Lehman and Bear, it wasn't enough to stop them from playing their role in the crisis.

Ramakrishna Shanker writes:

I am sure Hammurabi was relevant for 1750 BC--but we have progressed. We now have regulations, laws, professional standards.
Now, Wall Street had a crisis due to poor regulation and lack of enforcement. I am not persuaded we need to regress to 1750 BC in response to that!

Ramakrishna Shanker writes:

As for tail risks: I think modern US capitalism does to some extent remove tail risks from individuals and absorb them in society. I think this is essential for a healthy entrepreneurial environment. After all, we want young people to take risks and innovate, not be hiding under their beds waiting for the black swan to descend! And we have our billionaires (especially in Silicon Valley) willing to invest their billions in crazy ideas and back unknown youngsters. In the old world the billionaires buy a lot of real estate and gold--which I suppose would be the logical thing to do if you wanted protection from tail risks.

Ramakrishna Shanker writes:

Take the average 25-year-old entrepreneur in Silicon Valley who manages to raise a few million dollars in VC. How much skin in the game does this kid have? Very little. How much skin is demanded by the VCs? Very little.
That's why it works!

rhhardin writes:

38:28 As I recall, Bush 43 flew F104s in the Texas Air National Guard, which he joined when their being sent to Viet Nam was an open possibility.

The F104 was an exceptionally tricky airplane to fly as well.

It seems like a needless slander.

Russ Roberts writes:

Michael,

Cayne and Fuld did lose hundreds of millions of dollars but they were also able to extract hundreds of millions. If they had really had their wealth tied up in the stock they would have been wiped out. They were not. They had extracted a great deal of wealth along the way. I write about it here.

Michael Raijin writes:

Perhaps I need to read the paper, but on the basis of this podcast, I don't see much in the way of new ideas. We have moral hazard and other conventional concepts gussied up with statistics and finance jargon.

There are many ways people avoid "skin in the game." The corporation with its limited liability does just that. The limited liability partnership is another example. Originally the partners each had joint and several liability. Then came the limited partnership where one or a few partners assume all the liability. Indeed most investment banks were once partnerships, and then adopted the corporate model to protect the owners from their mistakes.

In the taxi cab industry in New York City, some companies would incorporate each taxi separately, then drain the assets out of the corporation, leaving a mere shell. If the taxi got into an accident, the injured passengers had little recourse. One would think the corporate veil would get pierced, but evidently not, if the corporation keeps good records. The beneficial owners had "little skin in the game."

One question posed during the podcast is thought provoking. Why do we have so much moral hazard in the modern world? At one time captains went down with their ships. For example the captain of the Titanic along with his officers went down with the ship. They literally had skin in the game. Today captains don't do that. Indeed I remember one incident where the captain was the first one off the ship! A special helicopter came and rescued him, and only him first. What happened? I don't really know; perhaps the decline in religion, or a change in the values we give our children.

Again while the program was interesting, and fun to listen to, I can't see much that's new or deep. I will read the paper.

Jim Feehely writes:

Hi Russ,

Thank you for another conversation with Mr Taleb.

In a world that is saturated by propaganda seeking to convince us all that experts are certain about everything, he is very refreshing. The fact is, with the exponential accrual of knowledge over the past 100 years or so, we now know with a high degree of certainty that we actually know very little compared to what we can estimate there is to know; i.e., we should know now, more than ever before, how wildly uncertain the world really is. Yet we are deluded enough to model it and then build enormous financial systems on the basis of what those models tell us.

Taleb's recommendations in 'Antifragile' are, in my humble view, brilliant. The recommendation (building on his theories in Fooled by Randomness and The Black Swan) that prediction is not only almost always wrong but also unnecessary is something that every economist, bureaucrat, and politician needs to learn. We can, however, actually figure out--from what we know with a high degree of certainty about the present, allowing for unknown uncertainty--what is fragile, and we can use what we know to take steps to antifragilize that thing, institution, etc.

As Nassim points out, the best lesson we have is nature. We are surrounded by an antifragile system (i.e. a system that actually benefits in the long run from shocks and stressors), but we take no heed of that. Instead, we seem intent on finding the shock or stressor with which even nature cannot deal. Indeed, Taleb's explanation of nature as the supreme antifragile system is really an auxiliary or alternative explanation of evolution.

Whilst Nassim alluded to the 'big is ugly' problem, that was, alas, not the subject of the interview. But what he says about it is that to antifragilise the system, and preserve the religion of Capitalism, we must disaggregate everything: nations, governments, corporations, social institutions. Small structures cannot destroy society, but big structures can and probably will. It is the most powerful and helpful idea anyone has suggested since the great Enlightenment philosophers. Nature knows its limits. We arrogantly refuse to recognise limits. Witness the reaction to Pindyck's interview.

But, unfortunately, because Nassim is unkind to economists and academics (justifiably, in my view), his ideas get smothered by breathless, nonsensical defences of economic theory from academia. Just search for rebuttals to Taleb to get a sense of the irrational vitriol he attracts from charlatan economic predictors. Bravo to Russ for giving him this forum so often.

Russ, as you know, there is much more to discuss with Nassim and I look forward to the next interview.

Regards,
Jim Feehely

Ken From Ohio writes:

Thanks for another interesting podcast

During the run up to a Syria military strike, I've considered the issue of military skin in the game.

I know that Milton Friedman was a strong advocate of an all-volunteer military. But maybe it's time to rethink this. During the Civil War, a person could buy his way out of the military draft for about $300. Isn't that the same as a "volunteer" military, where the volunteers are being paid and the non-volunteers stay home?

Without a draft or a flat military tax to directly pay for the military intervention, the average American has no skin in the game. Without skin in the game, it's easy to advocate military intervention for just about any reason.

Steve Sc writes:

"rhhardin writes:
38:28 As I recall, Bush 43 flew F104s in the Texas Air National Guard, which he joined when their being sent to Viet Nam was an open possibility.

The F104 was an exceptionally tricky airplane to fly as well.

It seems like a needless slander."


Agreed. Clinton is a much better example.

Russ Roberts writes:

Ken from Ohio,

It's a good point. But the soldiers still have skin in the game as do their families. And the budgetary cost of a volunteer army is much higher than a draft--you have to pay people enough to want to be there. That gives the politicians and taxpayers more skin in the game. So it's a little complicated.

Ramakrishna Shanker writes:

If you agree with me that there is a tradeoff between innovation and antifragility, then consider the following hypothesis:
1. Societies that are optimistic, confident, and have a high degree of trust are able to accept a degree of fragility as necessary to encourage rapid innovation.
2. Societies that are fearful and lack trust do not have the luxury of pursuing innovation, because they fear the costs of fragility more than they value the benefits of innovation.

Jim Gatti writes:

Petraeus has been criticized because he never led troops in combat, but that neglects the fact that such experience is most often gained as a relatively junior officer (below major). During that phase of his career there were no significant deployments of US ground troops.

Greg G writes:

Ramakrishna

I agree with your comments above but it is worth noting that Russ successfully raised this point in the interview. He pointed out that, in the paper in question, Taleb had neglected to acknowledge that there can be too much skin in the game. And he got Taleb to concede the point!

As someone who has read a lot of Taleb's work you must appreciate what a feat this was for Russ. I can't recall another time when anyone else got Taleb to accept any criticism of his work at all (which is one reason he referred to the fact that it is so rare he is interviewed by the same person twice.)

Kudos Russ. This is another example of how your civility and sense of fair play make these interviews work so well.

Jed Trott writes:

Ramakrishna,

Taleb concedes, as he has in the past, that the level of skin in the game in Hammurabi's Code is not necessary. There just needs to be enough skin in the game to incentivize a response. The scenario of the entrepreneur and the VC fits perfectly with Taleb's skin-in-the-game argument because there is symmetry. The entrepreneur and the VC both know the rules of the game and do not have the ability to hurt each other without the other's consent. The problem of skin in the game is most prevalent in cases where one party has the ability to shift their risk onto someone else without that person's consent. This is why the current too-big-to-fail regime is so concerning. The government can bail out the banks with taxpayer money without the consent of the taxpayer. In contrast, the VC goes into his investment with his eyes wide open and can choose to invest or not as he sees fit.

Jed Trott writes:

Russ,

I really enjoyed the roller-coaster analogy as you extended it. Even though roller coasters exist in Mediocristan, I think it has great explanatory value. With your permission, I'd like to extend it a bit. What would be the first thing you would do if you had a roller coaster that killed all its riders if anyone ever died on it? You'd have the owner of the roller coaster take the first ride with his family.

Mort Dubois writes:

Taleb is an interesting guy, and his earlier theories (The Black Swan and Fooled by Randomness) have a great deal of value. I started to read Antifragile and found his assertion that the best way to survive Black Swans is to learn to master small levels of ongoing trauma highly suspect. His skin-in-the-game theory is also weak. I'd be very interested in Taleb's explanation of this incident:

http://www.youtube.com/watch?v=4DzcOCyHDqc

If you don't want to click, this is the scene from the first Indiana Jones movie where Indy encounters a highly skilled swordsman in a crowded market. For the swordsman, Indiana Jones is a black swan. Even though the swordsman has subjected himself to years of rigorous training, he has an unexpected encounter with a challenge that is simply beyond his capabilities. Bad luck, indeed. And he's not as antifragile as he needs to be. Also interesting is the reaction of the crowd. How does this event change their life? Hard to say. And for Indiana Jones, what did he learn?

What bothered me about Antifragile, and this interview, is that Taleb implies that the survivors of Black Swans have something to teach us about preparation. That's possible, but sometimes they were just lucky, in that the scale of the Black Swan didn't overwhelm them.

Eric writes:

I heard of one of the best examples of skin in the game at the Army paratrooper jump school. The parachute riggers are all jump qualified. At any time, an inspector can come along and have them jump with one of the parachutes that they themselves had packed.

deusaquilus writes:

About three minutes into the conversation I had an epiphany: you can solve the Prisoner's Dilemma with the Categorical Imperative, even from a purely rationalist perspective, and even in a non-repeated game!

Given that the normal logic of the Prisoner's Dilemma follows a model M of "defect is my better individual choice in both eventualities," you can formulate the following argument:

- If both myself and the other player(s) think by this logic model M, we will both fail to achieve Pareto optimality.
- Conversely, if we reject this model of reasoning, then Pareto optimality will be achieved.
- Therefore it follows that, as perfectly rational beings, we should do the latter.

Said more simply, the prisoners could both say to themselves: "I know it's better for me to rat out my buddy whether he stays quiet or rats out on me... but if we both thought about it like that we would both be screwed, so let's just not use that kind of thinking."

The tough part, however, is that seeing it that way is incredibly difficult when faced with fear, which strongly 'encourages' someone to think in a selfish way, e.g. "How do *I* maximize *my* chances of surviving this situation?" Therefore, in order to overcome this intrinsic fear, as human beings we need to adopt a moral code that follows the principle of Hillel or its alternatives (e.g. the Categorical Imperative) that can give us the emotional fortitude not to sink into selfish reasoning.
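The dominance logic of model M can be made concrete with a toy payoff table. This is a minimal sketch; the sentence lengths below are illustrative assumptions, not figures from the podcast:

```python
# One-shot Prisoner's Dilemma; payoffs are years in jail (lower is better).
# Keys are (my action, other's action); values are (my sentence, other's sentence).
payoffs = {
    ("quiet", "quiet"): (1, 1),        # mutual cooperation
    ("quiet", "confess"): (25, 0),
    ("confess", "quiet"): (0, 25),
    ("confess", "confess"): (10, 10),  # mutual defection
}

def best_response(other_action):
    # Model M: pick the action minimizing my own sentence,
    # holding the other player's action fixed.
    return min(("quiet", "confess"), key=lambda a: payoffs[(a, other_action)][0])

# "Confess" is the best response to either action the other player takes...
assert best_response("quiet") == "confess"
assert best_response("confess") == "confess"
# ...yet mutual confession (10, 10) is Pareto-dominated by mutual silence (1, 1).
```

Both players following model M land on (10, 10), which is exactly the outcome the comment's rejection of model M is meant to avoid.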

Jay writes:

I am a big fan of Taleb and I have read all his books especially Fooled By Randomness many many times.

What I am seeing is that this paper contradicts Taleb's older writings.

In Fooled By Randomness he had two excellent case studies about two characters -
"John High Yield Trader"
"Carlos the emerging markets wizard"

Both these characters had enormous skin in the game and still lost their entire fortunes, as well as their investors' money, because of over-reliance on quantitative (John) and economic (Carlos) models.

So a question to Taleb: what has changed in the last 10 years?

Is Taleb under enormous pressure to come up with nice-sounding nonsensical papers nowadays, since he is calling himself a philosopher?

You can go back through history and see people who had lots of skin in the game and still did foolish things and lost it all because of greed, overconfidence, and too much reliance on their flawed methods.
This was Taleb's central argument in Fooled by Randomness.


E.g., Victor Niederhoffer and the LTCM principals.

Nassim writes:

Jay, complementary, not contradicting. I answered that skin is necessary, not sufficient. Fools of Randomness + Crooks of Randomness. See note or comment in Taleb 2004, JBF.

Cody writes:

I didn't read all the comments, so this may have already been mentioned, but there's an inherent problem with Taleb's concept of "skin in the game" for money managers.

Money managers can't have enough skin in the game. Even if 90% of their wealth is wrapped up in the investment vehicle they're using, if they blow up they can nearly always raise another round of capital. They make their money from the leverage in the money they've raised, and as long as they have a good story to tell about why the failure wasn't their fault, they can raise more money. Only "shame," as another commenter mentioned above, is a strong enough motivator (because of the threat of lost future work) to stop managers from blowing up their funds in exchange for significant growth.

Blue Aurora writes:

Nice interview with Nassim Nicholas Taleb. He makes very good points on agency problems.

Out of curiosity Professor Russ Roberts, would you be interested in interviewing anyone with an interest in decision theory?

Steve Sedio writes:

deusaquilus,

Two points that amplify your comments.

1. Zero years in jail vs. 10 years (they can't bust us unless we confess) is much different from 10 years vs. 25 years (one vs. both confessing).

2. When the police say your buddy turned on you, your buddy didn't. They wouldn't have told you; they would be arresting you.

As far as skin in the game goes, it isn't enough, as Jay pointed out. The "this time it is different" mindset kicks in, leading people to think they are geniuses, that they finally have the right model, finally have enough information, whatever rationale is needed to convince themselves they are invincible, and to convince their investors they are invincible.

In one of the earlier podcasts, Russ made the comment that people will invest with the more expensive hedge fund manager because they must be worth it; otherwise, how could they charge so much (you get what you pay for)? That's additional feedback convincing the manager of his genius.

Here economics and biology converge. A flu virus needs population density to evolve, to become deadly, be it in pigs, birds, or people. Eliminate the density and you remove the positive feedback that selects for deadliness.

Creative destruction in low density populations, spread over a million different entrepreneurs, creates a background noise of change with new companies hiring the employees from failing companies.

Creative destruction in the target-rich environment of "too big to fail" destroys far more than it creates.

In the financial system, as in government, the benefits are concentrated (high pay and bonuses to the few "investing" billions of other people's money) and the costs are distributed (the fee is only 2%, and they are making me 10%). Until they don't.

I see the problem as our mindset that you get what you pay for. Will a hedge fund manager making a fixed salary of $80K a year be as arrogant as one making $10B a year on commissions and bonuses? Which is more likely to put investors' money at higher risk?

Why do we invest our money with people motivated the same way used car salesmen are?

Until human nature changes, I am using Nassim's Black Swan investment strategy: 80% safe, 20% hugely risky. The Black Swan gains are huge; the losses are limited by my small risky allocation.

thealaskanred writes:

A great talk by Nassim and Russ. Thank you.

The discussion about optimal skin in the game using the example of an architect building a house got me thinking about the petrochemical industry.

I understand that certain Latin American countries hold the engineers of new pipeline modifications or construction liable for failures or injuries. However, maintenance of existing facilities, where the tie to the original designer is lost, can continue without personal repercussions. This leads to a case where capital is directed toward keeping inefficient or poorly designed systems working, increasing the chance of problems.

I would be interested to hear whether differences in skin-in-the-game account for some of the infrastructure differences between countries.

Ramakrishna Shanker writes:

Jed, thanks for your thoughtful comment. The relationship between VC and entrepreneur is not based on the game-theory, skin-in-the-game style of cynical thinking. If it were, the whole system would have ceased to exist many decades ago.
That's my point. There is mutual trust here. The VC trusts that the entrepreneur is genuinely interested in building a company and not just taking out his "management fees." And the VCs are typically rich guys who get a genuine thrill out of being involved in new technologies.
The situation is quite different in the hedge fund world, which I do think is cynical, with many HF managers simply clipping their management-fee coupons and their clients (state pension fund managers) not caring as long as they get box seats at the Super Bowl.

Ramakrishna Shanker writes:

I think that many of the most creative and energetic folks are young, with not much "skin" to offer.
If society takes a biblical stance, these young, creative, vulnerable folks will never do anything. If you don't believe that, just look at the best and brightest young people in Asia. Do they tinker in a garage with their pals, risking familial shame and, heaven forbid, failure? No; much better to join one of the large state-sponsored industrial organizations and work your way up the corporate ladder.
The issue is that groundbreaking innovation typically comes with large tail risks, but the people most likely to embark on such innovations hardly have the capacity to bear those risks. So society has a choice to make: do we want these young explorers or not? If you impose the tail risks on them, they will vanish and work for the government instead!

Ramakrishna Shanker writes:

The financial princelings have no skin in the game and are above the law.
In most of the rest of the economy, people, if anything, have too much skin in the game.
Which is perverse, because finance is one area where "innovation" is not needed. More often than not, such "innovation" consists of varieties of Ponzi schemes, front-running, and other unsavory schemes to skim money off the unwary.
On the other hand, innovation is badly needed in much of the rest of the economy.

Andrew McDowell writes:

I don't think skin in the game is effective, because I don't think everybody respects low-probability risks enough, or even far-off high-probability risks. Smokers are one example.

Closer to the article is "The Smartest Guys in the Room," the title of a book about the collapse of Enron. The key figures there were not stuffing their gains into balanced portfolios of safe securities; their personal finances showed the same chaotic risk-taking as their management of Enron. And yet they won for a time. We need enforced regulation to stop competition between firms from selecting as winners those who take stupid risks and then get lucky.

Another example of deceptive risk-taking is the seduction of "double or quits" martingales in various disguises. I suspect that many of those who come to grief over this have previously "learned" that, as long as they had the guts to repeatedly double their bets after losses, their courage would be rewarded with a win.
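The trap in "double or quits" is how fast the required bankroll grows. A minimal sketch, assuming an initial one-unit stake that doubles after each loss:

```python
def bankroll_needed(losing_streak):
    # Stakes double after each loss starting from 1 unit, so the cumulative
    # amount lost after n straight losses is 1 + 2 + 4 + ... + 2**(n-1) = 2**n - 1.
    return 2 ** losing_streak - 1

# The next winning bet recovers all of this plus a single unit of profit,
# but the bankroll required grows exponentially with the streak length:
assert bankroll_needed(10) == 1023      # ten straight losses cost 1,023 units
assert bankroll_needed(20) == 1048575   # twenty cost over a million, to win back 1
```

With any finite bankroll or table limit, a long enough losing streak eventually arrives and wipes out all the small accumulated wins, which is why the strategy "usually" works right up until it doesn't.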

Gandydancer writes:

@Mort Dubois: Not a good idea to use a movie as an example of something Taleb needs to explain. Now, if the scene had taken place in "A Connecticut Yankee in King Arthur's Court" the swordsman would have experienced a Black Swan. In "Indiana Jones" he was just being stupid for the sake of the gag.

@rhhardin, etc.: Taleb has a number of problems with his examples. Petraeus didn't know when he went to West Point that he wasn't going to get into close combat before being promoted past the point at which that happens, and Bush didn't know when he volunteered that his plane was never going to be shot at, so the assertion that not having been in combat newly undermines current leaders' skin in the game is just wrong, as well as making little sense: what does past skin risk have to do with current risk? And the idea that the middle-aged politician Julius Caesar led his legions from the front ranks is... silly.

@Michael: Taleb nowhere suggests that skin in the game is enough, and in any case Jimmy Cayne and Dick Fuld didn't simply have their personal wealth tied up in Bear and Lehman; they had acquired an interest in an enterprise that would continue to reward them magnificently right up to the point the music stopped and they lost what was in the pipeline. Even if they knew the crash was coming, it wasn't clear at what point it would make sense personally for them to bail. Suppose they'd realized right at the start that what they were doing would kill their companies... if they'd stopped then, they wouldn't have been promoted so high or ended up so rich.

Robert Promm writes:

It occurs to me that investment bank "skin in the game" became non-existent when they were allowed to restructure from partnerships to corporations. Management is far less hurt when a corporation fails than a partner is hurt when a partnership fails. Definitive recent example -- the failure of Arthur Andersen.

G8rDave writes:

My one small beef with the recent interview with Nassim Taleb has to do with a couple of statements that I could not help but disagree with and that were allowed to pass unchallenged. Of course, neither point has to do with an economic issue, on which ground I would be as mute as a tomb.

Nassim characterized recent events in which leaders of combat troops, or commanders who direct the actions of armed forces, in his case President GW Bush and Gen. David Petraeus, were non-combat veterans and were thus unsuited to give commands to those who were asked to bear a risk which they themselves had not borne. In the case of President Bush, he stated that his influence caused him to avoid combat. In Petraeus' case, he said he was surprised to find such a distinguished career had not placed him in a lower-level combat role. Of course, any soldier who looks at Petraeus's chest could see that while he wore the Expert Infantryman Badge (EIB), he did NOT wear the best indicator of combat status for an infantry soldier, which is the Combat Infantryman's Badge (CIB). With a USMA grad date of 1974, it is easy to see how this would be the case, as opportunities for combat during his formative years as platoon leader or company commander would have come only under the circumstances of the action in Grenada (Opn Urgent Fury) or the intervention in Panama (Opn Just Cause). Very few young men served in those theaters in such a way as to earn a CIB, which is HIGHLY coveted by any true infantryman, and the scrutiny applied to ensure there was no gratuitous awarding of that most desired decoration was intense. While I never served under David, I was growing up at West Point at the time he was a cadet and I know his father-in-law well, which I offer only to establish that I am not beholden or loyal to him in any way, except as a fellow soldier. (On a personal note, I come from a long line of Military Academy graduates and we find his personal moral failures as a graduate of USMA, as a man and a husband, to be utterly deplorable and without excuse.) He missed the opportunity to serve in combat until he was at a level of command that placed him too far from the action to qualify for combat decorations. His assignments, in the light infantry and paratrooper units where he served, would have been very likely to put him in harm's way had a conflict called for their deployment.

It is interesting to note that another distinguished leader of our military forces similarly missed out on opportunities to serve in combat as a junior officer; however, he demonstrated sufficient skill when called upon in a larger role later in his career. Surely Mr Taleb would not begrudge Dwight D Eisenhower his skillful leadership in World War 2, and later as President and Commander-in-Chief, nor remove him from consideration as a commander, merely because he could not obtain an assignment to the fighting in Europe in World War I. The notion that soldiers will only accept leadership from men who have been bloodied earlier in their career is foolish. I led men as a brand new officer, and my lack of combat experience could NOT have been considered in their decision to follow my orders when given in battle. I may be one of the biggest detractors of his decisions in office, but I greatly admired President Carter for his service in the highly demanding submarine service while he was in the US Navy. Should he have recused himself as commander of the forces he sent into battle in the Iranian hostage raid? That certainly was an ill-advised plan, but his lack of combat experience was not a factor, nor should it have been. If I were being considered for a role as a chief executive with armed forces under my command, my lack of service in combat could not be allowed to disqualify me from decisions involving their deployment.

Similarly, most of us soldiers look at the job President Bush had when he served, i.e. as a pilot of a combat aircraft, and no mere fighter but an F104 Starfighter (known as the Widowmaker, BTW, because of its highly demanding flight characteristics; the F104 tended to kill pilots who were not attentive at all times to their job). The F104 was an air-superiority aircraft and was ill-suited to the type of aerial mission over Viet Nam, which demanded larger, more durable, multi-purpose fighters (F4s, F100s, F105s). As a pilot, whether in combat or not, it took skill and courage to fly a fighter of that sort, and had it been suitable for combat over Viet Nam, I have no doubt President Bush would have served capably. For this reason, you heard few veterans disparage his record when he served. Many, many of us were prepared to go, trained to go, put ourselves in harm's way right where we were, and simply never had the opportunity to get shot at. We heard the criticism leveled at President Bush, mainly by academicians and pundits who never even did as much as the lowliest cook in uniform, and felt revulsion at the commenters for their cowardice. I strongly condemn Gen. Petraeus' indiscretion and violation of his commitment to the cause of duty/honor/country, but will not accept his rejection as a commanding general because he didn't fire at the enemy as a lieutenant.

Mr. Taleb noted at one point in the podcast, “What’s said by economic professors in academia should remain in academia.” When it comes to his evaluating fitness of men for command in wartime, I would agree and ask him to stay out of the discussion.

I very much enjoy the discussion here and Nassim's participation in it!

Dan Hanson writes:

Just a quick and nitpicking correction of George W. Bush's record: He flew F-102's, not F-104's. The F-102 was a delta-winged interceptor known as the "Delta Dagger". Like the F-104, it was a dangerous airplane to fly - an F-102 pilot in the Air National Guard in Texas was more likely to be killed on duty than was a soldier in Vietnam.

In fact, of the 14 F-102's lost in Vietnam, only three were lost in the air due to enemy fire. The majority of F-102 losses in Vietnam were simple accidents caused by non-combat problems like engine failures or landing accidents - the same things that could kill you at home.

In addition, Bush's unit was on the rotation list for Vietnam service, so joining it was no guarantee of staying out of the war. Not only that, but Bush specifically volunteered for Vietnam service and was turned down because the F-102 was being removed from the theater in favor of newer, better planes.

Much as I dislike many of Bush's policies as president, he was terribly slandered over his military service record.

Adam writes:

Having skin in the game is not a new concept- anybody who has ever been a salesman, for example, should buy and own their product before earning the credibility to sell to perfect strangers.
In finance and in the markets, anybody who has been a trader with their own money also knows the concept- you can come up with a great idea or strategy but unless you put your own money at risk in that idea there is no skin in the game.
What I do find interesting is that, by implication, when bankernistas and traders do not put their own money at risk, the unsuspecting taxpayer has skin in the game instead! If a bank trader, trading the firm's money, has the upside of a nice big bonus and the downside of merely getting sacked, he will take on augmented risks: if he is right there will be a nice (albeit discretionary) bonus; if he is wrong, he will simply find a new job elsewhere.
In fact, and I'm not sure it is still the case, but many hedge funds, trading firms, or bank desks, will only hire an experienced trader if he or she has 'blown up' at least once... strange..

Miles S. Mullin, II (@msmullin) writes:

Tort is skin in the game in a modern society that embraces the rule of law. The arguments presented here are precisely why tort reform is not a good idea.

G8rDave writes:

Thanks to Dan Hanson for picking that nit re GWB's aircraft. The beauty of open forums is the ability to correct and address in real time. I had been under the impression for years that GWB flew F104s and respected him for it. My respect is undiminished even by the knowledge that he didn't fly "Widowmakers."

It's possibly coals to Newcastle, but I would add the name of Abraham Lincoln to my list above of effective wartime executives who did not face combat as junior officers/soldiers. His peacetime record in the militia was particularly undistinguished, in his own and in others' assessment.

Jay Nair writes:

I thought this discussion did not address the negative effects of SITG that we are seeing in corporations across the world. The idea of giving managers stock options in their own companies was to give them SITG. But this has resulted in their focusing too much on short-term results so that they can increase their own take. I also noticed that Taleb and Russ were focusing on tail risks (in this case, stocks plunging in future years, perhaps after the managers themselves have left the company). If such a plan were implemented, I could see the outgoing CEO making sure that he selects someone amenable to his dictates, to ensure his skin is not burned! So won't this result in modified behavior, thus nullifying the effect of what you wanted to achieve?
