Russ Roberts

David Rose on the Moral Foundations of Economic Behavior

EconTalk Episode with David Rose
Hosted by Russ Roberts

David Rose of the University of Missouri, St. Louis and the author of The Moral Foundation of Economic Behavior talks with EconTalk host Russ Roberts about the book and the role morality plays in prosperity. Rose argues that morality plays a crucial role in prosperity and economic development. Knowing that the people you trade with have a principled aversion to exploiting opportunities for cheating in dealing with others allows economic actors to trust one another. That in turn allows for the widespread specialization and interaction through markets with strangers that creates prosperity. In this conversation, Rose explores the nature of the principles that work best to engender trust. The conversation closes with a discussion of the current trend in morality in America and the implications for trust and prosperity.


Readings and Links related to this podcast

Highlights

0:36 Russ: Intro. [Recording date: January 11, 2012.] Ideas in new book; you frame the book around an interesting thought experiment to help us understand the nature of prosperity. What's that thought experiment? If a society's sole objective was to maximize general prosperity and it could choose the moral beliefs of the people that comprise it, what kind of moral beliefs would it pick? What would they look like? What kind of characteristics would they have? Guest: The reason for doing that is I had become disenchanted with the progress that we have been making as a profession on what's commonly now known as the Development Puzzle. Russ: Which is? Guest: Basically, economics did really well through the 19th century, the beginning of the 20th century, working out the essential logic of the price system. And that was a huge triumph, a great gift to mankind. And I think we basically got that right. But as Thomas Kuhn has pointed out, when you have a new paradigm, you always say that things are great; you start to answer a lot of questions; but over time the usefulness of the paradigm starts to peter out. And that happened with the neoclassical paradigm. So, what then happened? Well, in the 20th century, institutionalism was resurrected--it was already there to some extent--to fill in the gaps. The basic insight there was that while there's nothing wrong with neoclassical economics and our understanding of markets per se, we have to recognize that markets exist in a context; that they rest on an institutional foundation, as it were. And once we did that, a whole bunch of puzzles became solvable. We were able to make some real progress, including but not limited to Development Economics. We certainly made a lot of theoretical progress. That kind of work resulted in Nobel Prizes for people like Ronald Coase, Oliver Williamson, Doug North. I would argue, though, that that has begun to lose steam.
We have found that when you drop institutions into less developed countries, very often they either do nothing or they are subverted and co-opted and become vehicles of opportunism themselves. So, something else must be missing from the story. Barry Weingast, who is a political scientist at Stanford, has a great way of putting the problem. He said if you needed a copy of the U.S. Constitution, you could always go to South America, because there's a ton of photocopies of it floating around in the form of their Constitutions. Yet you don't get a United States down there. And you can't make the standard argument that they don't have the requisite conditions, because as recently as the late 19th century, Argentina had higher per capita income than we did. So, they have all the stuff that they need, and they even have, superficially, a Constitution, and so on and so forth. So, much of the institutional apparatus is there. Or apparently there. And yet they don't get what we get. Because apparently--they have a court, but maybe it's not quite like ours. They have laws, but legislation doesn't quite work in terms of how it's enforced. Etc. So, there is a puzzle, still, which is fundamentally we don't fully understand why some countries do much better than others. Russ: Right. And you are trying to fill that gap. Guest: That's what got me interested in this area in a broad kind of way.
4:59 Russ: So, this thought experiment is to think about what role moral values might play in helping to create prosperity, and you focus on the issue of trust in dealing with strangers in large group situations because that's necessary for specialization. Is that correct? Guest: It is. The way I approach the whole thing is to say: Look, if we are trying to figure out what kind of moral beliefs would do the best job supporting the development and operation of a market system, the first thing we have to do is figure out what exactly needs to be going on to have a well-functioning market system. That stuff's all well known. Basically, Adam Smith is right about this. The issue of distribution is important, but not nearly as important as the issue of having enough stuff to divide up in the first place. It really comes down to specialization. Societies that are able to effectuate dramatic specialization through very large scale production are those that are going to have levels of productivity that are many orders of magnitude greater than other societies. And we've known this for a long time, although it's surprising how few younger economists are really aware of how dramatically different the level of productivity is when you allow specialization. Many economists don't even have the pin factory example memorized, for example--which I require my Principles students to do, because it is such a shocking increase in productivity. But be that as it may, the question then is: That's what it takes for it to work, but does specialization present any kind of problems? Obviously, if it were really easy to effectuate tremendous gains from specialization, everybody would do it. But not everybody does. What's the problem?
Well, when you have dramatic specialization to increase productivity like that, you are going to invite a problem of localized knowledge that is quite similar to the local knowledge problem Hayek addressed across the whole of society. As you know, Hayek argued that the price system solves a problem, and the problem it solves is reconciling the localization of knowledge. Because we have a price system, we don't have to know what each other is doing or why. All we have to do is pay the market price, and as a result, we'll pay the full social opportunity cost of using that resource. And that effectuates efficient coordination across the whole of society, even though we don't have to know that much about each other--because everything we need to know is already embodied in that price. That was a fabulous argument. But I would argue that when you look inside firms, which is where all the stuff gets created in the first place, we have a similar kind of local knowledge problem. The larger a firm is and the more complex its production is, the more likely it is that there are people who know things that nobody else knows. Or even can know. And as a result, if people in that situation are not able to take full advantage of that knowledge, we are just throwing away a tremendous amount of efficiency, much like we would be if we didn't have market prices across all of society. Russ: The problem within the firm: there was a big fad in business schools in the 1990s--I don't know if that fad is still going--in management and the business literature, about the capital stock of knowledge within a firm: that there was a lot of the specialized and localized knowledge you are talking about embodied in the individual workers.
But those workers would come and go, so how do you preserve the knowledge that the firm has at any point in time, and use it more efficiently, despite the reality of turnover? I don't know if they made much progress with that; there is one move toward using prices within a firm, and I don't think that's been terribly successful. But it's certainly true that at any point in time within any large organization, whether it's a business or a non-profit, there is an immense amount of specialized, sometimes localized, knowledge that isn't written down anywhere. It's just embodied in the heads of the people who happen to be employees at the time. And how you get that to be used effectively is a major problem for any successful organization. Guest: Right. And that's kind of a stock version of the problem, and that's certainly correct. But the problem is every bit as daunting in a flow sense, which is what Hayek would have emphasized. That is, things are changing constantly; the problems of today are different than yesterday's. And they just come at you constantly. And the person who is in the best position to answer those questions is the one who has a great deal of localized knowledge regarding how a particular area of the firm works. What I introduce in the book is a form of opportunism that has never really been codified in the past. It's what I call third-degree opportunism. And that's opportunism of the following form: there's an action set, and other people in the firm--the firm owners or CEOs or whatever--may know a proper subset of that action set; but a person who is on the ground, as it were--I like to say a middle-level manager--knows a much larger set of actions than that subset.
And if an action that is profitable but not the most profitable is known to the person with local knowledge but not to the others, and the person who possesses the local knowledge knows this, he might pick an action that is good enough but not the best. And that would not be consistent with maximizing profits, and not be consistent with efficiency. And this is a very daunting problem. I call it third-degree opportunism. It's a very daunting problem because it gets worse the bigger firms are. Because the bigger firms are, the more specialized the knowledge, and by definition the more likely you've got a situation in which an individual has an informational advantage over those he would have to answer to or coordinate with. Russ: You are not talking here about these other phenomena, which I'm going to mention, such as shirking, where obviously sometimes an employee can work less hard than his boss might know about and enjoy some leisure on the job. That's not what you are talking about. You are talking about a very specific kind of opportunism, right? Guest: Exactly. I'm talking about a form of opportunism that cuts to the very heart of whether a firm is run in entrepreneurial fashion or in bureaucratic fashion. This is a fundamental tradeoff, because if you aren't able to delegate managerial responsibilities via what we call relational contracts--in other words, contracts that are flexible enough to give discretion to those who possess local knowledge--if we can't do that, then we are throwing away all of the efficiency gains, what I would call Hayekian gains, that would come from fully exploiting localized knowledge. However, relational contracts that would confer that kind of discretion would by definition open themselves up to opportunistic exploitation that constitutes what Bob Frank would call a golden opportunity.
The reason why is by definition if nobody else can know what the optimal action is then there is no way you can be in a sense caught cheating because no one else knows what the counterfactual could have been. Only you know that. So, this is kind of an inescapable problem associated with the efficient use of local knowledge within firms. And I think it's a very deep problem; it's a very fundamental problem; and it cuts right to the heart of production and to the heart of the difference between a bureaucratically run firm and an entrepreneurial firm.
13:35 Russ: But it goes beyond that. There are so many transactions that--and you talk about this in the book--you and I are going to make a deal; there's going to be a contract between us. Not a handshake deal; there is a contract. But it's impossible for the contract to specify all the possible conditions, including conditions where I might do something on your behalf without your even knowing it's possible. Because you don't have that localized knowledge. And I always think, when I think about these kinds of problems, about selling or buying a house, where we have this unbelievably important asset being exchanged for money, and we have this unbelievable set of paraphernalia and bells and whistles surrounding it--title, and page after page of contractual agreement on all sides, about what we are going to do on each other's behalf. But despite all of that, we leave much of it unspecified, because it's too costly to specify everything; and more importantly, we can't anticipate everything that could happen. And so inherently, at some point, there is either trust or there is recourse to legal action; and of course legal action is really unpleasant. So, obviously, the more trust that's involved the better, because we avoid the complexities of legal action and all of its costs. But you have to trust the other person to a certain extent. And how do you generate that trust? Especially in this situation, which is the one I want to focus on because it's the center of the book for the rest of the way out, which is: One of the parties knows something nobody else knows, and knows that by either taking an action or not taking an action, good or bad will occur. How do you get that person to do the right thing?
And if you can do that--if you can get a world where people do the right thing even when they are not observed or monitored--you can really exploit these potentials for specialization, trade, and exchange; and you won't be able to exploit them if that trust isn't there. Is that a good summary? Guest: I agree with that totally. I think that's basically correct. My particular approach doesn't really view it as what do we do to make that happen, although I have ideas, of course. I basically am working backwards. Russ: What's necessary to be true. Fair enough. Let's turn to that. Obviously there are many other ways you can check opportunism generally. There are repeated dealings, there's reputation, there are police, rules, monitoring of various kinds. But we're going to focus on the problem that is most difficult to monitor or observe; that's really important to keep in the back of your mind as you are listening to this. Because obviously markets and societies find ways to deal with many of the problems associated with opportunism. This particular kind is special. Guest: It's special, but it is more frequent and more fundamentally important than one might suspect when one first thinks about these things. Part of the reason why is that most of the cost is unobserved. Most of the cost takes the form of economic organizations that don't exist, or institutions that don't exist. So, I would argue that the preoccupation with incentive-compatibility mechanisms is the result of a kind of survival bias. In other words, you study what's there to see, and for most of human history, what we have observed are institutions that exist to solve the kinds of problems, like shirking and so forth, that are pretty frequent--precisely because being able to trust people you don't know is something that has been extremely rare throughout human history. It's rare even today, but if you go back 500 years or so, I would say it was essentially nonexistent.
Nobody had the kind of moral beliefs that would be required to get you to a condition of genuine and generalized trust at the same time. Russ: So, something has changed and part of your argument is going to be, although you don't deal with this in depth in the book: that change helped facilitate the explosion in our standard of living. Guest: That's right, and actually, I'm writing another book now that deals exactly with that issue. That's a huge issue all by itself.
18:31 Russ: Let's go back to the moral issue now, which is: What's necessary to create behavior on the part of individuals to turn down, reject, and resist the chances to be opportunistic when nobody is watching? What do we need? There are a couple of things that you need. Guest: Number one, the person's predilection to be trustworthy cannot be merely an exercise in incentive compatibility. Which is what most economists want to do. They want to model trust behavior and trustworthiness as an exercise in incentive compatibility. Russ: Explain what you mean by incentive compatibility. Guest: It's the idea that it's an exercise in enlightened self-interest, because it's in your own best interest to behave in a trustworthy manner. The most common example is to say: Markets breed honesty and honesty breeds markets. Suppose you've got a guy and he's a car mechanic. If he behaves in an untrustworthy way, it gets back to the customers; he has less business. If he behaves in a trustworthy way, he gets rewarded for that by virtue of having more business. And so on and so forth. So, that's an example of the kind of argument that most economists like to make about trust. Which is: It's no big deal, it's easy to explain. It's in your own best interest to be trustworthy anyway. That's all well and good, but the problem is that if that's all there is to trust, then trust is going to fall down exactly where the word is most meaningful. This is such an empty approach that Toshio Yamagishi, who is a pretty famous social capital theorist and sociologist in Japan, says this isn't even trust at all. We should call it assurance; that's all it is. Russ: I agree. I don't trust you. I just know you are going to act as if you were trustworthy. Not the same thing. Guest: And Oliver Williamson is very dismissive of a great deal of the trust literature; he would say that this is what he calls calculative trust, which is a contradiction in terms anyway.
So, you need something that works in a situation in which a genuine golden opportunity is possible. Russ: Explain that again. Guest: A golden opportunity is a situation in which the person who may or may not behave in an opportunistic way believes there is zero probability of being caught. In any way, shape, or form. They can do it and they can get away with it, perfectly. Russ: And this terminology comes from Robert Frank. Guest: Yes, Bob Frank first introduced that phrase, I believe in 1988, in the book Passions Within Reason. That's the first place I ever saw it. You've got to be able to deal with that. And so Frank's argument--and I think he was absolutely right, although he was kind of dismissed at the time--was that the only way to bust out of that is for trustworthiness to be based on moral taste. If it's in any way an exercise in rational behavior, it's not going to work for a golden opportunity. So, the thing that's producing the trustworthiness has to be in a sense pre-rational, antecedent to the rational calculation problem. So, he said it had to be moral taste. It was a heretical thing to say when he said it, and people have largely dismissed it. And I think that's been a huge mistake. Russ: They dismissed it because economists generally don't like arguments based on taste. They prefer arguments based on prices, incentives, institutions--as we talked about. But this is basically saying you'd better have a taste for being good. Or for not doing a bad thing. It had better be part of your makeup, to solve that. And that is an unappealing argument methodologically. It could be true--which is the problem--but it's unappealing methodologically, partly because you don't want to be in a position to say: Well, the way we'll make the world a better place is we'll get people to be better people. Most economists are uncomfortable with that kind of logic. But that doesn't mean it's not true. Guest: This economist is also uncomfortable with it.
I don't like arguments that are grounded in taste, but nature doesn't care what we like. The explanation just is what it is. If it is indeed the case that tastes carry the day, then it's incumbent upon us to move forward with that as our working theory. It turns out things are not quite as bad as people think, and we can circle back to this later when we talk about culture. But anyway, you were asking what we need. Well, first of all, it needs to be taste. That's where Bob left it. He just said it's got to be taste. I pushed the ball down the field by asking: if it's got to be taste, then what kind of moral taste? And then I worked through the thought experiment. First and foremost: if the reason you think something is wrong is the harm it does to other people--which is what I call harm-based moral restraint, and that is the foundation for why most of us are reluctant to be opportunists--then there is a problem. If the only reason you won't behave opportunistically is the harm that's done, then in a situation where you think nobody is going to be harmed by your opportunism, you'll still be opportunistic. And just think about it for a minute. That is not a big problem in a very small group society, where you live in hunter-gatherer bands or small tribes. The number of people involved is fairly small, so even if we don't get caught, we know that our actions might measurably harm someone that we care about; or maybe we don't care about him, but we don't want to feel like we hurt somebody. Russ: By the way, we should mention: guilt is a lot of what we are talking about here. Talk about that for a second. Guest: Guilt is the mechanism through which all of this works; and the question is how do you put guilt to work? You put guilt to work by having moral values that actuate it.
The point of my book is that moral values are important, but even more important is how they are structured, because otherwise you are not going to get guilt triggered in the right sort of way. Russ: And this point about small versus large groups I found very interesting, because basically what you are saying is that guilt is going to be triggered by empathy. When I realize that I'm harming someone, I'm going to feel bad about that, which I think is a universal truth. We may differ in how bad we feel about harming others, and differ dramatically in how we emotionally react knowing we've hurt someone; but the insight you have, which I really like, is: you might be wrong, but if you don't believe you are hurting anyone--either because you don't perceive it, or because the harm is so small, spread out across many people, as it would be in a large group--the guilt is going to be very small. And you give the example, which I thought was very good, of a false insurance claim. Explain how that would work. Guest: The basic idea is that usually, when we behave opportunistically in a small group, somebody gets hurt and we feel guilty about it. But the greater the number of people in the group over which the cost of that harm is divided, the more likely it is that there will not be a single human being who is harmed and whom we can therefore empathize with, and therefore sympathize with, and therefore feel guilty about having harmed. Russ: Or, if they are harmed, it's by such a small amount they might not even perceive it. Guest: At some point we don't even have to make that qualification. If I exaggerated my income tax deduction--if I got $1000 more back from the government than I otherwise would--there is not a single person on the planet who is harmed. There isn't. We don't even need to quibble. We are talking about way less than a penny per person in the United States. People can't even perceive that. It's not even there. Noise swamps it by orders of magnitude.
So, no one is harmed. And that's why many people who seem to be nice guys and seem like they would never do anything to hurt you or your family or anybody, very generous, good people, might cheat on their taxes. Russ: Or inflate their expense account at work. Guest: Exactly. And that's a fundamental problem. It's a problem everywhere, but it's an especially big problem in countries outside the West. Outside the West, if people feel like they are not hurting anybody, they really feel like they can just do whatever they want as long as they don't get caught. So, you are only left with incentives to combat opportunistic behavior. So, the point of that is that harm-based moral restraint is not enough to deal with the empathy problem; and the empathy problem is fundamental because it's a problem that gets worse the larger the group size is. And you are going to be an impoverished society if you can't sustain very large institutions, large markets, large firms. Bigness is the key. Smith is right, and getting big means that our hardwired sense of moral restraint is going to fall down on the job. Russ: Because that's a small group thing. Guest: Right. Because we are a small-group species.
28:42 Russ: Let me raise Immanuel Kant here for a second. The only thing I understand about Kant--which I think is an important thing, though--is the categorical imperative. In the categorical imperative he says that when trying to decide whether an action is the wrong thing to do, you should imagine if everyone did it, if it was common practice, rather than just you doing it. And that's his way to solve this problem, right? I always use the example: sampling all the fruit in the grocery store, or reading all the books in the bookstore while drinking coffee, which most people say: Well, that doesn't hurt anybody; it's no big deal. And to some extent that's true; but if everyone did that instead of buying the fruit or the books, there wouldn't be grocery stores or bookstores; and I consider those immoral acts. When I tell people that, they get mad at me. But I think that's correct. And that's one way to solve the problem. But you don't deal with that. Or do you? Guest: No, I do. In the book, after I completely work the whole position out, I compare the moral foundation to what other philosophers have had to say, and one of them is Kant. I think what Kant was doing was giving a rigorous voice to changes in moral beliefs that were already underway. In other words, I don't think he's somebody who brought about these changes. I think he's somebody who is simply echoing them. They were already in the culture, and he codified them and made them rigorous. I think that people who like Kant or know Kant are going to say that principled moral restraint, which is the thing that I'm going to say solves the empathy problem, makes a lot of sense to them. Principled moral restraint is the idea that I'm not going to take this particular negative moral action not because of the harm that it does, but because I believe it's wrong in and of itself. Russ: Even though it would benefit me.
Even though it's in my own self-interest. Set the guilt aside: it's in my financial interest to do this, and I'm not going to get caught, but it's morally wrong, so I'm not going to do it. Guest: Right. And many economists balk at this. Not to pick on Oliver Williamson, but he and I have argued about this over the years a great deal. And I would say to Oliver: Suppose you are at a 7-11, and there's only one person working and he has to duck into the bathroom. And suppose you knew the security camera wasn't working, so you knew with certainty you could steal a candy bar and get away with it. You are not going to steal that candy bar. I know you are not. You know you are not. And you know that I know that you are not. Russ: And it's not because: Well, maybe the camera really is working. That's not the reason. It's just that you don't think it's right. Guest: I think that's Oliver exhibiting principled moral restraint without realizing it. Russ: Well, I think a lot of economists are uncomfortable--it comes back to my methodological point. I think everyone accepts that as true. I think there are economic ways of looking at it. If the candy bar would save your child's life, even though it might be wrong, you might be more likely to steal it than if it would just appease your sugar cravings for a few minutes. I'm willing to accept the idea. Guest: Sure, but there's a qualitative difference between stealing the candy bar to save your child's life--saying, I know that stealing it is wrong but I don't care, I'm going to save my child's life--versus not believing it's even wrong in and of itself. Russ: I agree with you. Guest: There's an example that can tease that out. Russ: I'm just agreeing with you that people do act that way, they feel that way, they refuse opportunities because they think they are wrong; but economists may be uneasy about invoking that, for methodological reasons.
33:01 Russ: So, principled moral restraint is obviously, undeniably, a way to solve the opportunism problem; but you have more to say about it than that. Guest: Well, it's a necessary but not sufficient condition for solving the opportunism problem. It solves the empathy problem, but there's another problem. Russ: The empathy problem meaning that you might have trouble feeling that there are actual people being hurt. Guest: Right. Even if you solve the empathy problem, you have another problem, and that is that someone could feel guilty about undertaking a negative moral act--let's say an opportunistic act--and feel extremely guilty about it because they possess principled moral restraint; maybe somebody's hurt, maybe not, but that's beside the point in this case. They feel guilty about it, so they have principled moral restraint. There's no issue there. But they may also feel guilty about not being able to take a positive moral act that they could take only if the negative moral act were undertaken. This is what I call the greater good rationalization problem. And it is really a huge problem, because this is a device that many advocates use to rationalize their actions in ways that after a while we come to take as reasonable, but that not so long ago we would have viewed as patently wrong. Russ: So, give an example. Guest: In the United States today, the conversation begins far downstream of whether it's legitimate to take money from other people to solve some kind of social justice problem. If you go back to, say, 1870 and you say: I've got an idea--we should have the government take a bunch of money from these people and give it to those people, because these people have a lot and those people don't have very much--the vast majority of Americans in 1870 would have said: You can't do that. That would have been self-evident. Although it would be nice to do this nice thing, that would be an inappropriate use of government power.
It absolutely would not fly. But over time our sense of what's normal becomes a new normal--a popular phrase now--and those kinds of things are just taken as the way it is. Russ: And there are a whole bunch of rationalizations for why that's a good idea. But certainly you are right--there is only a small group of people who would view that as immoral. Guest: Now. Russ: Correct. Guest: But the greater good rationalization problem is a fundamental problem for trust, because if I believe that you might feel more guilty about not helping someone that you could help by cheating me--even though you like me--I still can't trust you. I have to believe that you are the kind of person who would say: Even though you could cheat me a little and help your nephew a lot, you don't do that sort of thing. You don't even think in those terms. That's not on your radar screen. I need to believe that about you to trust you completely--in other words, to genuinely trust you because I've reached a rational conclusion that you are genuinely trustworthy. Russ: Does that play out in the firm example? Guest: Yes. Russ: How would that play out in that example? I understand it where, let's say, you are really wealthy; I'm buying some property from you; I'm going to use the property for some good cause; and I might convince myself that it's okay to cheat you. And I accept that that's a problem. How would that play out in the firm? Guest: Suppose you are a middle manager in the firm, and you are working really hard, and you work really hard because you believe you are going to be well taken care of by the firm. You are going to get your just deserts. The firm is going to shoot straight with you. You can trust them completely. Things might not work out, because you might make a mistake; you might make the wrong call about this or that investment; but that's different. It's never because the firm might cheat you. Intentionally.
Or the firm might reach the following kind of conclusion: there's a reduction in demand for the product; they've got to let some people go; if they let you go, you'll probably be able to find another job somewhere; you are pretty talented. But there's this other guy who doesn't work real hard at all, not that gifted, certainly doesn't put the kind of time in that you have; but he has somebody in his family that has a pre-existing condition and if they cut him loose, it will be much more difficult for him to get health care. So, it would be nice to that person. So, in an effort to be nice to that other person, you end up being fired. Now, even if in some ultimate moral system that's the right outcome, it doesn't matter. Your willingness to do that as the firm's CEO, to me, affects my behavior and my willingness to trust you and therefore make firm-specific human capital investments in your firm because that could happen to me. That's a quick and dirty example. Russ: That's interesting, but is that third-degree opportunism? Is that undetectable? Guest: Well, it doesn't have to be undetectable. I'm saying this is another issue. There is a hierarchy to the argument; they don't all have to fit, all be in the same stream of subsets. You asked for an example; I gave you a quick and dirty one. Russ: So, let me just stick with that for a minute. Is the worry then that I as the employee--if I can't trust you, I'm not going to make the investments in myself in the firm that I would otherwise make? There's loss of output there. Guest: Yes. That is true; and that's what I said. But that's not the real point. The real point fundamentally is: I'm not able to trust because you are willing to reduce my welfare, knowingly, even though it's not the appropriate thing to do by the rules of the game; we spelt it out; because you think there's a greater good that can be achieved by doing this to me. 
Russ: So, the basic point of this, then, is that because of this greater good possibility, people may do things that may violate negative prohibitions, because of the greater good. And so what do you have to say about that? Guest: I certainly feel like you've engaged in a negative moral act because I've lived up to the terms of the contract. By any objective reading of the contract that spells out my employment relationship to you and the other employees' employment relationships to the CEO, I shouldn't be fired; he should. I got cheated.
40:23Russ: But you want to find a mechanism for reducing even that. Guest: My point is that you will not have a social norm of unconditional trustworthiness if you don't also deal with the greater good rationalization problem. And the solution to that problem, just as principled moral restraint solved the empathy problem, lexical primacy of the obedience of moral prohibitions over the obedience of moral exhortations solves the greater good problem. Russ: Explain that. Guest: There are two types of moral statements out there. There are those that exhort us to take positive moral actions, to do things that most people would say are good and right and proper. And then there are prohibitions against negative moral actions; we are being prohibited from doing things that harm or are inherently wrong. If moral beliefs don't just list a bunch of moral values but also impose a logical structure on those values such that the obedience of moral prohibitions comes first and foremost, and the obedience of moral exhortations only counts toward the value of a person's morality if and only if they've satisfied the obedience of moral prohibitions--in that case, that person will never trade a greater good kind of outcome against opportunism. So, you don't need to worry about becoming a victim of opportunism that they feel justified in undertaking because there is some kind of positive moral act they feel morally compelled to do--because in their moral belief system, that utilitarian comparison compels them to cheat you. You don't have to worry about that because they don't have that kind of moral belief system. They have a moral belief system that says: First, don't engage in opportunism. Russ: So, this is basically: the ends never justify the means. And the emphasis there is never. 
And if you know you are dealing with somebody like that, that would be really good, because you know they wouldn't exploit you, justifying it in their mind that there's something better coming. This would make politics very different, I would just say as an aside. Guest: Oh, absolutely. Maybe most people aren't that way, Russ, but you do know people who are that way. Russ: I do. I think it's good to be that way. Let me just make an aside on something we haven't talked about yet, which is the--and this is very Smithian--the role of self-deception. For me, once you open up the argument that maybe this is for the greater good, you start to go down a slippery slope of justifying what you are doing that's really for you but you'll tell yourself: It's not for me. Of course not. It was for my nephew. Guest: I discuss that explicitly in the book. Russ: I don't remember that part. Where does that come in? Guest: It would be in the chapter on duty-based moral restraint. Russ: Sorry I missed it. That's to me the danger. Many people would argue that under "modern morality" you take the greater good into account. That is the moral action. But my view is that's a slippery slope. Guest: Right. I have a section titled something like: When greater good rationalization becomes self-serving rationalization. And I give some examples of how easily it can happen. Very concrete examples. But I don't remember any of them off the top of my head.
44:07Russ: So, let's summarize here, because we've gotten into some interesting, complicated stuff. Let me try to see if I know where we're at, which is: If we could live in a world where we knew that everyone believed that the end never justifies the means, we would live in a world where we would know that the person on the other side of the transaction, whether it was within a firm or across firms with exchange, especially with strangers, that we could trust them. And that would allow us to engage in transactions we otherwise either couldn't engage in or could only engage in at great cost, because of the other ways we try to solve that problem. And you are saying there are some that could not be solved in any other way--those would be the golden opportunity ones. Guest: Right, but you are only half right. Russ: What am I missing? Guest: You have to have both principled moral restraint and the solution to the greater good rationalization problem at the same time to produce a condition of duty-based moral restraint, because--let's go to the greater good rationalization problem. Suppose by doing a particular thing that's a negative moral act, on paper, I can do this wonderful thing. That's the greater good rationalization problem. But if I don't possess principled moral restraint--even if I possess lexical primacy, so I'm not subject to greater good rationalization--that action that I would have to undertake in order to facilitate the positive moral act won't be regarded as a negative moral act if nobody gets harmed. Russ: Correct. Agreed. Guest: Both have to be in play in order to have duty-based moral restraint. And duty-based moral restraint is not even enough to give us the moral foundation. But I figured we would wander to the next stage.
46:14Russ: So, at this point--I didn't feel this way when I was reading the book--but at this point in the conversation, I'm starting to think: This is hopeless. This is too high a level of moral foundation to expect in our fellow citizens. Guest: Actually, that's something I talk about explicitly in the book. There's a new movement in our society; it's a cottage industry, in character and morals education of children. And I'm sure you've seen some of this in your own life, if you talk to people in public schools they'll have these character and moral education programs. And, what you'll find is people who teach in these programs create the impression in children's minds that moral dilemmas are everywhere. Everything is complex and hard and there's no such thing as black and white; everything is a shade of gray. Russ: You buy an apple and it's from New Zealand and you've killed the planet. You'd better study it; you'd better look at it; good luck. Guest: Right. And they make these arguments that people like Dave Rose and Russ Roberts are unsophisticated and incapable of sufficiently nuanced reasoning in order to be a truly moral person. My response to that is: Utter nonsense. The problem is, what they are doing is, they have an implicit theory of morality that's actually very, very old. It is just rank utilitarianism. It's a perfectly good approach to morality if you live in a very small group. It gives you the most efficient outcomes if you live in a small group. No doubt about it. And that's what you are hardwired to believe. And that's why they have such an easy time persuading people they are right, because you are trying to persuade people of a particular type of moral belief system that they are hardwired to already be ready to be receptive to. The problem is, all of these nuanced analyses and all of these exceptions and conundrums that they have have the result of bringing a knife to a gunfight. 
They are taking a very small group sense of morality and applying it to the modern world. What I would say: the moral foundation is a much more complex set of moral beliefs; but people don't have to know the theory behind the set of beliefs in order to abide by the beliefs. Let me give you a simple illustration that I did in the book about this. Suppose you had a guy who was a mechanic; and this mechanic had a set of tools. And the set of tools was like a little kid's craftsman's starter set, but he's working on a 2010 Honda Accord. There's going to be some real problems, because advanced engines are very complex and they require many specialized tools, and so on and so forth. As a result, the only way you are going to get that car repaired is if that mechanic is extremely smart, clever, and creative, and with bandaids and duct tape can somehow get these tools to get the job done. Because the tools are not up to the job. The tools are simple and therefore it requires a very brilliant and thoughtful mechanic to deal with the complex car. Instead, suppose you had a guy who was trying to repair the same car, same repair, but he has the full complement of tools that are provided by the factory--really advanced stuff, lasers, and you name it, the whole nine yards. It would be a mistake to infer that the car is not complex because it's so easily repaired by the appropriate tools. The fundamental problem is, and these people are making the argument--and you are making the argument--that the moral theory is too complex; what I would argue is actually what's going on is their theory is simple and it's applied to a complex situation. This theory is adequately nuanced itself to deal with the very society that it gave rise to, so as a result the actual execution of the theory is actually quite simple. In other words, the rules of thumb one needs to abide by in order to abide by the moral foundation are actually very simple. Russ: I agree with that. 
Guest: The fact that it's a demonstration that the theory works has nothing to do with the execution of it. Russ: This comes back to my philosophy professor, Dr. Smyth, who in trying to summarize pragmatism and the thought of Charles Peirce--and it's a very Hayekian insight--the way he summed up one aspect of it was: Your grandmother is right. Meaning your grandmother has a bunch of rules of thumb about right and wrong--don't do this, do this, do that--and if you ask her why or why not, she doesn't have an explanation. She just says: That's always the way it's been and that's the right thing. You are suggesting that if we live that way, or the fact that we have lived that way for a long time is part of the reason we are so successful as a culture. And as an economy. Guest: Yes, and that way, what is required to live that way, doesn't require twenty hours of schooling. It requires many years of continuous reinforcement in order to build the character to produce the moral conviction behind a belief, but the beliefs themselves are pretty simple. Don't do stuff, don't do negative moral actions. Just don't do them; and just because nobody gets hurt, that doesn't mean you can do it, either. Because it's not about the person who is getting hurt or not hurt; it's about you. If you steal, even though nobody gets hurt, you are still a thief. So don't do it. Period. Don't even consider it. Don't even run it up the flagpole. That's not that complicated. And then secondly, if somebody says to you that you should do something that you know is wrong but it's okay to do it because there's this other good thing over here that you can make happen if you do otherwise, you need to realize that that is the language of a charlatan, that that is inappropriate, that you are being sucked in. We don't do things like that. Russ: Some of us try to raise our children that way; some of us do not.
53:17Russ: Let's move away from the morality. Let's talk about the implications for growth, development, and our standard of living. If this is correct, and much of it seems correct to me, there are two implications. One is: Societies, cultures, that have successfully inculcated the view that stealing is just wrong, don't do it, you never want to perceive yourself as a thief--and that's either done through religion or other cultural means--those societies find it easier to specialize and grow. Societies that don't inculcate that or haven't--again there's no thing called society that tries to, but societies with individuals who have not adopted those beliefs are going to find it much more difficult to grow and be successful, because specialization and exchange in large groups is going to be much more difficult. Two questions. Number one: What's the evidence that this is true? It has an appealing ring of truth to it. Might there be some specific evidence that it's true? And the second question I would have is: It seems to me, and we've talked about this informally in the last few minutes, that there's been an erosion of that moral imperative in the United States at least over the last 30-40 years. Do you think that's true and do you see any signs that it might make a difference in how we behave towards each other? Guest: Well, as far as evidence, we do have empirical work on measured trust across the world, and measured levels of trust do co-vary well with economic performance and general quality of life in societies. That suggests that however it is they are able to achieve this trust, if they can, it does pay off. And so that doesn't clinch the argument, but it's certainly consistent with the kind of evidence that we would need to see. 
Russ: Aren't there people who have done experiments--this reminds me of these experiments where you take a wallet, you leave the wallet in the middle of the street, and in some cultures, you find a wallet that isn't yours, you stuff it in your pocket as quickly as you can and hope nobody is looking or notices and nobody says: Hey, what have you got there? You just take the wallet and you get home and take the money and dump the rest in the garbage. But there are other cultures, and we know this happens, where people find that wallet and they return it to a stranger with the money in it. Guest: And if a person was asked to come up with a list of societies where they think most people would act the latter way, they'd probably be right. Their preconceived notions are basically right. And most of those societies are well-developed and prosperous societies. But my point gets behind that point. My point is that in order to get to that condition, moral beliefs have to have a particular kind of structure. If they don't have that kind of structure, you won't have the unconditional trustworthiness and you therefore won't have an environment of trust. Because it will be unsustainable. People will not extend trust if they are continuously punished for doing so. If it's not rational to extend trust, you don't. Russ: Like a sucker, and after a while you'd rather not be a sucker. Guest: Right. Russ: The second question was: Do you sense an erosion in these attitudes in civilization, in Western society. And one thing you might talk about is: where do those views come from? Do they come from folk wisdom? Religion? Does it matter? And where are we headed. Guest: Robert Putnam has documented a pretty-much across-the-board reduction in measured levels of trust. He's focused on social capital, but he does measure trust directly. Eric Uslaner has also done this. From 1950 until the present, it's pretty grim. In the United States, the downward slope is clear. 
Measured levels of trust and trustworthiness are both going down through time. Russ: I was going to interject--I don't believe in the Great Stagnation, which we discussed with Tyler Cowen on this program, but this could be an underlying cause of that, if you believe that. It does raise the question: we've been a pretty successful economic society since 1950, so you have to explain despite that erosion why we've done so well with large scale specialization in organizations. Guest: Well, Adam Smith once said: There's a lot of ruin in a nation. Russ: True. Guest: Charles Murray has made an argument kind of similar to this, that in Scandinavia things are moving in the wrong direction. But they've built up a huge pile of cultural capital. They are going to have to make a lot of withdrawals from their cultural account before you get close to the margin. But in my view this reduction in measured trust does comport well with changes in moral beliefs in our country. I've detected it over the course of my own life. The kinds of things that people say now, or said five years ago, would have been laughable even when I was a kid. Let me give you a quick and dirty example. When Jesse Jackson was caught moving funds from the Rainbow Coalition, which he directed, to a woman that he had impregnated--and he was caught dead to rights, there was absolutely no way of defending the behavior, he got nailed--many people said: Well, but you've got to look at the whole picture and all the good he's done, give him a break. I really do believe that in 1950 if someone had said that publicly, they'd have just been laughed at by virtually everyone. Are you kidding me? The guy basically engaged in overt fraud.
1:00:09Russ: I don't know. It's an interesting thing. This is part of the challenge of this kind of research agenda; I don't say this as a criticism because I think what you are trying to do is extremely ambitious and it's very interesting. But there is still a remarkable amount of moral sanction and shaming and other activities for people who are fraudulent or who cheat. Just look at baseball. Very few people say--there's some variance on how people respond to the steroid issue, but so many people just say: Well, they cheated; it's wrong; end of story. Well, everyone else was doing it. Doesn't matter; it was wrong. Well, they didn't really enforce it. Doesn't matter; it was wrong. Guest: You are conflating two very different functions of the brain, though. You are conflating conviction born of a deep belief with a habit of mind. Are you familiar with Kohlberg's stages of development? Russ: I am. I don't like them. Guest: But many people who are just very mechanical about their moral beliefs really are behaving in a simple-minded kind of way. Cheating. But what if? It's just cheating. That's observationally equivalent between having somebody who abides by the moral foundation in a self-aware way and it's unyielding and has deep conviction to the possibility that a person is just--whatever the rule is and for whatever the reason, they accept it and they simply employ it. Let me give you a counterexample to what you are talking about. There have been numerous studies of high school students and college students cheating in college, and high school. This has been looked at over and over again. The amount of cheating has never been zero, of course, but it has gone up dramatically in the last 25 years. Moreover, in the past when you asked students why they cheated and they explained why they cheated, they almost never excused the cheating; they never downplayed the moral import of it. They would say it was wrong but they had to do it. 
Today, though, increasingly--I don't remember the proportion but it's a shockingly high proportion--most of them report cheating at least once; and a shockingly high proportion of those who report cheating at least once say: What's the big deal? In other words, they make an argument that is very consistent with the absence of principled moral restraint. Because their argument is: I cheated; so what? Nobody got hurt. I didn't take anything from anybody. Nobody's worse off. Teacher's not worse off; I'm certainly not worse off; nobody in the class is worse off; what difference did it make? And the answer of course is, at that margin it makes no difference at all. But my point is that it's indicative of a shift in moral beliefs themselves, the way we organize our thoughts, and it's very frightening. Russ: But you are suggesting--what your book suggests is that this change in our view of what is right and wrong, assuming that is really actually happening, and I think it could be right, is going to affect our economic activity because of the change in trust. Guest: Well, yes. And I think it already has to some extent. Look at the kinds of loan behavior that was going on in the financial crisis. I know people in the real estate business both on the mortgage side of it and in the house sales side of it, and it was just amazing what was going on. Many people knew what was going on was wrong. And they just shouldn't be doing it. But they thought, well, nobody's dying because of this. Russ: Or what about that the ultimate people who are going to pay for this--it's going to be spread out over a large group, a corporation will lose the money or it might be taxpayers; in fact it's good because I'm putting a person in a house. I'm thinking about the person who convinces somebody to take out a loan, doesn't require the documentation that would be necessary or the credit rating; both parties wink and say: Hey, this is good. 
Guest: And that con worked okay and almost nobody ever got hurt as long as all the house prices kept going up. The standard argument was, well, worst comes to worst, you can't handle it, you sell it. What's the big deal? But prices go down and there's the big deal. Russ: The fundamental question of this approach--of course, these kinds of things happened in the past, the question is have people changed the way they feel about them? Did they feel differently in 1880, 1920, 1960 versus today? And the answer is: Maybe; I don't know. Guest: Well, there are tricks--obviously we can't go too far back--but there are tricks to teasing out these things, with survey data that matches up to trust-experiment data. I'm working with some economists around the world to develop variations on well-known trust experiments that can be put together with data that's generated from self-report information about how people feel about various kinds of moral statements--they order them and categorize them. From how they categorize these things in a survey instrument, we can infer how close they come to the moral foundation, and then we can match that up to how they performed in the trust experiment, and see if indeed people who give us self-reported information about how they valued things relative to one another and therefore have moral beliefs that comport with the moral foundation actually are more trustworthy.
1:06:53Russ: Let me ask you about one more empirical possibility. Some economists have been surprised over time at the way people behave in various experimental games that economists and psychologists have created, dividing the pie, other ultimatum games, that people didn't behave--I always like this--rationally. That is the claim. And then there have been attempts to decide why they really did and what was going on, what people were thinking. And it's obvious, to me at least, that one of the reasons people don't behave as narrow-minded self-interest would predict is that people aren't narrow-mindedly self-interested. They are not going to enjoy hurting somebody else, or taking money from someone, and they might feel guilty about it. I don't follow that literature closely. I assume that those experiments have been done outside of the United States and Western Europe. Guest: There's a cottage industry in doing those experiments outside the United States, and just as you'd expect, there's a great deal of variation depending on where you are. I have two reactions to those experiments; and I do talk about that in the book fairly extensively. Point number one: Most experiments--in fact all that I know of, I don't know any exceptions, but let's just say most to be safe--are inherently framed in a small group context. You are playing a game against another person or against two other people or against all the other students in a classroom. Very rare that an economic experiment in a trust game involves more than 25 people. So we are already in a small group situation. That frames the issue in a small group way and therefore actuates all of our small group mental models. So, it doesn't tell us anything about large group trust. That's one fundamental problem with these experiments. This is something that can be fixed; it's not a big deal to change it; but nobody's even thought to ask, even to try. 
I'm going to help people figure out how to do that to test these other hypotheses. The other issue is when people behave in a way that's too generous, too trustworthy, too kind to be rational, according to what an economist would say, there are two possibilities. One is there is some kind of moral guidepost that's affecting their behavior; they are not just playing the game for the sake of the game. That's certainly true. But even if that weren't true, there's another problem that I call a reluctance--well, basically that a person has an aversion to false positive golden opportunities. It seems to me that we are hardwired, that we should be hardwired, to be suspicious of what appear to be golden opportunities. Suppose you sit down and play a game and a guy says: Here's the game, here's how it works. What are you going to do? Well, you are an economist, but even if you weren't, you might think to yourself, who is to say there's not going to be another level to this game where they are going to take how I play now into consideration? Russ: True. Guest: So, you think: I don't know if I'm going to go all the way with this; and no matter what they say, you don't believe them. And I do think that it's the mark of maturity and careful thought to be incredulous about things that appear to be golden opportunities. Russ: I don't know. We like free lunches; and a golden opportunity is a variant on a free lunch. We do have a taste for free lunches. Guest: We have a taste for free lunches but only if they really are free.

COMMENTS (42 to date)
Allan Stokes writes:

I've now listened to the majority of the Econtalk archive (with great appreciation), but I don't recall posting here before.

It's not obvious to me that our sentiments about trust have evolved less since the 1950s than our sophistication about media, which changed immensely as Marshall McLuhan observed. The media is one of our major barometers on the morality of greater society—one which we now assess with ironic detachment. Check out "The Daily Show" Crushes Fox News in Year-End Ratings. What does that say about trust? A big one for me as a child was discovering how Big Tobacco had co-opted the white coats.

A second point is that a winner-take-all competitive ethos is corrosive to trust. Is it really the case that a given wealthy one-percenter produced corresponding economic value? Or was this person just a lucky sod in a high stakes game of winner-take-all musical chairs? If people perceive the dynamic as winner-take-all, you quickly converge on a bright-line (steroid taking) moral culture.

In engineering, there is a concept of deterministic jitter: inescapable variation in measurement because you are forcing the measurement into discrete buckets. I think sometimes deterministic jitter is underestimated as a driving force in wealth variance. How many among us resist the small fudge to wind up on the better side of adjacent buckets? A promotion that could have gone either way? In our minds, we're only cheating chance misfortune.

Finally, I really liked the final point that we're instinctively shy about lunging shamelessly toward the free lunch, for fear it winds up on our permanent record (there's always a catch). This is not pointed out nearly enough in these Prisoner's Dilemma studies. No one takes the experimental frame at face value: it's a post-McLuhan world out there.

James writes:

Psychology, not Economics, has the answer to the puzzle posed in the first 5 minutes: Genes and IQ.

I think Economics solved large parts of the development puzzle. Institutions, capital, rule of law, etc., are all critical factors, but they are not the sole factors. But please give Psychology and Biology some respect. Societies are composed of humans, and the raw human organism is ultimately the result of genetic factors, and those factors differ across peoples. Whether they differ a lot or a little is entirely a matter of context. In most contexts those differences are so minor we ignore them in the name of equality, but in the context of economic development they appear to be huge.

As Bryan Caplan discussed on this show 9 months ago, genes have been shown to affect intelligence, personality, and just about any other human trait you care to name. In the extreme, those traits are what distinguish humans from chimpanzees. If those traits don't matter for economic development, then why is the idea of chimpanzees running their own country so absurd?

Mort Dubois writes:

Maybe you guys are just a little too old, but I think you are missing the effect that the Internet, as a disruptive technology, has on our institutions, and in particular the traditional methods of rating students relative to each other.

In particular, you posit that cheating is rampant and on the rise in colleges, without discussing the nature of that cheating. (I'll guess that it's mostly plagiarism.) Internet access has made an act which used to require some effort so easy that it's trivial. And rationally, you might say that students are trying to overthrow a system which doesn't allow them to act like the pin factory - which forces them to produce, on their own, goods which are available in vast variety, at almost no cost. It's a non-collaborative method of teaching and learning. You shouldn't overlook the possibility that the students' approach to their college has passed you by, and that they see their 4+ years in school as simply an ordeal to be passed through the easiest way possible, with the real learning coming later.

You are also overlooking ways in which large institutions can now be held accountable for their actions, in a way that was never possible before. When sunshine hits the dark corners of the way our society's leaders behave, an overall lowering of trust is inevitable.

I think that we are in a moment of technological upheaval comparable to the replacement of sailing ships with steam. A lot of old knowledge was replaced with new, different knowledge. And there was undoubtedly an omnipresent chorus of geezers who took it as evidence that these kids these days are taking us straight to hell. A new order, with new norms of behavior, is bound to emerge.

There's a huge positive effect of internet access in evaluating whether a given choice of action is likely to work out well or not. In my own business, we do many transactions with customers we never meet - large sums of money change hands on the basis of phone calls and emails. Those clients have access to everything that googling my name (and it's not Mort Dubois) can turn up, for better or for worse. This gives me incentives that reinforce my inborn tendencies to honest dealing.

Don't write off the young ones just yet. And as always, thank you for a thoughtful and stimulating discussion.

Mort

John Strong writes:

This is great stuff. When the interview ended, I found myself wishing you had prolonged the discussion another two hours.

QUESTION for David Rose:

I can see the relationship between commercial culture and the development of a culture of trust between strangers. In The Wisdom of Crowds James Surowiecki argues that long distance trust between strangers really took off with the success of Quakers at trans-Atlantic trade.

But what is the relationship between the culture of trust and the rule of law?

The lack of trust in preliberal corporativist societies seems to be correlated with the absence of the rule of law. Property rights are protected for select groups with access to power, but not generally. Yet, I don't see a clear causal relationship between trust and law.

Could it be that with a sufficient level of trust, enforcement costs fall to a manageable level, and rule of law becomes practically possible?

IH writes:

A couple of minutes into the interview, Rose claims that Argentina had a GDP per capita that was higher than the US's as recently as 1970. In fact, Argentina's GDP per capita (PPP) was $1,308 in 1970 and the US's was $4,892 according to UNDATA.

Russ Roberts writes:

IH,

You misheard him. He said as late as the 19th CENTURY, not the 1970s...

emrich writes:

I enjoyed this podcast a lot, to my surprise, given the dry subject. For example, I learned why pure utilitarianism is so flawed. Parts were a little hard to follow but what redeemed it is that Rose didn't mind showing he actually cared about the real world. He makes connections between abstract theory and the world the non-academics among us live in. I've given up on almost every academic work of philosophy after 25 pages or so but this podcast really tempts me to give his book a try.

IH writes:

Thanks for the clarification. That makes much more sense.

Nathan writes:

This podcast, like others, has made naive claims about premodern times. While it is certainly true that long ago people were less likely to interact with strangers than they are now, it is not at all the case that they were less trusting of them. Witness the paramount importance of the guest-host relationship in early Indo-European societies, by which one was obliged to take strangers into one's own home and feed and clothe them, with few questions asked. The behaviour of Nausicaa and Eumaios toward Odysseus when he washes up as a stranger in Phaeacia and Ithaca, respectively, is a well known example.

John Strong writes:

Relationship of Trust to Law and Legislation

Rephrasing my question above. This much is clear:

Unequal access to law and protection of property rights will produce a culture of distrust.

But the opposite is not clear:

Why should trust lead to the rule of law? What's in it for those in power?

They have citizens who trust each other. That's nice, but why would that lead to a liberal legal order or make the slightest contribution to forging a liberal society?

Man, I sure wish I could hear either David Rose or Barry Weingast address that question.

Why do I care about this point?

Well, how do I explain to friends that commercial culture played a role in extending the rule of law to large groups of people (nation states, etc.)? Most folks are skeptical of that, but I know there's an argument in there somewhere.

People on the left often think of themselves as muckrakers, but it's the other way around. Their special pleading for select groups creates an inertia that pulls societies back towards their preliberal past (witness Greece), and in 20th century city machine politics you see the fruition of this tendency, even in the United States.

Special pleadings produce legislation that is frequently antithetical to law (in the Hayekian sense). Fine. I understand that. But where does trust come into this mix? What role did trust play in the emergent process that engendered respect for law over special pleading legislation?

Martin Brock writes:

Rose often presumes what I'll call "economic morality". This economic morality is a particular, axiomatic system of moral principles and their implications. It is not Morality in general. Other moral assumptions are possible.

In a different system of morality, if I must take a candy bar, without paying a merchant for it, to save my child's life, taking the candy bar is not a necessary evil. It is a positive good, because my child's right to life trumps the merchant's right to payment for the candy bar.

Suppose the merchant is a "good" person in this other sense of "good". If I inform him of my predicament, he gives me the candy bar under these circumstances, because giving it to me is moral. Is the merchant happier with the candy bar intact? Of course not. Allowing my child to die under these circumstances harms the merchant.

In this hypothetical, I cannot ask the merchant for the candy bar, so I make the moral decision on his behalf, and my child lives. Because my child lives and the merchant loses only a candy bar, the merchant is grateful to me.

Of course, this moral calculation occurs exclusively in my head, because the hypothesis rules out communication with the merchant. If I do not take the candy bar, because economic morality operates in my head and the other morality does not, then my child dies, but the moral calculation still occurs exclusively in my head.

James writes:

Anytime an old person complains about the decline of society, you have to take it with a grain of salt. This discussion reminded me of the anti-atheist argument that you can't have morality without God.

I thought this guest was one of the least convincing you have had on the show Russ. Moral behavior is driven by emotion, not rational calculation, so what does it matter what justification people give for hypothetical actions?

The cross-cultural morality issue here is not the type of cognitive moral calculus or ingrained moral principles being followed by a certain culture. It is simply whether people in the culture view "the other" as members of an in-group or an out-group. All Norwegians view each other as members of the same group, but Iraqis most certainly do not. They are sharply divided on all kinds of religious, ethnic, and basically tribal lines. Sectarian conflict is a great destroyer of civilization, and the reason is that when people regard others as out-group members, they drop almost all moral prohibitions.

I think when cultures are able to extend the in-group to a large enough circle, it becomes easier for people to trust even those who are outside that culture (like American businessmen trusting Chinese businessmen).

Woodah writes:

Yes! Jesse Jackson's corruption is a great example of how far our morals have declined in recent years. Back in 1950, we had TRUST. Trust, for example, that people like Jesse Jackson would drink from their own water-fountains, sit in the back of the bus, and not eat in restaurants favored by people like David Rose. It truly was a golden age of morality.


John writes:

The discussion went from several specific trails of thought to general conclusions. And no matter what path was taken they all intersected at the same point: if the goal is avoidance of negative moral actions, then "principled moral restraint" is the only value that works, where the principle is absolute and not malleable to context or desire. But let's take another step down the path... principled moral restraint can only exist with a belief in absolutes. Now take the next step...

Matthew Newhall writes:

Hello

Long time listener. I love this series, and especially loved this show. David Rose is dancing around the problem but is missing something. The problem appears more complex than necessary.

There is a segment of the population that is absolutely never empathic. They know right from wrong, but only care if they are caught. They are permanent opportunists. The only thing that varies is the size of the opportunity. Psychopaths are 3-5 percent of the population globally. Their brains are psychologically different from those of other humans. They will NEVER resist a true golden opportunity.

I propose that the prevalence of psychopaths varies across populations and that this is the missing link in the 'South America is not a United States' problem. They were a small percentage at the outset of the United States, and that was key. They grow uncontrollably during a successful economic run, eventually triggering a self-limiting event and destroying their host economy.

Human beings as we know most of them are actually 'irrationally' moral. Originally all humans were what we know as psychopaths. Then at the dawn of civilization a mutation occurred causing humans to express an irrational trust of other humans. This irrational trust is the fundamental underpinning of markets and subsequently economies of scale. The inherent risk in any investment in a selfish society is the proof.

It is only 2 pages, a quick read.

http://www.warcloud.net/Psychopaths-ancient-breed.html

This theory makes predictions. I can defend this and answer questions.

Mort Dubois writes:

Also, Jesse Jackson was born in 1941 and is 70+ years old - hard to see how he can be taken as a representative of the decrepit morals of the younger generation.

Allan Stokes writes:

Here are a couple of interesting counterpoints, the Wired article fresh today.

Profit vs. Principle: The Neurobiology of Integrity

In short, when people didn’t sell out their principles, it wasn’t because the price wasn’t right. It just seemed wrong. “There’s one bucket of things that are utilitarian, and another bucket of categorical things,” Berns said. “If it’s a sacred value to you, then you can’t even conceive of it in a cost-benefit framework.”

The article suggests that a rigid code of conduct functions as a cognitive efficiency, without actually referencing the burgeoning field of decision fatigue.

Do You Suffer From Decision Fatigue?

The more choices you make throughout the day, the harder each one becomes for your brain, and eventually it looks for shortcuts, usually in either of two very different ways. One shortcut is to become reckless [and impulsive; the other is to] do nothing.
ThomasL writes:

Excellent podcast. I hope you have David Rose back again.

PrometheeFeu writes:

I really do not understand the issue with people taking advantage of a "golden opportunity". A golden opportunity, if it exists, can be exploited without any repercussions. But if that's the case, it cannot prevent mutually beneficial bargains, by its very definition.

Let's say I provide something and I can take advantage of my customers. By definition, if my customers take a loss, they will be worse off than if they went to another provider. So even if nobody knows the loss is the result of dishonesty on my part, it will be attributed to incompetence or some other flaw on my part and I will no longer have customers.

On the other hand, if my customers do not take a loss, no problem occurs.

The guest also brings up externalities, but those do not prevent mutually-beneficial bargains from being made, and therefore we can still be prosperous.

I find the guest's argument thoroughly unpersuasive. It seems to me that on the contrary, people can depend upon market processes weeding out the bad apples and that we don't actually need some sort of underlying morality to exist prior to markets and institutions. That norm will be created by the market itself.

Brian writes:

At about 28:00 minutes the guest says

"It's a problem everywhere, but it's an especially big problem in countries outside the West. Outside the West, if people feel like they are not hurting anybody, they really feel like they can just do whatever they want as long as they don't get caught."

Wow, I'd like to hear the evidence justifying this assertion. It is common for people in the West to do whatever they want even when they know it will harm somebody. If they think no one will be harmed then it is more likely than not people in the West will do it. I see examples of this almost every day - in Albemarle County, Virginia - a well educated, suburban community. One can only imagine the behavior of those in less fortunate and/or higher stress communities. But surely, Western financial institutions are unmatched in their immoral or unethical treatment of people and governments inside and outside the West. Anyone who believes the West, especially the USA, holds some moral high ground over the rest of the world is simply uninformed or delusional.

Seth writes:

I agree with Rose's observation that behavioral economics experiments are framed in the small group setting and tell us nothing about large group trust or norms.

I think this is a common source of differences between stated and revealed preferences. We often state our preferences based on the small group setting, but reveal our preferences in large groups.

John Strong writes:

Brian wrote:

"It's a problem everywhere, but it's an especially big problem in countries outside the West. Outside the West, if people feel like they are not hurting anybody, they really feel like they can just do whatever they want as long as they don't get caught." ... Wow, I'd like to hear the evidence justifying this assertion.

You missed the point. Prof. Rose was not talking about the intensity of moral feeling among people in preliberal societies; he was talking about their indifference to social harms that are widely dispersed.

Have you ever lived in a developing nation?

PrometheeFeu writes:

@John Strong:

I have lived in developing countries and I must say that I too am very skeptical of that sweeping claim by Rose.

Also, it's very important to note a certain number of things:
1) The chance of "getting caught" is generally much lower in developing than developed countries.

2) While we all want to think of ourselves as good people, that is not free. It's not unreasonable to think of "good behavior" as a normal good. So as we become wealthier, we are more likely to engage in "good behavior".

A. Zarkov writes:

David Rose brings up Robert Putnam's study, but skips over Putnam's conclusion: the more diversity, the less the trust between and among ethnic groups. For example, Los Angeles is the most diverse large city in the U.S., but has the lowest level of trust. Originally Putnam was reluctant to publish his own work because it flies in the face of the central dogma of our time that multiculturalism bestows benefits on a society. Objective evidence points in the opposite direction. Countries that are more homogeneous with respect to language, race, and religion are generally more prosperous, and have a lower level of violence than countries that are diverse. Diversity leads to an erosion of social capital. The same conclusion applies to regions, cities, and counties within a country. It's time to stop being poltroonish about the facts of life and face reality.

Michael Wengler writes:

Outstanding podcast, one of my favorites and I believe I have listened to them all.

This links in I think to Matt Ridley's "Rational Optimist." This suggests man has evolved to trade, that we have the "concept" of trading hard-wired in to us. It makes sense we would have other "moral" urges hard-wired in to support the insanely effective cooperation with each other that we are capable of. We are hard-wired NOT to be purely individuals!

It is ironic to me that the argument against a utilitarian morality is that it doesn't produce as much stuff for humans working in groups as does a rule-based morality. I.e., utilitarian morality is not as utilitarian as rule-based morality.

A. Zarkov writes:

Brian wrote,

"But surely, Western financial institutions are unmatched in their immoral or unethical treatment of people and governments inside and outside the West. Anyone who believes the West, especially the USA, holds some moral high ground over the rest of the world is simply uninformed or delusional."

The Japanese banking system is far more dishonest than the American banking system. In fact, if you talk to people who do business in the Pacific Rim, you will find widespread agreement that dishonest behavior is far more common there than in the U.S. We have plenty of corruption here, but it's much worse in a lot of other places. If you don't believe this, then you should get around more. Try Italy, for example, or Greece.

Mike Riddiford writes:

Agree trust is important, but I can't help thinking that a credible threat of detection and prosecution of wrongdoing also plays an important role in creating the preconditions for society and markets.

There is probably an optimal level of investment you need to make in market supervision activities (acknowledging the potential for those to also create problems).

After all, I sure know that I drive more slowly because of speed cameras, rather than any moral awakening to the dangers of speedy driving :)

Brian writes:

Reply to John Strong:

I don't think I missed the point at all. I was not talking about the "intensity of moral feeling among people in preliberal societies."

Rose asserted a difference in the moral character between people in the West and people outside the West:

"Outside the West, if people feel like they are not hurting anybody, they really feel like they can just do whatever they want as long as they don't get caught."

I'm saying there is no difference. I see people inside the West act this way almost daily. Some examples:

-Dumping trash, appliances and furniture along a road side.
-Dumping hazardous fluids in waterways, sewers and open ground.
-Trashing public restrooms.
-Applying fertilizers on agricultural or residential ground that run off into waterways, creating substantial costs for people downstream (ever heard of the Chesapeake Bay?)
-Applying pesticides and herbicides that do the same.
-Shoplifting.
-Drinking while driving.
-Texting while driving.
-Selling mortgages to people you know do not understand what they are buying and cannot afford it, then dumping them on US taxpayers
-Selling mortgage-backed securities that you know are junk to buyers who don't understand they are buying junk.
-Filing false Social Security claims
-Filing false Medicare claims

I mean I could just go on and on and on. Who pays the cost of all these actions?

I have lived in Indonesia. Did I see the people there doing similar things? Yes. Did they do them at a greater frequency? No. Was the value/cost of the harm done greater than where I live now? No.

But not only are people in the US more than willing to "do whatever they want as long as they don't get caught" inside the US, they have no problem going outside the US and doing it as well. How about toxic waste dumping by Texaco in Ecuador? How about the toxic waste left behind in Vietnam from storing leaking Agent Orange drums on open ground at US bases?

Like I said in my first post, where is the evidence to justify a claim that people in the West are less likely to "do whatever they want as long as they don't get caught" than people outside the West?

John Strong writes:

@ PrometheeFeu: "I have lived in developing countries ... we all want to think of ourselves as good people"

@ Brian: "Rose asserted a difference in the moral character ... I'm saying there is no difference."

Human beings are wired the same everywhere, of course. But for most of human history the scope of compassion and fair play was limited to family and band. Even the tribe is a relatively recent social institution.

In my experience, people in developing nations are more generous than people in the U.S. when it comes to their intimates, but less prone to trust strangers, which is one reason why Ebay.com works better than its Latin American equivalent, MercadoLibre.com.

Trust of strangers is correlated with the maturity of a nation's commercial culture.

In developing nations distrust is not limited to commercial transactions. Both distrust and despairing cynicism infuse all kinds of social institutions and undermine public spiritedness. People might be "good". So what? When it comes to the sort of behavior required to maintain modern liberal institutions, despair, cynicism and distrust trump goodness.

JRo writes:

"It's for the children!"

That's the battle cry of the Greater Good Rationalizers.

No doubt Jesse Jackson was thinking it as he funneled other people's money to the mother of his love child.

PrometheeFeu writes:

@John Strong:

Let's take for a given the idea that trust between strangers is lesser in developing economies. That does not rescue Rose's statement. The way he phrased it and the context specifically implied that this distrust was the result of a lower level of morality which caused institutions not to function properly.

My primary problem is that he implies that we are dealing with a sort of inherent difference in agents while there clearly is a single-agent multiple-equilibria model which accounts for the behavior. That's a basic Occam's Razor argument.

Also, I am made uncomfortable by how politically convenient his narrative is. We could easily boil down his conclusion to a bumper sticker: "Countries are poor because their people are immoral and they need to be made moral to become rich." Maybe it's true. But when I see an argument that so conveniently matches up with certain political prescriptions, I re-double my suspicion that the author may be letting their beliefs interfere with their investigation. The fact that the author brings forth no evidence for his assertion turns the red flag into a gigantic blinking warning sign.

Eric S. Harris writes:

Don't tell Mike Munger or Don Boudreaux or Robin Hanson, but if David Rose appears a couple more times I may have a new favorite EconTalk guest.

Also, some of the criticisms brought up by other listeners were things I hadn't noticed or hadn't considered on my first pass.

Which is good, because I welcome a reason to listen to the interview again.

Krishnan writes:

I was shocked to hear Russ mention "immoral" in the context of people reading books in stores (and not buying them) OR people sampling food/fruits in stores (without purchasing them)

IMMORAL?

Well, if they STOLE the books or food, yes - wrong AND immoral. These book stores AND grocery stores are doing what benefits them - if not, they would be foolish to do what they do. Oh sure, if EVERYONE did that, it would end the book store/grocery store - but IMMORAL? I cannot understand how anyone can jump to that conclusion.

Several current business models (including book stores, grocery stores) are under assault from AMAZON (books and more) ... it may be that the book store as we know it WILL go away (I doubt it though) - but people reading OR sampling foods is not immoral.

I wonder if looking at a product at say BestBuy and purchasing that on AMAZON is considered IMMORAL. I suspect Russ would say yes. I say no. It is consumers using information they have (or get) to get a good deal for their money. And many consumers make that choice knowing fully well that they are supporting some "non local" company.

Krishnan writes:

I am wondering if I missed something ... Russ Roberts railing against consumers making choices to buy or not to buy? Just very odd indeed. Something does not fit ... (OK, perhaps someone can fill me in on what I may be missing) -

PrometheeFeu writes:

@Krishnan:

Russ is using the Kantian categorical imperative here, which states that if the outcome of an action would be bad if everyone took that action, that action is immoral. (I'm sure I'm simplifying, but that's the gist of it.)

That sort of situation is exactly why I don't think the categorical imperative is a useful guide. Different people react differently to certain situations and the world works very well precisely for that reason. Of course bookstores would go out of business if everyone read the books in the stores and then just left. But if only some people do that, it creates a certain atmosphere that attracts customers who will buy books. It can also serve as marketing and price discrimination. When I was unemployed, I would often go spend time in a book-store and read books knowing that I would only buy 1 in 5 books or maybe even 1 in 10 books that I read. However, by allowing me to do that, they created a lot of good-will and after I got steady income, I went to them for all my book purchases until I moved to a different town. It didn't cost them anything (I suppose a little extra inventory) but they did end up gaining quite a bit.

Krishnan writes:

Re: PrometheeFeu - thanks

I can see why I did not choose "Philosophy" as my major - since I would be forced to read "Kant" - who, to me, seems hopelessly confused. Philosophy IS very important though - it drives behavior.

John Strong writes:

@PrometheeFeu

I am made uncomfortable by how politically convenient [David Rose's] narrative is. We could easily boil down his conclusion to a bumper sticker: "Countries are poor because their people are immoral and they need to be made moral to become rich."

I don't think David Rose is making a statement about morally culpable behaviors. That's the kind of thing people on the left do. In particular, the left's narrative about "corruption" in developing nations attempts to trace social, political and economic dysfunction to the moral failures of individuals.

It's not really corruption. It's corporativism. Corporativism looks like "corruption" to Westerners, precisely because they view it through a liberal prism.

Here's an example. Where I live, being a member of the teacher's union entitles you to sell your teaching job or even bequeath it to a family member. The going rate is about $10,000 USD.

We know a lot of people who participate in this system. I have a friend whose family had a blowout because his uncle's widow decided to sell his teaching job rather than bequeath it to one of the nephews who felt entitled to it. We have another friend who is a wonderful person, generous, extremely active in her church doing good things, a singer with an operatic voice. She and her husband are considering mortgaging their home to purchase a teaching position that sells for around $10,000.

Now, these are not "bad" people. They just have no faith in their nation's educational institutions. They likewise have no faith in the liberal order of the rule of law and impartial rewards according to merit. In their experience, rewards depend on what labor union you belong to, what political party you belong to and who your friends, family and compadres are. Thus, in developing nations people feel enormous moral obligation to their circle of friends and family and almost none at all to large abstract institutions, and worrying about widely distributed social harms is a luxury that no one can afford.

Raja writes:

Solid podcast.

I found it interesting that Rose approached this problem out of frustration with "development economics". The obvious conclusion of his line of thought is that some regions are "culturally inferior," depending on how you want to define that, which leads them to experience slower growth. This is totally obvious and would have been common knowledge 60 years ago, but thanks to the cultural relativism jihad in academia it has been brainwashed out of our memory. I think there's a middle ground between where we are now and going back to a "White Man's Burden" mindset: namely, abandon ideas of forcing development on others and learn to accept that their cultures simply may not value the same things we do, including economic growth.

Brian writes:

@ Raja

"... some regions are "culturally inferior," depending on how you want to define that, which leads them to experience slower growth. This is totally obvious ..."

Please define "culturally inferior."

Also, can you give examples where the "culturally superior" regions tried to force development on the "culturally inferior" regions, and explain what was their motivation in doing so?

PrometheeFeu writes:

@John Strong:

"I don't think David Rose is making a statement about morally culpable behaviors. That's the kind of thing people on the left do. In particular, the left's narrative about "corruption" in developing nations attempts to trace social, political and economic dysfunction to the moral failures of individuals."

Are you really going to try to cast this as a right-left issue? Have you been paying attention to everything Bryan Caplan writes about the deserving v. undeserving poor? Have you been paying attention to the Republican debates? How often do they talk about morality? I also must say that I've heard people on both the left and the right speak of corruption as holding back developing countries. I'm not at all convinced that you can claim that the left is more likely to make arguments based on morality.

But my point wasn't really that Rose is making this argument because it allows him to feel morally superior to people in developing countries. (Though the tone of the podcast does lead me to believe he does feel morally superior and that it may be influencing his judgement.) My point is that such theories, whether accurate or not, strike a chord with many people because they make them feel good about themselves. Furthermore, on the right, it plays very well into the narrative that many poor people deserve their fate. That's what I mean by "politically convenient". When a theory is so politically convenient, I become extra-skeptical from the outset to counter the fact that researchers, authors, publishers etc. might already be biased in favor of the theory. And from the podcast, I must say that I think Rose allowed himself to guide the evidence and the argument towards a conclusion he liked rather than allowing himself to be guided by the evidence and the logic to the conclusion that made the most sense. Maybe his arguments just don't come out well in the podcast and the book might be much better, but my reading list is way too long already and so I'll have to base my judgement on the podcast.

"Here's an example. Where I live, being a member of the teacher's union entitles you to sell your teaching job or even bequeath it to a family member. The going rate is about $10,000 USD."

Taxi medallions in New York City go for $1 million. Union cards similarly can fetch very high prices to work on docks and in certain occupations. You're describing something which is also pervasive in the developed world.

"Thus, in developing nations people feel enormous moral obligation to their circle of friends and family and almost none at all to large abstract institutions"

I have not seen much of a difference between the developed world and the developing world in my experience. But let's say what you are saying is true. The morality that Rose says promotes development doesn't want people to feel a loyalty to large abstract institutions. Otherwise, to adapt his example, you might choose to fire the highly-productive foreigner because you believe it's more important to help out your fellow countryman.

"worrying about widely distributed social harms is a luxury that no one can afford."

But that's exactly my point. Being a good person is a normal good. When you have more money, you can afford to be a good person. But if you're poor, you can't pass up an opportunity to enrich yourself even if it is "bad". He reverses the causality.

Now of course, it's always possible that the causality runs both ways. But then, how did developed countries industrialize? Sure, if causality runs both ways, it's a multiple-equilibria system, but then his theory doesn't have any extra explanatory power. Poor countries are poor because their people are immoral, and their people are immoral because they are poor. Yet, obviously, the USA, UK, France, etc. all industrialized. So it's possible to jump from one equilibrium to another. But then, I'd argue what causes the jump is the real explanatory variable, not the morality of people in poor countries.

max writes:

re: A. Zarkov comments of 26 Jan

@5:23PM Zarkov remarks approvingly of Robert Putnam's claims about the corrosive effects of ethnic diversity on interpersonal trust. "Countries that are more homogeneous with respect to language, race, and religion are generally more prosperous, and have a lower level of violence than countries that are diverse. Diversity leads to an erosion of social capital. The same conclusion applies to regions, cities, and counties within a country. It's time to stop being poltroonish about the facts of life and face reality."

@5:51PM Zarkov disparages skeptical comments made by someone else about the trustworthiness of "Western institutions" in general, and banks in particular. "Dishonest behavior is far more common" in Asian financial institutions than in U.S. banks, he writes knowingly, notably in Japan -- which just happens to be the most ethnically homogeneous country on Earth.

Perhaps there are multiple A. Zarkovs with radically divergent views participating in this discussion? Will the real poltroon please stand up!

MH

max writes:

Am I missing something, or did Rose just (re)discover the "economics of information" à la 1960s-1970s Stigler, Akerlof, Stiglitz et al.?
