Russ Roberts

James Heckman on Facts, Evidence, and the State of Econometrics

EconTalk Episode with James Heckman
Hosted by Russ Roberts

Nobel Laureate James Heckman of the University of Chicago talks with EconTalk host Russ Roberts about the state of econometrics and the challenges of measurement in assessing economic theories and public policy. Heckman gives us his take on natural experiments, selection bias, randomized control trials and the reliability of sophisticated statistical analysis. The conversation closes with Heckman reminiscing about his intellectual influences throughout his career.


Podcast Episode Highlights
0:33Russ: I want to remind listeners that we're doing a survey for your favorite episodes of 2015. You have until January 31st, so please go to and in the upper left-hand corner you'll find a link to the survey. So, please vote.
0:51Russ: Intro. [Recording date: January 11, 2016.] Much of your contribution in econometrics is related to what is sometimes called selection bias--the people or data we observe may not be like the people or data we don't observe. How much progress do you think we've made in this area? Guest: Oh, I think a lot of progress has been made. I think the additional literature was [?] pointing out the problem, which had been neglected by many economists. Not all, by any means; but it had been a problem that had kind of been swept under the rug and stayed that way for many, many years. And when the issue got into the public discussion in economics, two things happened. One was that people became more data-sensitive, and this triggered responses that weren't purely methodological, that consisted of people collecting better data, which is always a very good thing to do. And second, it also suggested something which I think links us closely to economics, which is basically that the selection decision generally, at least if it's self-selection by agents, really involves a lot of economic considerations. And so when we think about things like labor supply, unemployment, even voting or other kinds of questions, choices were there. So it stimulated some work also in linking economic analysis of data to economic choice analysis. So it had these two branches, I think, which are still very active today. Russ: It's a huge problem, though. You give a wonderful example in one of your pieces where you talk about when we try to assess progress made by African Americans over, say, 1950-2000. There's some progress over the first few decades of that period--substantial progress; but unfortunately it appears that much of that progress is measured rather than real, because many African Americans who are not in the labor force are not like the ones who are in the labor force.
And so the rise in measured, say, average wages, is due to the fact that some of the lowest-wage workers are not in the data. Is that correct still? Guest: That's correct. The biggest source is of course incarceration, where black males are literally taken out of the labor force. Many black males who are in prison are less than high school graduates, or maybe just high school graduates or GEDs (General Educational Development), and they tend to be lower-wage people, so they tend to be ignored in the official statistics that you see reported. But it's not necessarily always in that direction. There's some evidence, for example, over the last 25 years that if you look at the wages of women in particular, what was being found was that increasingly, starting in the 1980s, more educated women were working more. The big growth in the labor force participation and employment of women came among the most educated women. And it turns out that those were some of the most highly educated and higher-wage women. Therefore some of the growth of female wages may well be a consequence of the fact that the women working are getting more educated--women who have essentially higher wages and higher wage potential. So this problem generically affects a lot of social statistics. People want it to go away, but it's there. Russ: Well, they like to ignore it, is what I've found. I'm a big critic of the failure to take account of demographic changes in household structure. When you are using household data on inequality or home ownership and you have huge increases in single-head-of-household families, it's inaccurate to compare over time without correcting for that, it seems to me. And you have people just happily go ahead and do it. Guest: That's extremely relevant today when you think about the way that household inequality is measured.
There are two different issues here that show up in a lot of discussions and just get totally confused. The one that you suggest is certainly highly relevant: namely, that a big contributor to the growth in household income inequality is the growth of single-parent households. And we know that those are very, very unequally distributed. A second big factor--I don't know if you want to call it selection bias so much as definition bias--is that some of the most dramatic statistics about the rise in inequality are based not on the household as the data unit but on what are called taxpayer units. Taxpayer units and household units are very different objects. So, being careful about these definitions, and making sure that we hold the composition of the workforce--and of whatever we are trying to measure--constant, is extremely important, I think, and still something that gets easily neglected. Very hard to explain in a word or two; and I think it gets lost in public discussion.
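The composition effect described above can be made concrete with a small simulation. This is a hypothetical illustration, not data from the episode: the population, wage distribution, and 15% dropout rate are all invented numbers. The point is only that if the lowest-wage workers drop out of the observed sample, the measured average wage rises even though no individual's wage changed.

```python
import random

random.seed(0)

# Hypothetical population (illustration only): 10,000 workers with
# log-normally distributed wages. No individual wage ever changes.
wages = [random.lognormvariate(3.0, 0.5) for _ in range(10_000)]
true_mean = sum(wages) / len(wages)

# Suppose the lowest-wage 15% drop out of the observed sample
# (incarceration, non-participation). The "measured" mean is computed
# only over those still observed.
cutoff = sorted(wages)[int(0.15 * len(wages))]
observed = [w for w in wages if w >= cutoff]
observed_mean = sum(observed) / len(observed)

print(f"mean wage, full population: {true_mean:.2f}")
print(f"mean wage, observed only:   {observed_mean:.2f}")
```

The observed mean is mechanically higher than the population mean, which is the sense in which "measured" progress can differ from real progress.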
6:21Russ: On the topic of wage growth, you mentioned women. It's rather striking how men's wages have been remarkably flat, at least in some data sets; that avoids the household issue. For a long period of time, when productivity has been rising, when overall incomes have been rising, when per capita GDP (Gross Domestic Product) is rising, male earnings are remarkably flat, at least corrected by conventional measures of inflation. Have you looked at that? Do you have any thoughts on that? Guest: Yes, I have. I think there's some very interesting work. In fact, it's interesting--this last week I was at the American Economics Association meetings and gave a course on inequality, actually with Steve Durlauf--so for three days we met with a group of students, largely faculty and graduate students, who were interested in this question. And I listed the latest evidence, which I think is extremely interesting and important, namely the discussion about the CPS (Current Population Survey) [?CPS-U1-U6, unemployment or underutilization measures] and BLS (Bureau of Labor Statistics) [?]--the deflator and what exactly the true measure of wellbeing is. Russ: Good luck. Guest: Well, I realize it's a controversial discussion. But what's interesting is the following. Take for example a measure that received a lot of attention: two years ago there were official reports saying that the poverty rate in the United States in 2014, say--that was the year I think this was calculated for--compared to the beginning of the War on Poverty in 1964, was basically at about the same level. Maybe a little lower, but the poverty rate was the same. However, people who calculated this carefully realized two things. One, the progress in the true price of living--the fact that the cost of living has gone down, tremendous growth in quality. If you look at so-called chain-linked indices you are going to find substantial growth in real income.
And secondly, not only quality but you also had real reductions in price, especially in the basket of goods--the so-called Walmart basket--that a lot of the poorer people, the less affluent people, would be governed by. In other words, that's the group that is probably the least advantaged among the population. And it turns out there has been substantial progress. And if you add to that additional transfers and program changes, the adjustments that many economists and some sociologists--even some strong advocates of research on poverty and advocates for the poor--would make would change the U.S. official poverty rate from 14% down to about 5%. So I think we've made tremendous progress. But it's partly in dimensions that have to do with unmeasured components. And I think this gets easily lost. And on top of that, a lot of the standard data sets that we use for measuring wages and for measuring a lot of these things--even consumption expenditure--are showing increasing nonresponse rates. And that's a real problem. So, when people adjusted for these things--and there are judgments made, no question about it--there does seem to be more progress and less stagnation than you hear out there in the public discussion. Certainly in the Presidential debates. But there is an issue: real income growth has not been uniform across the different levels of the income distribution--different percentiles of the distribution. But let's go back to one thing you mentioned at the very beginning here, Russ, and that is: you think about selection, and you look at the fact that the labor force participation rate of males has been declining, even in the prime age. This common measure that people use, the so-called 90-10, which compares the 90th percentile people at the top to the 10th percentile people at the bottom--the composition of those percentiles is changing over time. So our comparisons aren't stable.
People think of a percentile--the 5th percentile, or even for that matter the median--as referring to a stable group of people. It's not. There are multiple skills, and there's been a lot of selection that a lot of the estimates don't adjust for. So I think the world is not as pessimistic as what looks to be the case from the unadjusted raw statistics. That's a long-winded story. Russ: It's really important. Guest: I think it's a very important discussion which people just ignore. And I think it's become politically convenient for many people to argue that we're getting declining real wages. I just don't think we are getting declining real wages. I mean, there are a lot of issues, but I don't think the real wage has actually declined. And even people using Census data, CPS data--the Current Population Survey data--recently have not seen declines. But I think if you properly adjust, you actually see some real aspects of growth. Russ: When you say people are careless about it--I expect politicians to be careless about it. I'm disappointed when economists are careless about it, because they are--for whatever reason there are a lot of incentives there, either publication or to get attention or to be influential. Guest: Well, I think that's part of it. As you know, in any profession--I guess economists are no different from others--making big, striking statements, something that's dramatic--making a splash is really important.
But in this case I think there's a whole group of so-called poverty researchers--people who are focused on income inequality--who have just established a convention that they are going to define a skill level as a percentile in the Current Population Survey distribution, making no adjustment for the fact that the composition of the people at that percentile has changed. It's a little bit like assuming the same person stays at the same percentile--at the 5th percentile--over time; these percentiles are treated as really stable objects. And they are not. And they are not describing the same people. Russ: And it's not just that they're not the same people--it's that they don't have the same characteristics. So just to take the earlier example, household composition at the median is radically different than it was 40 years ago. Guest: Oh, exactly. Russ: And so when you make those kinds of comparisons, it's an apples-to-oranges comparison. Guest: Exactly. And I think that's a very common fallacy, but I've not seen any Presidential candidate--not that I follow them that closely--or any political candidate even discussing that, even qualifying the claim that real incomes have gone down. So there's an endless sort of pessimism that actually seems to be governing both sides of the political debate, Republican and Democrat. Russ: Yeah, I agree.
13:30Russ: I want to switch gears. There's been a lot of enthusiasm for randomized control trials in economics, particularly in the area of development in poor countries. But this goes back decades, as you pointed out, in labor economics: experiments looking at the negative income tax, the effect of training programs. How useful are these techniques and how reliable are their findings? Guest: Well, let me go back--you are absolutely right. As you know, in social science--and economics is no exception--there are these eternal cycles. People get on those bandwagons; then they get off them [?]; and then they get back on the bandwagon. But they are new people. So, the wagon keeps rolling but the occupants are changing. I remember, as a graduate student at Princeton University, the enrolling of some of the very first participants in the negative income tax experiment. That was an experiment suggested by research by Milton Friedman, suggesting that one effective way to transfer income to the poor, while giving people incentives, would be a negative income tax. There was a woman from MIT (Massachusetts Institute of Technology), a graduate student who had come from mathematics--her name was Heather Ross. She and several others were the minds behind creating this program and trying to evaluate it. And what came out of that was interesting: actually, one of the great legacies of the negative income tax studies was modern econometrics, or micro-econometrics-- Russ: Yeah, it's true. Guest: Precisely. Precisely because the experiments were so messed up, and people did not understand when they were designing the experiments how much choice there was--how much attrition[?] there would be, how much individuals would respond to incentives in ways that weren't even thought about. So, the first round of experiments was generally viewed as a failure.
I think John Cogan's testimony before Congress in the late 1970s or early 1980s was the capstone of that failure, in the sense that he pointed out the wide variety of estimates and the need for using econometric estimates to adjust for the non-compliance, the self-selection, the nonresponse, and on and on. So that all was put to rest. Meanwhile, the faithful continued. And there was a large group of people working largely for government consulting organizations, big companies like the Manpower Demonstration Research Corporation, [?] which still continues. And so there's been a constant faith, despite this surge around [?] what I would call extreme failure. Nobody believed the New Jersey negative income tax experiments; and the later Seattle experiments--nobody believed them, because they were so heavily compromised by a whole set of other issues. But still there has been this notion out there, which is popular. People understand: you toss a coin, you randomly assign aspirin to one group and no aspirin to the other, and you make a comparison. It's so easy. It's so compelling. And it's so misleading in a social context. And I say that--it's not just development[?]. So you get various people who have come along and picked up the banner. You know, Esther Duflo has certainly been carrying the banner forward in development; and Banerjee. And I'm not saying that the experiments don't add to the data sources that we have. But I think there are subtleties. Some of these points I made in a paper many years ago--Angus Deaton revisited some of those points in the context of development. But they really came to this: people, when you experiment on them, are acting in a purposeful way. The most striking example, I can say, is the recent Head Start Impact Study that was put out a few years ago by, I believe, the Department of Education.
The Head Start study, if you look at it, randomly assigned people to Head Start, at least at some Head Start centers, and then denied access to others. They didn't deny it permanently. They denied it over a window of opportunity. And the experimental results, as reported, showed the treatment group really wasn't doing much better than the control group. But as people looked at that study, they found exactly what we had found in earlier Manpower studies and the like--namely, what did the control group people do? Well, first of all, the random assignment generally has to be among people who are interested in taking the program in the first place. Okay? So, basically, whether it's a job-training program or Head Start, you randomly deny access to people who apply and are accepted. That's the standard. It doesn't have to be--in drug trials, not necessarily so. Well, what happens is that people who are denied access to the drug or the job-training program or Head Start will actually try to find substitutes for it. In some earlier work, we found, during the time when AIDS (acquired immune deficiency syndrome) really wasn't treatable, that when random assignments were made in AIDS trials of what was thought to be an effective drug, the subjects involved in the experiment were so threatened that they ended up sharing their medicine with the controls. I mean, it was a blind trial, so nobody knew who had the treatment and who had the control. But treatments and controls knew each other, and they basically just randomized within themselves: everybody got a share of everything. They at least got half a loaf [?] rather than none whatsoever. And this was certainly true in the job-training programs, where people who were denied one job-training program would enroll in another. And there were a lot of substitutes out there, back in the 1990s and still today.
But in the case of Head Start, which is relevant, there are a huge number of other childcare programs out there--including other Head Start programs. So, it turned out that a big chunk of the people who were in the so-called control group were also getting into a Head Start program. Or maybe a program better than Head Start: you know, some substitute they could find. So, again, economic choice theory had its way. And the control group was heavily contaminated by this. And so a simple treatment-versus-control comparison was not informative. Russ: Understating the full impact. Guest: Clearly understated. It's literally like comparing: I have a Washington State apple here on the left and a Washington State apple here on the right; and gee, there's no difference between apples. Which is fine. But it doesn't answer the question of whether an apple is a good thing to eat versus nothing. And that's literally what was going on. So, I think what's happened is this eternal optimism. People understand it--they think they understand it. These other questions are too subtle sometimes. People just don't want to--and they say, 'Oh, here's the experimental evidence.' And the experimental evidence, I think, actually has to be treated with a real grain of salt sometimes. A lot of caution. And people don't. It depends. For example, there were studies that were done in India about the effect of small lending programs. One group of people introduced into an area of India a lending program for disadvantaged people--generally women or small borrowers. And the idea is: do these programs have any effect? And that particular intervention showed no effect whatsoever. But later analysts looked into it and said, 'Oh, wait. It turned out that when that program was introduced into that particular part of India, there were 40 other programs, very comparable, already in place.' So, literally, there were perfect substitutes for what the treatment was.
So the randomized trial was completely compromised by failing to think about the substitutes. There are a lot of issues that arise with randomized trials. So, I think the issue of randomization--it's a good idea: extra source of variation is good. But you've got to be careful. And I think people aren't. And I think that's been a problem with the interpretation of this data.
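The substitution-bias story above can be sketched numerically. All numbers here are hypothetical, chosen only for illustration: when many "controls" find a close substitute for the treatment, the simple treatment-minus-control difference measures the program against the substitute, not against doing nothing.

```python
import random

random.seed(1)

# All numbers are hypothetical, chosen only for illustration.
TRUE_EFFECT = 10.0       # program's effect relative to doing nothing
SUBSTITUTE_EFFECT = 8.0  # effect of the close substitutes controls can find
N = 5_000

def outcome(effect):
    # Baseline outcome plus the effect of whatever program was received.
    return random.gauss(50.0, 5.0) + effect

treated = [outcome(TRUE_EFFECT) for _ in range(N)]

# 60% of the "control" group finds a substitute program
# (another Head Start center, another job-training program, ...).
controls = [outcome(SUBSTITUTE_EFFECT if random.random() < 0.6 else 0.0)
            for _ in range(N)]

naive = sum(treated) / N - sum(controls) / N
print(f"true effect vs. nothing:     {TRUE_EFFECT:.1f}")
print(f"naive treatment-control gap: {naive:.1f}")
```

The naive gap lands far below the true effect: it is the "Washington apple versus Washington apple" comparison, not "apple versus nothing."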
21:55Russ: Well, let's look at more traditional techniques. We had Josh Angrist as a guest on EconTalk, talking about what some call the credibility revolution in econometrics--new research designs and other measures to avoid estimation challenges when we apply econometrics to microeconomics. Are you a fan of that literature and those new techniques? Guest: Well, I'd say two things. First of all, the so-called new techniques are not so new. They involve Instrumental Variables, which I think go back to Sewall Wright or his father, Philip Wright, 1928 or so. Secondly, Instrumental Variables have been a central part of econometrics for the last 70, 80 years. So, the methodology of instrumental variables is not new. I think that the so-called-- Russ: Just to clarify, for non-econometricians: Instrumental Variables are ways to try to control for the worry that causation might run in both directions, or that there's a bias in your estimation because of the complexity of those interactions. Right? Guest: Right. Well, for an Instrumental Variable (IV), you can think of an experiment like we were talking about as an example. So, for example, you can think of randomly assigning somebody--forget about the problems with the experiment. If you randomize, what it does is it's neutral between treatments and controls. Any unobservables among the treatments will be balanced with those among the controls. In a randomized trial, the randomization assigns some people to a treatment and denies others treatment. That's in an ideal world. That's what an IV does. But it's not just a totally random assignment. It's assumed--and it's a big assumption--that it balances more or less the unobservables between the treatments and the controls. But the application of the instrument--you know, it moves people toward one direction versus the other.
So, you can think of randomization as just a special case of Instrumental Variables. So, yes--sorry not to define it. Russ: That's okay. Guest: But this whole idea of the credibility revolution--it's very good; it's good for sales; and I'm very happy to see sales and consciousness [?]. But I think, with the idea of the so-called credibility revolution: first of all, we have to properly attribute a lot of this basic thrust to Ed Leamer. He had a book, written in 1978, called Specification Searches, where he raised a lot of questions which are still on the table today. In fact, I would say a lot of the work in econometrics about robustness and sensitivity was presaged and pretty well described in that 1978 book by Leamer. Which I think is still available online. But I think the idea of the credibility revolution came from this--and there is some value in being aware of it--that a lot of conventional econometric procedures--you know, assumptions about linearity, assumptions about normality, distributional assumptions, functional form assumptions--did, and were documented to, actually change the nature of the empirical work that came from them. And it was very hard sometimes for people to reproduce the findings of one study in some other study. And so it became kind of a cloudy, cloudy world out there. People weren't sure what they were getting from all of these models. So I think there was some general thrust that ran through the whole economics profession, starting in the mid-1980s, about the time Angrist and others [?] this credibility revolution. We were starting out in [?] graduate school--a lot of the previous structural work had really not delivered on its promise. There was a lot of fragility. So, fragility is the key here.
But unfortunately, what I see as one of the negative sides of this so-called credibility revolution is a lack of interpretation of what's being estimated. I think the goal of econometrics, as opposed to statistics, is to ask economic questions and to answer those economic questions. And that means you start with a question; and the question is, 'Why am I doing this, and what economic question am I developing? What am I really answering?' Take, say, Cogan's work. When I change the negative income tax, or when I make the incentive scheme for work steeper--for example, if I reward work more by paying higher wages or letting people keep more of their earnings--do I get a greater labor supply response? Or do I get less? This was cast in terms of what classical economics would call income and substitution effects--you know, substitution effects moving people toward something, compensating for real income; income effects making people wealthier, buying more goods that are desirable. Unfortunately, the credibility revolution has taken this notion that there's some missing variable out there, some unobservable, and that we want to control for that unobservable, to a new level, to a new extreme. So much so that there seems to be an obsession with making sure that we don't have this unobservable contaminating our result, without asking the question: What is it that we are getting from this instrument? What is it we are getting? So it's kind of traded away, so that we're more credible--there's less bias--but it's less credible in the sense that we don't know what we are estimating. And so what's happening is that much less use of economics is being made. And as a result it becomes very difficult to use this for policy purposes, for anything. The high point of this credibility-revolution kind of work, or the instrumental-variable type of work, would be: suppose that I have a policy, and I impose it in a given environment, even though literally it's not randomly assigned.
I impose, say, in one state a certain kind of withholding scheme on a tax payment--say, Social Security taxes. Say I do this for Georgia but I don't do it for Mississippi, and I standardize for differences in the ethnic and social compositions of those two states. That's going to answer a very specific and useful question: if I impose that tax on a certain group of people whom I study, how much does that tax change their behavior--say, retirement or labor supply or work behavior or unemployment search behavior? That will be a specific thing. But generally speaking, it's very costly and very unrealistic to imagine that every policy we are ever going to face or ever be interested in will be a policy that we can exactly replicate. Generally we think that substitution and income effects--these basic economic parameters--are what govern responses to policy. And the whole promise of econometrics back in the 1940s, when it was really started in a rigorous way here at Chicago by the Cowles Commission, was to try to uncover the basic economic parameters that govern behavior. So I think there's been a huge shift away from trying to understand behavior and toward statistical artifacts that are hard to interpret as responses to economic questions. So I think the credibility revolution has been somewhat overstated and probably not properly appreciated as having really turned the focus away from serious economic analysis toward something that I think is more purely statistical.
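The IV logic discussed above can be illustrated with a toy simulation. The model, coefficients, and noise levels below are all hypothetical, invented only to show the mechanics: an unobservable moves both the "treatment" variable and the outcome, so a naive regression is biased, while a coin-flip instrument (think of a randomized offer) that shifts the treatment but is independent of the unobservable recovers the true effect.

```python
import random

random.seed(2)

# Hypothetical model (illustration only):
#   x = z + u + noise      -- the unobservable u also moves x (confounding)
#   y = BETA * x + 2 * u   -- true causal effect of x on y is BETA
# z is a coin flip, like a randomized offer: it shifts x but is
# independent of u, which is exactly what an instrument must do.
BETA = 2.0
N = 20_000

z = [float(random.random() < 0.5) for _ in range(N)]
u = [random.gauss(0.0, 1.0) for _ in range(N)]
x = [zi + ui + random.gauss(0.0, 1.0) for zi, ui in zip(z, u)]
y = [BETA * xi + 2.0 * ui for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)  # naive regression: biased, u moves both x and y
beta_iv = cov(z, y) / cov(z, x)   # Wald/IV estimate: uses only z-induced variation in x

print(f"OLS estimate: {beta_ols:.2f}  (biased upward)")
print(f"IV estimate:  {beta_iv:.2f}  (close to the true {BETA})")
```

Note the trade-off Heckman describes: the IV estimate is "credible" in the narrow sense of removing the confounding bias, but by itself it says nothing about what economic parameter the z-induced variation identifies.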
30:01Russ: Well, let me take an example that is talked about a great deal, and I want to set it in the context of what I often hear from younger economists. Guest: Yes. Russ: I'll hear people say things like, 'Well, I just looked at the data. I just look and see what the data tell me.' And, 'I don't need theory,' or, 'I don't want to use theory to bias my understanding of what's going on in the data.' An example of that would be the minimum wage debate. So, when I was growing up and when you were growing up--you are a little older than I am, but we are of somewhat similar generations on this issue--there was no debate. There was an overwhelming consensus among economists. In fact, what made economists distinctive from everyone else was that we thought that minimum wages had a cost: they hurt employment opportunities for low-skilled people. And Card and Krueger came along; they did the state-boundary kind of comparison you were talking about; and they found very different effects than the traditional econometric literature. And that spawned an enormous literature suggesting that minimum wage effects on employment are either small or even positive. But mostly close to zero, as a lot of people would argue. I wouldn't, but that's what they say. What do you think of that debate? Guest: Well, it's interesting. That's a very good example, Russell. So, let me step back for a second. First of all, some of the most recent work--for example, a recent paper that Tom MaCurdy published in the Journal of Political Economy last spring--suggested that in fact there are other mechanisms by which firms can respond to higher wages. And work by a student of Card, actually, at Berkeley, in a minimum wage study in Hungary a few years ago, is consistent with this--and that is that firms can actually increase prices. So, instead of reducing employment they can actually increase prices. And they can pass it along.
It largely depends on the price elasticity--how inelastic demand for the final product is. So, just from basic economic theory, it's not necessarily always going to be a reduction in employment. I mean, that's going to be a force in that direction, but there are other ways that firms can respond to higher cost shocks. So that's one thing on the table. The second thing is that if you look at the Card and Krueger analysis and a lot of the subsequent analysis that came from that line of work, I think some of it was fairly casual. To put it mildly. And I don't think the body of work--if you look at some of the work by David Neumark and some of the other analysts who have looked at this quite carefully--I don't think the large thrust of the work is actually saying that minimum wages have no effect on employment. I think that study in particular had certain issues that were pointed out by Neumark and by others, about just what else was going on in those two states at that time. You see, from the casual observer's point of view, you kind of say: I'm on one side of the Delaware River and you're on the other side. 'Here's New Hope on one side in Pennsylvania, and there's a counterpart just across the river, and those should be pretty similar.' But there are a lot of state policies that are different, and compositions are different. And so it wasn't quite as easy, I think, as people wanted to make that comparison. In this way it sounded very compelling. But as you examine it more closely, I think people started thinking, well, maybe there really could be some effects. So, in terms of the minimum wage debate, I think it's still ongoing. There are cases, theoretically, where if your firm is a monopsonist, for example, a minimum wage might actually increase employment. That's a classic case--Joan Robinson, I think, had that case or some version of it in the 1930s.
But I think more generally the evidence does suggest that the structure is one towards increasing costs; and then the costs are passed on in various ways. So I think the debate--you see, what was compelling about that, and I will say it was compelling, if you read the book--was that it looked at the surface to be a very, very nice comparison, like a natural experiment where you had an increase on one side but not on the other side. But also don't forget that another key point that also frequently gets lost is that the range of changes in wages that were being considered in those studies was actually fairly limited. These were fairly small changes in the minimum wage. I think when we get to changes like the coverage of Puerto Rico by the U.S. minimum wage in the 1930s, or even today, we are getting huge increases in the minimum wage, where you are moving the bottom of the distribution up to the median; and I think any economist, including Card and Krueger, would argue that those are changes that would probably lead to substantial disemployment effects. What I'm saying is that minimum wage changes are not all the same. Some are bigger, some are smaller. So, if I were to tell you that if you smoke one cigarette a day you are not going to be hurt that much, as opposed to smoking three packs a day, I don't think you'd be too surprised. And I think a small change in the minimum wage is not going to have much of an effect. I think that's what the findings have been. And David Card, anyway, when he's been asked about this, has said repeatedly that they are talking about modest changes in the minimum wage. Which is different from the parameter of saying: what happens if I boost the minimum wage by 50%? There's got to be some response to that. It's just out of the range.
And this is the kind of counterfactual--the idea of a policy parameter that we haven't yet seen, except maybe in the case of Puerto Rico--that would be very important to know in designing policy, but that a simple available observational study and simple experiments won't track. So that's why I think--I think we really have to be very careful. And again, that's the role of economics in interpreting the data. So I think that's the part--so, coming back to the credibility revolution, I think part of the incredibility of the credibility revolution has been its unwillingness to kind of use economic models, even simple economic models. And this is not just an aesthetic appreciation. This is a sense of trying to think about how to interpret and generalize. So, the most purely empirical procedure would lead one to say, 'I'm going to be purely inductive. I'm only going to look at regularities that I've seen in the past.' But the trouble is the world is changing. It's always changing. And we need to try to extract from the past some behavioral regularities that we can use as a guide to interpreting and analyzing policy. I think that's gotten lost, in both the minimum wage debate and the larger issue of the credibility revolution.
37:35Russ: Let me raise a broader concern that has been a common topic on this program; recently it was raised in a conversation with Noah Smith. Despite my training at the University of Chicago--and a good chunk of that came at your hands--it's striking to me how rarely econometric evidence is decisive in creating a consensus about public policy or knowledge about a particular area. I'm struck by how easy it is for advocates, whether they are ideological or methodological advocates, to dismiss empirical work as indecisive or flawed--whether it's experimental work or more traditional econometrics. Where do you think we stand on that? How much progress have we made, say, in accumulating the kind of knowledge that I think both you and I think is the right kind--the structural relationships that allow us to predict? I'm very skeptical of our ability to do that reliably, given the complexity of the world and the kind of challenges you've been talking about. What do you think of that worry? Guest: Well, I think it's a legitimate worry. And it worries me a lot. What worries me is something more general, not just about empirical work: the non-cumulative nature of a lot of work in economics. I'm thinking now more of macroeconomics than microeconomics, where we're seeing cycles. In some parts of macroeconomics we are back to the Phillips Curve, which was supposed to be dead and buried 40 years ago, and it's now alive and well and blossoming in central banks and public policy discussions in some quarters. So I think part of it comes not from the fact that it's econometric, but from the fact that certain parts of economics lack data. I mean, that's the fact of the matter. So what is offered as a fact just isn't a fact. So, like in the late 1960s when I was a graduate student, people were talking about the instability of the Phillips Curve.
There were things called Lipsey Loops, and what was happening was the Phillips Curve was shifting all around, and people knew that that was an unstable object; and there was a gradual awareness of it, and some theories were developed to try to explain that instability. Which, I don't think the theories were ever fully confirmed, but they were at least appealing; at least over a period they explained some of the stagflation and some of the instability in macro phenomena. So I do think that there is a group of economists, not insubstantial, that sort of goes through the motions of doing careful empirical work in economics, but either lacks the data or lacks the integrity, or some combination of those two--or lacks the caution, maybe that's the right word, not integrity--to put the data in its context and say, 'I really can't say something very strong about this.' And this is true for a lot of models. In macroeconomics and other parts of economics there's a practice called calibration. The calibrated models are models that kind of look at some old stylized facts, that put together different pieces of data that are not mutually consistent. I mean, literally: you take estimates of this area, estimates of that area, and you assemble something that's like a Frankenstein that then stalks the planet and stalks the profession, walking around. It's got a labor supply parameter from labor economics and it's got an output analysis study from Ohio, and on and on and on. And out comes something--and sometimes a compelling story is told. But it's a story. It's not the data. And I think there's a lack of discipline in some areas, where people just don't want to go to primary data sources. And I think you are right, Russell, and it bothers me--that what we know as economists is much more limited than many people carry on as if they know. And so I've become aware of that. Just the humility of knowledge.
You know, the old statement by Hayek--this pretence of knowledge question. Which I think is real: there is a sense in which among professional economists, among professionals generally, you value rigorous, carefully reasoned work, which frequently means highly rigorous formal mathematical models. And yet what people are sometimes afraid to admit is just how rough-edged this stuff is. I don't think it's all bad. I think there are some basic factors that are there; and I think we can learn from them. But I do think that there is a kind of a lack of humility in the face of data. But you come back to--you phrased a really important question about: Should we [?] go back to purely empirical discussion about economics? Should we just let the facts speak for themselves? That is a recurring fallacy. And I remember--if you think back--and I don't remember this; I was a little baby or little child in many ways. But back in the 1940s at Chicago, there was a debate that broke out; and it was a debate really between Milton Friedman and Tjalling Koopmans. Although it wasn't quite stated that way, it ended up that way. And that was this idea of measurement without theory. Could you do measurement without theory? Arthur Burns, and for that matter Friedman, were trying to chart the business cycle. Burns and Mitchell [Wesley Clair Mitchell--Econlib Ed.] in particular were trying to chart the business cycle, but using a very a-theoretical approach. Very hard to interpret what it is they had and what its relevance was for predicting the future. So, it led to a big controversy, which continues to this day: measurement without theory. And so, it's very appealing to say, 'Let's not let the theory get in the way. We have all the facts. We should look at facts. We should basically have a structure that is free of a lot of arbitrary theory and a lot of arbitrary structure.' That's very appealing. I would like it.
The idea that we have is this purely inductive, Francis Bacon-like style--not the painter but the original philosopher. But the problem with that is, as Koopmans pointed out, and as people pointed out: every fact is subject to multiple interpretations. You've got to place it in context. So, it's not like--I think what it is--see, this is a case of reaction and over-reaction. So there are these rigidly-specified models that nobody believes--I think that's true--and somebody comes along and says, I'm going to do this, I'm going to do that. And even though that's very appealing and it leads to a lot of ergs[?] of energy and brainpower, nobody believes them. Because they are not robust. But on the other hand, somebody comes along and says, 'See, here's a simple fact,' like we were just saying: 'wage inequality has gone up.' Right? 'And household inequality has gone up. Therefore the economic system is failing.' Well, think about what we were just saying earlier. The family is changing. That's a fact. That's a matter of interpretation. Russ: Different fact. Guest: A different fact. But then you ask: Well, why is the family changing? That's a deeper question. But every one of these--there is no such thing as an objective fact, out there. And there have been a lot of controversies; and the literature will continue long past our lifetime. So, people will say, 'Let the facts speak for themselves.' But in fact, the facts almost never fully speak for themselves. But they do speak. So the question is just to be, you know, sensitive to the facts and to interpret them in ways that allow us to be more robust. So I'd say in a lot of areas of economics, we have less knowledge than we think we have. So I think there is a pretence to knowledge. I think there's a lot less than many people think they know.
I think--you look at conventions--I remember Dale Mortensen, the famous economist at Northwestern who got a Nobel Prize for his work on search theory: he and I were going to a conference together in Spain many years ago. And I remember, we're on the plane; we had a good chance to talk about a lot of things. And he was reporting some estimates, some numbers, and I said, 'Well, Dale, you know--I don't know about these numbers. Do you think that if I wasn't running with the club here and I just did my own independent research that I would come up with your numbers?' And he smiled, and laughed: 'No. We all agree that this is this, this is that.' I guess progress is made that way. And I wasn't willing to go along with that progress. But I think one has to have a certain humility. And I just have to know--that, you know, we know a lot less than we think we know. But we do know something. So I think you are right--that in some sense it's better to kind of be aware of the limitations. But in the end we still need a framework of interpretation. So, even Friedman, who was kind of on the other side--the Burns and Mitchell side--favoring kind of letting the data speak for themselves: Friedman in all of his work used very basic, very sound economic models to interpret the evidence. Permanent income is a great example. And some of his other work--A Monetary History; the Quantity Theory. And so I think every successful body of social science has used basic models, but they've gotten to kind of the core of the idea--not bells and whistles. And bells and whistles are kind of second generation, third generation add-ons. That becomes very, very--professionally and privately rewarding. Maybe not so rewarding for the subject as a contributor to economic knowledge generally and to public policy. But I do think economists have contributed. I think there was an appreciation, right?
After all this effort, gradually people are starting to use the fruits of the Boskin Commission and some of the other commissions that looked at the effect of product quality on the consumer price index level. And I think people are adjusting. There are still arguments about the exact magnitude. But I think that was an economic principle, very simple, well documented, and it kind of made its way into the mainstream. So I think we--I mean, if you think about it, 70 years ago there were no [?] Current Population Survey data; there wasn't the kind of information we have today, which dominates the headlines every few months when somebody finds a new fact. You know--wages have gone up, wages have gone down; employment of blacks has decreased; or this or that. So I think we have a much richer data system. But I think we also have to supplement it with an interpretive system. Otherwise we are just going to have blind facts that can be interpreted any which way.
48:47Russ: Yeah, I worry a lot about the biases we all have and how hard it is to judge those facts objectively when you don't have a theory. And of course we do have a theory. It's just in the background. It would be better, to me, to make it explicit. Guest: Make it explicit. But the best way to do that is not necessarily that everybody agree on one theory. Sometimes it's good to have competing theories. But stating their views in a very open way, and letting people decide. But sometimes the debate can get very, very complex. Think of the discussions about derivatives in financial asset regulation; sometimes, for the average person, even for the average economist who wasn't trained in Finance, those discussions can get very technical, and they probably can't contribute very well, or even understand it well. There's also another sense, which is that parts of economics are hard; and if we were just a little more careful ourselves and held higher standards--if we had a little more internal policing about what we did within specific fields--I think people out there in the world might be willing to defer to economists more than they do now. Russ: Yeah. I'm curious: you talked about the Hayekian--I think of it as Hayekian humility. Guest: Yes. Yes. Russ: Has that changed over your lifetime, in your career? For me, when I was younger, I was a lot more confident. As I've said before here, I was pretty sure that my guys did the good studies and the other side had the bad studies. And it was a very painful recognition at one point to realize that actually the other side cares deeply, is trying their hardest, and suffers from the same cognitive challenges my side suffers from. And their data is flawed, and our data is flawed, and our models are not so robust. Has that been a big part of your mindset for a long time, or is it something that you've come to as you've gotten older?
Guest: Well, it's--I think that's a general process of aging. If you do empirical work, as I do, and you get into issues, you inevitably are confronted with your own failures of perception and your own blind spots. And I think the profession as a whole is probably much better now. I mean the whole enterprise is bigger to start with. You are getting a lot of diverse points of view. And the whole capacity of the profession to replicate, to simulate, to check other people's studies, has become much greater than it was in the past. I think the big development that's occurred inside economics--and it's in economics journals and in the profession--is that if people put out a study, except for studies based on proprietary data, those studies essentially have to be out there and able to be replicated. And it's literally been the kiss of death for people not to allow others to replicate their data. And that's a good sign. I think that's a really good sign. I don't think that would have been true 50 years ago to the same extent it is now. So I think the whole profession's into replication, into basically trying to [?] closely, to look more deeply at what other sides are doing--so I think we're in a better position to actually check each other. And I think that's a major improvement. So I think that's been a drift independent of my private aging. But I also have this sense--when I was younger, I certainly think--don't forget, I grew up--I came of age, if you will--in the late 1960s. And at that time there was a hubris. I think it was a hubris that was more centered in macro than in micro. And it certainly influenced my thinking, though, that you could basically control the business cycle and you could use econometric models to predict a lot of things. And then all that started unraveling within my lifetime, even my early lifetime. And people started questioning: Can we do this?
The original Klein-Brookings model that was put out in the 1960s--I remember as a graduate student reading it: it had more equations and more parameters than it had instrumental variables, or any kind of credibility. And its equations weren't even mutually consistent as a system. And then the final blow came when a friend of mine, a professor, a little bit older, ended up working for Klein. And he pointed out how Klein's model predicted so well: when Klein got predictions from his econometric model, he would then adjust them, using his insights. Russ: Yeah. There you go. Guest: So, it wasn't some triumph of econometrics. It was basically a triumph of Klein's common sense. So I think the whole profession probably went overboard in the 1960s and 1970s about the ability of economic models to predict. And I think that led to the backlash that now we think of as the credibility revolution. And I think that--yes, I think we've all come to recognize the limits of the data. But on the other hand, I think we should also be amazed at how much richer the data base is these days--how much more we can actually investigate. For example, we can look at aspects of time use surveys. We can look at aspects of surveys of individuals incarcerated. We can look at trends in areas. We have detailed scanner data, which allows us to look at transactions at stores--individual transactions--to identify what quality changes are, how to adjust price indices. So, even though we've got a long way to go, I think we've come a long way, too--from the 1920s and 1930s, when there were almost no U.S. aggregate economic or microeconomic data. Now we have a large body. So I think the empirical side of economics is much healthier than it was before--I mean long before, going back to the 1920s and 1930s. That was just a period with no data. So I think we have a better understanding of the economy than we did.
And I think that's still there. And I think we have better interpretive frameworks than we had before. And I think understanding the non-market sector, thinking more broadly about demographic trends, and appreciating them--these are things that we shouldn't overlook, and we shouldn't understate where we've come from. We've come a long way.
55:43Russ: Talk about your mentors--your influences, your intellectual influences when you were younger, and certainly today. Guest: Well, when I was young, I mean, I had some very good colleagues at Princeton. I certainly was influenced by people like Richard Quandt and then some younger people you wouldn't know so well. People like Harry [?Collegian?], who was out of Maryland, retired, and Stanley Black--these are people who interacted with me. Orley Ashenfelter was a graduate student with me when I was at Princeton, but he was a couple of years older than I am. But we interacted a lot, and as peers we were both feeling, you know, feeling our oats. And there was this econometric revolution where we could say, 'Look, in Labor Economics there was almost no application of consumer theory, in terms of econometrics.' So we would read Theil's books [Henri Theil--Econlib Ed.] on consumer demand and apply them, estimating labor supply. So there was a lot of excitement. And at Princeton, William Bowen was doing a study of the labor market which was at that time very, very new--just a factual study. But over my lifetime, some of the great influences, some of the people who have influenced me the most, have been some of my colleagues here at Chicago. And I would put Gary Becker first and foremost in that group. I came in--you know, a young guy, anxious to prove myself. And like a lot of people, very interested in Gary Becker's work. And I found it fascinating and humbling. So I had Jacob Mincer as my first colleague, at Columbia, my first job. And I kept in close touch with him throughout his life. And also then with Gary Becker, with whom I interacted; we taught classes and shared seminars, and so forth. So I felt those two were important mentors. But I felt also, for example in macroeconomics, people like--I don't want to say 'mentors'--I probably stopped having mentors years ago.
But I've interacted with people like Burt Singer, who is actually not even an economist--he's a statistician. And I would say a polymath who has a range of ideas. And we've interacted broadly over a whole range of topics. He's gotten interested in economics; I've gotten interested in topics in statistics. We wrote some econometrics papers together. And then, in macro, I found some of the greatest influence from Lars Hansen, who--even though we've written only a few papers together--has an incredible range of skills. And my general group of high-quality colleagues at Chicago; they have been a very good thing. Early on, you know, when I came to Chicago, Friedman was still running workshops. And I treasured the time when I first came here, being able to go out to dinner once a week with Friedman. And he came to lunch conversations, and we had all these great discussions with Friedman, Stigler, and Becker. Those three people, very good. Robert Barro was here. There were a lot of exchanges, some of them fierce. But some of them very good. And all of them with intensity. So I felt that the whole structure of colleagues here--William Brock, who was here at the time as a theorist, very heavily engaged. But look, when I arrived here there was a lot of activity going on. And I don't want to say it's mentoring, but people like Merton Miller and Myron Scholes--and you could go down a long list--Greg Lewis was still here at that time; I inherited one of his classes and interacted with him for a while in Labor Economics. And then, gradually--and then Lucas arrived and there was stimulation on that front. Both positive and negative at times--but sort of feeling how he felt macro should be a little more careful than it was. But nonetheless I got a lot of stimulating ideas. So, between the Business School here--I mean, Coase was over at the Law School.
Posner and Landes and Coase were actually very active in the Stigler Workshop, the Law and Economics Workshop, which was half IO (Industrial Organization) and half Law and Economics. So I remember many stimulating workshops. So I can go through a long list of people that I found stimulating. And it continues. Plus the students. Getting students like you, getting students like MaCurdy--whom we talked about earlier--other students. I've had a whole host of students who in some sense have mentored me. They challenge me with questions. They force me to rethink issues. They are my best critics. And they are frequently sources of ideas. It's mutual: I can sometimes help them; they can help me. But the whole atmosphere at Chicago has traditionally been give-and-take exchange, people going back and forth. And I think that back and forth has really been an integral part of my own training. So, when I got here to Chicago, I really became very enthusiastic about the structure of the place. And it was hard not to get caught up. And at one point, there was a very famous economist, now mostly forgotten, named Harry Johnson, a trade economist. He had a huge influence on me at the time. He actually was a Keynesian, kind of opposed to Friedman. So, Friedman and Johnson were both here. There was a little bit of unpleasantness about it, because Johnson had written this paper claiming that Friedman had falsely characterized the quantity theory at Chicago. But the fact of the matter is they were both very bright, and they would exchange ideas. And it wasn't like I was caught between them. I could actually get stimulated by both of them. And they were both very open to me. And I really found I learned a lot about a lot of different things. So, it was a very exciting place. And it still is. The generations change. And I mentioned some pretty tall timber there. But to me it was very stimulating. Just the whole range of it.
To go to the Stigler workshop, for example, and see this range of ideas--literally the first year I was here, or I guess it was the second year I was teaching here, I actually became aware of this work. And Posner had written a book called Economic Analysis of Law. And so I used that as a textbook, actually, in a course in Price Theory for undergraduates. And it really stimulated all the undergraduates. And at that time I remember talking to Landes, Posner, and Coase. So all these ideas were in flux, and we could argue back and forth about this, that, or the other thing. So in that sense it was really tremendously--but it was the case, though, dealing with Friedman--Friedman raised for me some of my first concerns about what you want to call the credibility revolution. I remember telling him about some work that I was doing. And as you remember, some of the work I was doing was very complicated for its time. Computation. And I remember I was sitting in Becker's house and I told Friedman about what I was doing. He looked at me and said, 'You know--' looked at me just straight out, bluntly, and said, 'you know, it looks like in that kind of work, there's a lot of room for fraud.' And I looked at him and, you know, I said, 'Well, yes. Deliberate or maybe not so deliberate.' And he agreed. And he's right. There was a lack of reproducibility. So Friedman was onto the credibility question. And it was indelicate. So it sent chills down my spine. But I also recognized that he was right.

Comments and Sharing


COMMENTS (14 to date)
jw writes:

So Prof Heckman is rightly concerned about the challenges of modern econometrics and its effect on policy. Let’s take a quick look at some macroeconomic experiments in policy and see if the most basic scientific principles apply:

Inequality – Piketty and others find inequality everywhere and can only envision more redistribution as the solution. But they refuse to account for after-tax or post-redistribution effects in their studies. They do not take into account the huge quality-of-life improvements that Heckman mentions and that Rector writes about, or the tens of trillions of accrued benefits that Martin Feldstein writes about, or the ever-changing composition of the top X% which Sowell has documented for years. So are these inequality-fixated economists unaware of these studies, or does their bias for central planning overshadow their intellectual integrity?

CBO - The CBO recently released its forecast for the next ten years. So we have a documented prediction of a government agency with access to scores of PhD economists, mathematicians, statisticians and computer scientists. It projects that the debt held by the public will increase from $13T to $24T, with no recession forecasted for ten years. Luckily, we have the 2005 CBO report still online and we can check how this experiment worked before. Then, it also predicted no recession for ten years - it completely missed the largest contraction since the Depression - and predicted that by 2015 the debt held by the public would be $5.6T, for an error of only 130% (close enough for government work indeed…).
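The "130%" figure is simple arithmetic; a quick sketch using only the debt numbers quoted in the comment (these are the commenter's figures, not independently verified data):

```python
# Checking the comment's arithmetic on the 2005 CBO projection:
# $5.6T projected vs. roughly $13T actual debt held by the public in 2015.
projected = 5.6   # trillions of dollars, 2005 CBO projection for 2015
actual = 13.0     # trillions of dollars, rough 2015 figure cited above
error_pct = (actual - projected) / projected * 100
print(f"projection error: {error_pct:.0f}%")  # roughly 130%, as the comment says
```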

Fed – The Fed has hundreds of PhD economists, mathematicians, statisticians and computer scientists and a budget of $4B to think deep thoughts. They also completely failed to forecast the Great Recession. When they originated QE, they predicted that real GDP growth from 2010 to 2012 would be a cumulative 11.1%. This was only three years, not ten, so the accuracy should increase, which it did. The Fed was only off by 43% in their three year forecast, which even included additional rounds of QE. They also forecast that the long term real GDP growth would settle at 2.6%, a level still not seen since QE began.

So with our best and brightest failing to produce a prediction anywhere close to reality, one would think that Heckman and Russ’ discussion about Hayekian humility would be germane. Alas, in the current Presidential race we have an avowed Socialist and a near socialist leading on the Democratic side and a Lord-knows-what leading on the Republican side all confidently promising that they know best how to centrally manage an $18T economy of 330M souls. Worse, younger voters are leaning towards the siren song of the Socialist, despite the worldwide failure of over a century of socialist experiments.

I think that it is fair to say that macro-econometrics is still far from a science and that even if it were, politicians and voters would continue to ignore it.

Dan Hanson writes:

It would be nice to see discussions of econometrics and experimental economics acknowledge complexity theory and the reality of the economy as a complex adaptive system. Thinking in that paradigm changes much of what we believe we can know about the economy and how much we can discover through natural experiments, studies of historical responses to economic shocks, etc.

For example, complex systems theory tells us that if you watched the economy respond to a shock (a stimulus, a financial collapse, tax changes, whatever), and then you could roll back time and re-apply the same shock to the same economy, the output the second time might be wildly different due to tiny changes in input. And that's starting from the exact same point.
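The sensitivity the comment describes can be illustrated with a toy chaotic system. The sketch below uses the logistic map, a standard textbook example of sensitive dependence on initial conditions; it is purely illustrative, not an economic model:

```python
# Toy illustration of sensitive dependence on initial conditions:
# two trajectories of the chaotic logistic map that start almost
# identically end up far apart.

def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1 - x)

a, b = 0.400000, 0.400001   # initial states differing by only 1e-6
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny initial difference is amplified roughly twofold per step,
# so the trajectories eventually differ by order 1, not order 1e-6.
print(max_gap)
```

Run it and the maximum gap between the two trajectories is of order 1: a one-in-a-million perturbation of the starting point produces a wildly different path, which is the "roll back time and re-apply the same shock" point above.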

Then there's the 'adaptive' part: How the economy responded to a stimulus in 2001 might not even be remotely close to how the economy in 2016 would respond, because the economy in 2016 has already internalized and modified itself due to the earlier change, and will respond differently. So historical analysis is suspect.

The truth is that the economy, like other complex adaptive systems, is fundamentally opaque, chaotic in its behavior within some limits, extremely sensitive to small changes in input, and unpredictable in its future behavior. We've proven that time and again by examining the track records of economic prediction, even when carried out by large groups of the best economists around. And yet we still pretend that we can have valid experiments and predict how the economy will behave by looking at past behavior, even though we have ample evidence that it's not true.

This is relevant for economists on both the right and left. Where's that hyperinflation the right was predicting? Where are the large multipliers the left was predicting? We are constantly surprised by what the economy does relative to what we think it will do, but then we ignore the evidence of our experience and go right ahead making further predictions and treating them like they are something useful and not just educated guesses and noise.

jw writes:

I absolutely agree.

Hussman (PhD Econ, Stanford) had an excellent article on this a couple of weeks ago. And as much as I respect his analysis, he has been wrong in his market valuation calls for quite a while (or maybe just really early...).

Walter Clark writes:

I constantly remind my fellow commenters at CafeHayek that being unemployed by a rise in the minimum wage does not work as an argument against the left. When relieved of employment, workers are taken care of to some extent by the State, and even though the dollar amount is less than the minimum wage, the one relieved of employment is also relieved of (what the left considers) slave-like conditions and is now able to spend quality time helping around the house.
This has economic significance in that it places a natural minimum wage at whatever the going support structure is equivalent to, in dollars per hour. The benefit of not having to go to work should be added in as well. Once the natural minimum wage is expressed in dollars per hour, it should be easy to see why raising the government-enforced minimum wage would make no difference in the employment level of those just below that natural minimum.
It should be pointed out that in the absence of the welfare state, the reward to those who donate to, or are directly part of, charities would be substantially increased. Increased reward means more support from them than is seen today. Although probably not as great as socialized charity, this too would create a natural minimum wage.
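The "natural minimum wage" idea can be written as a simple formula. The specific dollar figures below are invented placeholders for illustration, not estimates from any study:

```python
# Hypothetical sketch of a "natural minimum wage": the hourly wage at
# which working just breaks even against not working. All numbers are
# made-up placeholders, not data.
annual_support_if_not_working = 20_000  # assumed support from the State
annual_value_of_leisure = 6_000         # assumed benefit of not going to work
hours_per_year = 2_000                  # full-time work

natural_minimum_wage = (annual_support_if_not_working
                        + annual_value_of_leisure) / hours_per_year
print(natural_minimum_wage)  # wages below this level should not move employment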

jw writes:


I am not sure that I am completely following you, but I think that you might be interested in this famous chart by the PA Secy of Public Welfare:

It shows that an unemployed single parent with two children in PA would receive $45K of income and benefits. Someone making the minimum wage would actually receive income and benefits in excess of $55K. So their marginal total take home pay and benefits would be $10K for 2,000 hours of work, or only $5/hr.

Ironically, if the minimum wage were increased to $15/hr, they would actually net less total wages and benefits.

It also shows that a rational actor would stop earning at $29K, as at that point they would be as well off as someone with no government benefits and wages of over $69K.
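The arithmetic behind those figures can be written out explicitly. The dollar amounts below are simply the round numbers quoted above, taken at face value rather than verified against the chart:

```python
# Rough effective-wage calculation using the round figures quoted above.
benefits_if_not_working = 45_000   # annual income + benefits, no job
total_at_minimum_wage = 55_000     # annual income + benefits, minimum-wage job
hours_per_year = 2_000             # full-time work

marginal_gain = total_at_minimum_wage - benefits_if_not_working
effective_hourly_wage = marginal_gain / hours_per_year
print(effective_hourly_wage)  # dollars actually gained per hour worked
```

A $10K gain spread over 2,000 hours works out to $5/hr of marginal reward, which is the welfare-cliff point.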

People do what they are incentivized to do.

Walter Clark writes:

Thanks jw for the link and its summary.

Does anyone know if a study has been made of the dollars-per-hour equivalent of not having to go to work? Is there even a term for it? Is it different for different kinds of employment?

I want to call attention once again to the fact that the economists I read never talk about a natural minimum wage. I feel that if they don't take it into account, they will be forever arguing on the back side of the power curve to explain Card-and-Krueger-type reports.

Charlie writes:


I am interested to know more about what Dr. Heckman was referring to when he talked about the changing composition of individuals in the 90th and 10th percentiles (around the 10 minute mark) of an income distribution, so that comparisons over time are problematic.


jw writes:


Please see this 2007 article by Sowell:

and 2011:

"Because most people who are in the top 1 percent in a given year do not stay in that bracket over the years.

"If we are being serious — as distinguished from being political — then our concern should be with what is happening to actual flesh-and-blood human beings, not what is happening to abstract income brackets. There is the same statistical problem when talking about “the poor” as there is when talking about “the rich.”

"A University of Michigan study showed that most of the working people who were in the bottom 20 percent of income earners in 1975 were also in the top 40 percent at some point by 1991. Only 5 percent of those in the bottom quintile in 1975 were still there in 1991, while 29 percent of them were now in the top quintile.

"People in the media and in politics choose statistics that seem to prove what they want to prove. But the rest of us should become aware of what games are being played with numbers."

Trent writes:

Another very interesting podcast.

I found the discussion regarding unexpected actions by participants in randomized experiments fascinating. It raises the question of whether there can ever be a 100% objective, problem-free experiment involving humans...and it also lends credence to one of Russ's long-held points about how econometric analyses don't convince 'the other side' - they can always find problems with the studies they disagree with (and no doubt ignore the problems with the ones they agree with).

In reading the comments, I noted jw and his comment: "People do what they are incentivized to do." I certainly agree that incentives matter...they alter behavior. But I don't think we can say that if we offer Incentive A, people are definitely going to do X. Some might, but others will respond in ways we didn't expect/couldn't predict. Hence the problems associated with designing human experiments.

jw writes:


I absolutely agree and didn't mean to imply that it was as simple as a carrot and a stick.

As with everything, there are distributions, from the teenage worker in their first job getting minimum wage and no government subsidies to the comfortable young lady profiled on "60 Minutes" a few years back who was proud of her ability to game the system to get unemployment benefits (and myriad other subsidies) almost indefinitely by working only a few weeks a year.

Somewhere in between are enough people to consume hundreds of billions of dollars of subsidies while at the same time the labor force participation rate drops off a cliff. One can also look at the large increases in disability claims and EBT participation. Millions of working-age, able people are simply choosing not to work anymore. I am not trying to make judgments; they may be making extremely rational choices for their situations (possibly morally hazardous and sub-optimal in the long term, but rational in the short term).

Whatever their motivation, economic hardship is not the driver to seek work that it once was (again, for the majority, as there are certainly exceptions).

Eric Howard writes:

Yes, another fine podcast. Thanks so much for all you and your team do in producing these, Russ! I teach principles of macro/micro and I am constantly highlighting specific EconTalk podcasts for students to listen to.

The discussion with Professor Heckman was fantastic and stimulating on many levels. Two observations regarding the part of the discussion where Roberts and Heckman were focusing on data, objective facts, the presentation of research findings and the methodological perspective adopted by many economists:

1.) Popper: It brought to mind one of my favorite passages from Karl Popper's The Logic of Scientific Discovery, "...observation in the light of theories...":

“My point of view is, briefly, that our ordinary language is full of theories: that observation is always observation in the light of theories; that it is only the inductivist [sic] prejudice which leads people to think that there could be a phenomenal language, free of theories, and distinguishable from a ‘theoretical language’; and lastly, that the theorist is interested in explanation as such, that is to say, in testable explanatory theories: applications and predictions interest him only for theoretical reasons—because they may be used as tests of theories.” (Popper, 1959, p. 59 n. 1).

A related point is made a little more succinctly by criminologist Joel Best:

"No statistic is perfect, but some are less imperfect than others. Good or bad, every statistic reflects its creators’ choices” (Best, 2001, p. 161).

Or, as Heckman himself has put it:

“Econometric methods uncritically adapted from statistics are not useful in many of the research activities pursued by economists. A theorem-proof format is poorly suited for analyzing economic data which requires skills of synthesis, interpretation and empirical imagination. Command of statistical methods is only a par[t] and sometimes a very small part, of what is required to do first-class empirical research” (Heckman, 2001, p. 4).

2.) Piketty on Inequality: One of the things that is striking about Piketty's work, for me, is not so much his specific policy proposals as his methodological positions. It seems that Piketty has often adopted the position of "I'm just letting the data speak for itself" on the one hand while taking some strong positions on the other. Here is a passage which sums up Piketty's position on methods, methodology, and their relationship to models/theory, from his 2015 JEP article:

“Theoretical models, abstract concepts, and equations (such as r > g, to which I return in greater detail below) also play a certain role in my analysis. However this role is relatively modest—as I believe the role of theory should generally be in the social sciences—and it should certainly not be exaggerated. Models can contribute to clarifying logical relationships between particular assumptions and conclusions but only by oversimplifying the real world to an extreme point. Models can play a useful role but only if one does not overestimate the meaning of this kind of abstract operation. All economic concepts, irrespective of how ‘scientific’ they pretend to be, are intellectual constructions that are socially and historically determined, and which are often used to promote certain views, values, or interests. Models are a language that can be useful only if solicited together with other forms of expressions, while recognizing that we are all part of the same conflict-filled, deliberative process” (Piketty, 2015, p. 70).

Piketty seems to express a thinly veiled contempt for models in favor of the historical moment, the social construction of knowledge, and, essentially, "the data." It reminds one of the German Historical School and its aspirations.

These passages (I apologize for the length) capture some of what Roberts and Heckman were discussing relative to data, facts and economic theory. I think we need a much more deliberate discussion of these issues within economics.

Best to all,
Eric Howard

Best, J. (2001). Damned lies and statistics: Untangling numbers from the media, politicians, and activists. Berkeley and Los Angeles: University of California Press.

Heckman, J. J. (2001). Econometrics and empirical economics. Journal of Econometrics, 100(1), 3-5.

Piketty, T. (2015). Putting distribution back at the center of economics: Reflections on Capital in the Twenty-First Century. Journal of Economic Perspectives, 29(1), 67-88.

Popper, K. R. (1959). The logic of scientific discovery. New York: Basic Books, Inc. [Originally published in German as Logik der Forschung. Zur Erkenntnistheorie der modernen Naturwissenschaft in 1934]

SaveyourSelf writes:

@ Dan Hanson
Well said.

@ jw
Well read.

@ Russ Roberts.
I have been trying to understand your objection to statistics for a while now. I appreciate how you have been bringing up the question with many of your guests recently. I was hoping I could provide a succinct answer for you—my own hubris, I know—but each time I begin an answer, the chain of my logic ties itself up in knots. Even so, I will list the things I think I know—more to discipline my own thoughts than persuade you at this point—but I still secretly hope you will find something useful within.

1. Statistics has value! I get the impression Dr. Roberts is taking the extreme position that statistics in economics is not useful and may even be detrimental. I want to argue the opposite: that it is priceless—valuable to the point that all elementary school students should learn its basics right after learning addition and subtraction.
2. Statistics deals with randomness; we do not. Much of the universe—perhaps most of it—is not causally linked. Most associations are random. Humans don't do random. The studies regarding educated people's phenomenal ability to forget and ignore statistical facts ("Thinking, Fast and Slow," by Daniel Kahneman) are hilarious and humbling.
3. Good statistics are precise. Statistical studies, when done well, draw very specific conclusions like: "The null hypothesis is false." Trying to turn "The null hypothesis is false" into a national policy is, well, not statistical. Don't hate statistics for what is not statistics.
4. We won’t ever get it. Even valuable statistical information like, “the null hypothesis is false,” is difficult for humans to understand or remember. It’s a different language, after all, and I don’t mean a different spoken or written language. It is a “non-causal” language. Non-causality slips off our brains like water off a new wax job. That doesn’t make statistics wrong, mind you. It just makes it…unnatural.
5. The brain isn’t wired statistically. Humans find it difficult to think statistically. Even statisticians cannot think statistically when distracted. Apparently we are built to think in terms of causal associations. Statistics has trouble with causality. But there is value in diversity. It is precisely because statistics processes information so differently that it is worth doing.
6. When Bias is the devil, Statistics is holy water. Statistics, done well, reduces bias. Bias is evil because it means, by definition, that we are lying to ourselves—usually without realizing it. In other words, Bias makes educated men more ignorant than the ignorant. So even in the best-case scenario—let's call it "unintentional bias"—we are made more stupid by our efforts to understand. In the worst-case scenario—bias as fraud—Bias is intentionally embedded to alter others' behavior for the benefit of the liar. Now it is true that statistics is complex. It is easy to hide bias in statistics. But statistics, imperfect though it is, is far better at reducing Bias than we are. It, at least, has the potential to reduce bias. Nothing else I know of does.
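Point 3's narrow reading of what a well-done study actually concludes can be made concrete with a toy permutation test. Everything here is synthetic and illustrative: the data, the group sizes, and the 10,000-shuffle count are all made up:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Two synthetic samples. The question the test answers is narrow:
# "If both groups came from the same distribution, how often would
# randomly relabeling the data produce a mean gap this large?"
group_a = [2.1, 2.4, 2.3, 2.8, 2.6, 2.5]
group_b = [1.4, 1.7, 1.6, 1.9, 1.5, 1.8]

observed = statistics.mean(group_a) - statistics.mean(group_b)

pooled = group_a + group_b
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)  # relabel the twelve observations at random
    diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
    if diff >= observed:    # one-sided test
        hits += 1

p_value = hits / trials
# A small p-value licenses only "reject the null hypothesis of equal
# means" for these twelve numbers; it says nothing about policy.
print(p_value)
```

The precise conclusion is exactly as limited as point 3 says: a statement about the null hypothesis, nothing more.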

Richard Berger writes:


"Much of the universe—perhaps most of it—is not causally linked." If so, why are you bothering to understand it?

Why don't you take an hour and listen to the podcast?

Richard Berger writes:

This was an outstanding interview. You were very lucky to have him as a teacher. His commentary is penetrating, spot on, and he is a very gracious fellow.

Comments for this podcast episode have been closed