Russ Roberts

Leamer on the State of Econometrics

EconTalk Episode with Ed Leamer
Hosted by Russ Roberts

Ed Leamer of UCLA talks with EconTalk host Russ Roberts about the state of econometrics. He discusses his 1983 article, "Let's Take the 'Con' Out of Econometrics" and the recent interest in natural experiments as a way to improve empirical work. He also discusses the problems with the "fishing expedition" approach to empirical work. The conversation closes with Leamer's views on macroeconomics, housing, and the business cycle and how they have been received by the profession.



Highlights

0:36 Intro. [Recording date: May 3, 2010.] State of econometrics--the application of statistical techniques to economic questions. A few weeks ago, Tim Harford wrote a piece in the Financial Times referencing a piece you wrote back in 1983 called "Let's Take the 'Con' Out of Econometrics." Harford argued we've finally succeeded in solving at least one crucial problem--it did take 27 years--but we've finally removed the con; we've got more honesty. Particularly he was focused on the identification problem. He was referring to work by Angrist and Pischke, who argued that by use of so-called natural experiments and modern techniques, we've been able to get a much better assessment of relationships in economic data. First, talk about your 1983 piece; what was the con that we ought to be aware of? The con is that depending on what model you select, you can get dramatically different estimates and conclusions. Economists have not spent enough effort alerting their customers to that sensitivity. That's the con--pretending that the data sets are providing more information than they possibly can, because the econometric method requires you to make a complete commitment to assumptions that you have at best a half-hearted commitment to. I was arguing that what we need to do is develop tools that researchers can use to separate sturdy from fragile inferences. Sturdy ones are the ones that don't depend much on ambiguous assumptions. The fragile ones change with a very slight change in the model you happen to use. We need, first of all, tools that will help us sort the sturdy from the fragile conclusions. Secondly, we need a method of communicating that in the articles we write, and a culture that is receptive to that. The culture as it is now is a "maximize-the-t" kind of culture, which is a way of saying find something in the data set--and there are two reasons why there might not be something in the data set.
One is the data set might be too small--what econometricians call "collinear"--and the other is that the assumptions that you need are not really credible. Economists by and large don't want to hear that kind of negative. They want to hear that they are making major conclusions from the data sets. Clarifications: When you talk about a "t", the t-statistic in a statistical study is a measure of how likely or unlikely it is that the relationship you found in the data is due to chance. A high t-statistic would mean it's very likely that this relationship is there and not just some fluke. The word statisticians use suggests that it is statistically significant, which we summarize by saying "significant." But "significant" really means "important," and it's not the same. Would highly recommend that we use the word "measurable" instead. We want to know if this data set allows you to measure the effect; whether the effect is big in an economic sense is a totally different issue. McCloskey and Ziliak book where they attack the whole concept of modern econometrics on the grounds that we've become obsessed with whether the relationship between two variables is significant--it could be very unimportant; it could be small in its magnitude and impact but significantly different from 0, meaning it's not just chance. Key distinction we care about as economists. We don't often have a conversation about what size of coefficient we need for this to be an important effect. Difficult conversation to have. Instead we turn it over to statisticians who decide what's significant or not based on these t-values that really don't have anything to do with the setting and are context-free. Economists need to impose more order in the conversation and not relinquish the most important decision, which is to decide whether or not this is really an important variable or important effect.
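Leamer's "measurable versus important" distinction is easy to see in a simulation. The sketch below is my own illustration, not anything from the episode; all data are synthetic. It fits a simple regression where the true effect is economically trivial, yet the sample is large enough that the t-statistic clears any conventional significance bar:

```python
# Sketch (illustrative, synthetic data): statistical significance vs.
# economic importance. With a large sample, a tiny effect still produces
# a large t-statistic -- "measurable" is not the same as "important".
import math
import random

random.seed(0)
n = 50_000
true_beta = 0.03  # an economically trivial effect, chosen for illustration
x = [random.gauss(0, 1) for _ in range(n)]
y = [true_beta * xi + random.gauss(0, 1) for xi in x]

# Ordinary least squares slope, computed from centered sums.
mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
beta_hat = sxy / sxx

# Residual sum of squares and the classical standard error of the slope.
rss = sum((yi - my - beta_hat * (xi - mx)) ** 2 for xi, yi in zip(x, y))
se = math.sqrt(rss / (n - 2) / sxx)
t_stat = beta_hat / se

print(f"beta_hat = {beta_hat:.4f}, t = {t_stat:.1f}")
# The t-statistic is comfortably "significant", yet the effect is ~0.03:
# the data let you *measure* the effect; whether 0.03 matters
# economically is a separate, context-dependent question.
```

The point: with enough data almost any nonzero coefficient becomes "significant"; deciding whether a coefficient of 0.03 is worth caring about is the economic judgment Leamer says we should not hand over to the t-table.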
5:49 Going back to 1983 article, mentioned being explicit about our assumptions or how sensitive our results are to our assumptions. When economists talk about their assumptions, they are usually talking about things like "I'm going to assume that businesses are profit-maximizing," or "I'm going to assume that individuals maximize their utility." But in statistical work, econometrics, when you talk about assumptions, you talk about very specific assumptions about where the data come from, the way the errors might be distributed, whether the relationship is linear, quadratic, cubic. Critical task in the art of drawing inferences from a data set: how to translate a conceptual framework, theory, model, which by its very nature is a simple version of reality, into a compelling and persuasive data analysis. Your theory might say demand curves slope downward. That's not nearly as complete a statement as is needed for a statistician or econometrician to do the data analysis. The data analysis requires that you select a particular functional form; allow for the fact that this year's consumption may depend on last year's prices as well as today's. Tomorrow's as well--expectations. Have to think about the other variables that are going to drive the demand and not just the price. A theorist can get away with making a vague statement that quantity demanded depends on price, but a data analyst has to fill that in to make a very explicit model that has no doubt associated with it. If there's any doubt, it's the random error that we tack onto that model. The doubt about that is distributional assumptions about which the theorists have no opinion. Huge step between conceptualization of the problem and building a model that can capture that framework.
8:23 For those listeners who are not practicing or would-be economists or graduate students in economics, etc., want to set the stage. In an economics journal, or a medical journal in epidemiology where we are going to look at the relationship, say, between drinking and cancer, or in economics between some piece of legislation like the minimum wage and whether it affects employment or not, what you'll find somewhere in that article, if it's an empirical article, is a table or a chart that purports to show that the relationship between the two variables that we care about is of such-and-such a magnitude and is not due to chance. What is hidden from us as the readers, and is the unspoken secret Leamer is referring to in his 1983 article, is that we don't get to go in the kitchen with the researcher. We don't see all the different regressions that were done before the chart was finished. The chart was presented as objective science. But those of us who have been in the kitchen--you don't just sit down and say you think these are the variables that count and this is the statistical relationship between them, do the analysis and then publish it. You convince yourself rather easily that you must have had the wrong specification--you left out a variable or included one you shouldn't have included. Or you should have added a squared term to allow for a nonlinear relationship. Until eventually, you craft, sculpt a piece of work that is a conclusion; and you publish that. You show that there is a relationship between A and B, x and y. Leamer's point is that if you haven't shown me all the steps in the kitchen, I don't really know whether what you found is robust. Kitchen reference, old joke: two things that you don't want to see in the making--one is econometric estimates and the other is sausages. Dirty process. Why? Example: the theory might suggest that a feather in a vacuum will accelerate at a constant rate when it falls. But economists don't observe feathers in a vacuum.
They observe feathers when the wind is blowing, when the humidity varies, eagle feathers, duck feathers. Tons of things that are going to affect the speed at which things fall. Theorists are allowed to hypothesize that vacuum, but the real world doesn't have that vacuum. Got to translate that into a complete model with all the controls, the kind of things we were just identifying. You and I can sit down and think of these controls--you and I will come up with different lists; tomorrow I'll come up with a different list from today's. That's a sensitivity issue--we want to make sure that an adequate range of alternative models has been studied and confirmed that all the reasonable models lead to about the same conclusion, which is when you get a sturdy inference. Or, if what seem like small changes in the models, the kinds of things that economists would be willing to entertain, lead to dramatically different conclusions--that's a fragile estimate, not to be believed. Suggested that, alongside this work of art, you should also include some of the souffles that fell, some of the dishes that didn't work out, so the reader could judge if there is a real relationship there. How has the profession reacted to that suggestion? Economists will have a table of alternative estimates. But there's been no awareness that this is a critical issue. A lot of work with complex econometrics but not a lot of progress with building tools for identifying sensitivity of our conclusions to our assumptions, or for reporting adequately that sensitivity. Still in the same operating procedure as 30 years ago--to cook the books.
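A minimal sketch of the sturdy-versus-fragile idea (my own illustration with synthetic data, not anything from the episode): estimate the same effect under two reasonable specifications, one including a plausible control variable and one omitting it, and see whether the conclusion survives:

```python
# Sketch (illustrative, synthetic data): a "fragile" inference. The
# estimated effect of x on y swings sharply depending on whether a
# plausible control z is included -- Leamer's specification sensitivity.
import random

random.seed(1)
n = 20_000
z = [random.gauss(0, 1) for _ in range(n)]       # candidate control
x = [0.8 * zi + random.gauss(0, 1) for zi in z]  # x is correlated with z
y = [1.0 * xi + 2.0 * zi + random.gauss(0, 1)    # true effect of x is 1.0
     for xi, zi in zip(x, z)]

def centered(v):
    m = sum(v) / len(v)
    return [vi - m for vi in v]

xc, zc, yc = centered(x), centered(z), centered(y)
sxx = sum(a * a for a in xc)
szz = sum(a * a for a in zc)
sxz = sum(a * b for a, b in zip(xc, zc))
sxy = sum(a * b for a, b in zip(xc, yc))
szy = sum(a * b for a, b in zip(zc, yc))

# Specification 1: omit z -> simple regression of y on x.
b_omit = sxy / sxx

# Specification 2: include z -> solve the 2x2 normal equations.
det = sxx * szz - sxz * sxz
b_incl = (szz * sxy - sxz * szy) / det

print(f"without z: {b_omit:.2f}   with z: {b_incl:.2f}")
# Two "reasonable" models disagree badly (roughly 2.0 vs. 1.0), so the
# inference about x is fragile; a sturdy inference would survive both.
```

In Leamer's terms, reporting only one of these two numbers is serving the souffle and hiding the ones that fell; reporting the range across plausible specifications is what separates sturdy from fragile conclusions.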
14:13 Why? It must be because there is no incentive for us to do otherwise. Want to come back to that, but staying on track: What are some of the more recent techniques in econometrics, particularly the use of instrumental variables to create so-called natural experiments, and what are the proponents claiming about those techniques? Angrist and Pischke paper well-written, will be out in the Journal of Economic Perspectives this month, making what seems like a compelling case that randomization is the solution. Meaning that in an experimental situation, you have purposeful randomization: try to decide whether fertilizer affects yield, so you randomly select the plots that get fertilizer. Look at treated and non-treated plots--measure the effect of fertilizer on yields. Only job in that setting is to determine whether the data set is large enough that you have a statistically significant finding; or is it too small relative to the size of the effect, leaving open the possibility that what you are observing is pure randomness and not a real effect. That's the traditional view about experiments--if you do the experimental design adequately with controls and then do the randomization you will get a proper causal conclusion--to which I totally agree. We call that science. The problem with that is you created that in a laboratory. There is no assurance that it will translate into the same effect in the real world, particularly in economics, because we are talking about a social system; and an expectational aspect also--makes the transference from laboratory to real world hard. Those are purposefully randomized experiments--purposefully designed. Instrumental variables is a reference to accidental experiments--scurry around trying to find something that is as if you had an experiment. Example: what does immigration do to a community? Look at thousands of Cubans when they fled Cuba, Mariel boatlift, and study the impact that has had on the community, which is what David Card has done.
The argument being that since that was an exogenous event--not correlated with anything else going on in Miami at the time, not like Castro said things are great in Miami so let's let the people out, which would confound the statistical relationship, or things were horrible in Miami so he let them out. A random political event that is outside the causal relationship we are trying to study. Economists think of that as being tantamount to the idea of a randomized experiment. Problems: First, there's no such thing as a really exogenous variable. We don't know how much Castro was looking over to see what was happening in Miami, so there's a possibility that that boatlift was responding to something that was happening in Miami. Every one of these is going to open up a conversation about whether it is really a randomized treatment or whether it's correlated with the impact you are trying to determine. But does a boatlift tell us anything about a 2000-mile fence? Translating that to the impact of immigrants in other settings is difficult. Takes the same kind of work it takes to draw conclusions from non-experimental or observational data--have to think long and hard about the circumstances that have affected that outcome and put in control variables.
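The natural-experiment logic can be sketched with a toy instrumental-variables example (my own illustration with synthetic data; the "boatlift" label is only an analogy to the discussion). An instrument that shifts x but is uncorrelated with the unobserved confounder recovers the true effect where plain OLS does not:

```python
# Sketch (illustrative, synthetic data): why a "natural experiment" helps.
# x is endogenous (it shares an unobserved cause u with y), so plain OLS
# is biased; an instrument w that moves x but is unrelated to u recovers
# the true effect -- the logic behind instrumental-variables estimates.
import random

random.seed(2)
n = 20_000
w = [random.gauss(0, 1) for _ in range(n)]  # instrument (the "boatlift")
u = [random.gauss(0, 1) for _ in range(n)]  # unobserved confounder
x = [wi + ui + random.gauss(0, 1) for wi, ui in zip(w, u)]
y = [1.0 * xi + 2.0 * ui + random.gauss(0, 1)  # true effect of x is 1.0
     for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

b_ols = cov(x, y) / cov(x, x)  # biased upward by the confounder u
b_iv = cov(w, y) / cov(w, x)   # simple IV (Wald) estimator

print(f"OLS: {b_ols:.2f}   IV: {b_iv:.2f}")
# OLS lands well above the true 1.0; IV lands near 1.0 -- but only
# because w was constructed to be exogenous by design.
```

Everything hinges on the instrument really being exogenous: if w were correlated with u, the IV estimate would inherit the same bias, which is exactly Leamer's worry about "accidental experiments" like the Mariel boatlift.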
20:04 Previous podcast on macroeconomics--boatlift immigration issue is a micro problem, but in macro we make that leap all the time when we talk about aggregate demand. When someone says that in the past $1 billion had this impact on the economy--so much unemployment, this level of growth--people are presuming that the same structural relationships still hold. Even though the cause of the recession might be totally different, what the money is spent on might be totally different, implicit in those multiplier arguments is the presumption that it doesn't matter. Find that very strange. Let's be more explicit. If you just look at correlation over time, it doesn't tell you anything about causal impacts. So you need something like a randomized experiment. If you want to know: Does government spending have a multiplier? then you have to have a treated group and a control group. In the case of macro, it's very difficult to think of what the experiment is, whether purposeful or natural, that we can use to draw conclusions about the impact of federal stimulus programs. The one that comes to mind is defense spending--end of war, start of war. Robert Barro has used that--interesting, useful. Clever. Angrist and Pischke. But does the defense buildup in WWII tell us anything about the stimulus package that the Obama administration put together? Doesn't seem to have any automatic relationship; it's not an automatic corollary. Sympathetic to Barro's conclusion but have to admit that the scientific nature of it is somewhat problematic. Didn't stop the people who are not sympathetic from saying it was just totally wrong. Bizarre that scientific work by macro- or microeconomists on anything that we care about, e.g., quality of schooling podcast with Ravitch, crucial social policy issues that we all have strong feelings about--the empirical work, no matter how careful or clever, doesn't seem to change anybody's mind who is not already a believer. That means it's not science.
In science there is skepticism, too; takes a while for people to come around. But it doesn't happen at all in economics. Incentives: the consumers of this work realize there is little incentive to get it right in a scientific sense; there is an incentive to reconfirm what you already believe. There is also a belief that there is another side, and the other side could produce some kind of model; and I'll wait until I see the whole thing worked out before I draw any firm conclusions. Like a court of law in which you see the plaintiff's argument but you are not allowed to see the defendant's; not going to make a judgment till you see it all worked out. When you hear only one side, if you are sympathetic to that side you are cheering the whole time the argument is being made: there's nothing the other side can say; but they manage to. Commentary Magazine letters to the Editor would savage the article; you would think the author didn't have a leg to stand on. Strangely enough, the author would then show why his antagonist didn't have a leg to stand on. Insider or not, sometimes there is no way to choose in any objective sense--you don't have any information. Aggressive language--economic theory is fiction: sometimes good, insightful, sometimes boring, but a fictional representation of the world; and economic analysis is really journalism. A journalist's job is to marshal the facts and put them together persuasively; but it's not science. Fiction and journalism. The people who swear by these techniques--Angrist and Pischke as an example--what would they say to your criticism? Angrist and Pischke would be sympathetic; understand their point, too--randomization is great if you have it. Experiments can be highly useful. Just don't think any single path will work. Theories; studying data sets in different ways; but to think that designing experiments is going to suddenly change economics into an empirical scientific discipline seems unlikely.
That might be where we have some significant disagreement. Often the creators of techniques are less enthusiastic than their followers. Their followers tend to be drinking the kool-aid and have forgotten all the admonishments of the creators about what they should be careful about and watch out for. Incentives: read in one of your articles a mention of the fact that the only people who believe the results that come forth are the authors themselves. How could that be? Gotten into the habit of asking people if they can name an econometric study that caused the profession to come to a consensus about something controversial. Most economists struggle to come up with an answer. Some economists name their own work--extraordinary. Isn't it strange that in our field so many people are spending so many hours churning out results that nobody takes seriously? Would like to make a distinction between the process and the outcome here. The process helps us think better as economists. Analyzing data sets, complex ambiguous settings, helps us think clearly. Same with economic theory--carried out mindlessly it's a total waste of time--but there are people who can do theoretical manipulations, make discoveries, and learn things through that process. So, even though the final model may be silly and the table of t-statistics may be irrelevant, the process helps us form judgments. Social conversations we have also help us come to conclusions--often not the right ones, but some scope for progress.
30:10 Pessimistic note: agree in a world where we sit around in our togas and try to come to agreement on these relationships. But it doesn't quite work that way. What happens is the more exotic and dramatic your result, the more likely you'll be featured in the NYTimes. The university likes that, so there's a real bias toward shocking claims, contrarian, bizarre claims. Recent example: the Wall Street Journal had a piece on the front page of its weekend section about two or three weeks ago claiming that when Tiger Woods enters a tournament, instead of being encouraged to try harder, the other players just give up. The implication is that our whole understanding of competition has to be reconsidered, because we usually think of competition as bringing out the best in people, people striving to meet the high bar the competitor provides, but with Tiger Woods, he's so dominant that people just give up. As a result, competition has this destructive effect. Lesson: that's not enough--we've got to apply it to business. The implication is that businesses shouldn't try to hire the best people, because if you bring in a superstar, people could just sit around and say "I'll never get a big bonus." One of the examples given in the story was at General Electric, where only the top 20% get the big bonuses, so a superstar could discourage people from being in that top 20%. Student's joke--well, if they have 5 employees, that would be true. But they have more than five. Article was based on an unpublished article by a researcher at Northwestern who discovered, by carefully teasing out and controlling for all the relevant factors, that when Tiger Woods enters a tournament, his opponents score higher by 8/10s of a stroke--meaning they perform worse. All this econometric firepower brought to bear. How many regressions were run where the result was the other way that you didn't tell me about? Unless I know that, why would I have any confidence in that result?
Leamer: Have heard that paper presented; not as skeptical about the basic finding, but skeptical about the interpretation. My quality of golfing is much influenced by the people I play with. Russ: Told colleague Don Boudreaux about this finding; he said sure it discourages people from golfing--"I don't go into golf because Tiger Woods is there." There are millions of Americans who have decided to take up other pastimes, tennis, because they don't think they can beat Tiger Woods. No doubt true. Also true that if you are paired with him, or with Larry Bird, famous trash-talker in basketball, it could affect your performance in a negative way. Might regress toward a lower level. What is not true is that golfers who were already in the sport gave up when Tiger Woods came along. They worked incredibly harder--started lifting weights, stopped loafing, put in more hours. Statistical finding--remarkably small--author notes that many tournaments are settled by a single stroke; response: but no tournaments are settled by 8/10s of a stroke. It's only an average; some would be affected by a larger amount. That's not the crucial point. The crucial issue here is that Tiger Woods doesn't enter all tournaments. That's the crucial experiment--the randomization experiment that's been created. He tends to enter the harder tournaments. Jennifer Brown, the economist who studied this, controlled for that. But that isn't the real comparison. Harder question: let's look at golfers who were golfing before Tiger Woods came along, and then after he came along, and let's see on similar courses whether, before and after, they took their game up a notch or said they'll never win. Her analysis is addressing a different question: within a given year, how do these players play in tournaments that Tiger is in versus ones he is not?
If I thought I was a competitor with Tiger Woods and I saw him making some of the impossible shots, I could easily be lulled into thinking I could make those same shots and giving it a try, harming my score as a consequence--that would be one mechanism. Not making less effort, but trying things I can't do. Might take more chances; might decide to play for third, which does pay a lot, so might get more cautious. Open, by Andre Agassi, similar conversations with himself when he had to play Pete Sampras, dominant player of his era who usually beat Agassi. That's not the real question: the author of the article and economist who did the study don't just want to show about athletes in times of stress--want to generalize it to general notions about competition. Tiger impact might be true; but does it generalize to other settings like corporations? Even less than Mariel boatlift study tells us about Mexico and the United States. Don't know how to compare the two.
38:55 Macroeconomics: interesting? any soul-searching going on in the profession? Things we didn't understand about home prices and macroeconomic activity. Wake-up calls? Probably not. Continue to live in our own cocoons, think of financial policy as somebody else's problem, doesn't affect us. Huge swing in the profession away from monetarist and rational expectations models in favor of simple Keynesian models, without any basis. Not everybody has swung that way, but surprising how many in the profession have been endorsing these stimulus packages. I think I know the answer to how the economy works, too! In a healthy economy when someone loses their job it doesn't precipitate further job loss, but when the economy becomes unhealthy, it creates feedback loops, which means that some job loss creates other job loss; and the government needs to help prevent that negative feedback loop, demand management, but only during those few episodes. For example, now we are in the self-healing phase and the job of the government should be entirely to eliminate the uncertainty. Problem: Stimulus package extended unemployment insurance, which tangles up the incentive to look for work with the negative feedback loop in times of unhealth. We decided to pay people not to look for work; made it cheaper to be unemployed. But we gave them money--according to the Keynesian model that kind of makes up for their being unemployed--it keeps demand going. Messy system to separate out. Leamer: Opinion--it's all opinion and no data behind it. Thought Russ was going to say that unemployment insurance was increasing the unemployment rate. Russ: I think it does. Believe demand curves slope downward. Other things going on. Challenges of predicting accurately. A lot of people justified those unemployment extensions on the grounds of aggregate demand; kind of forgot that it would encourage people not to work as hard as they otherwise would. Not the only reason unemployment is responding slowly to the recovery.
Agree that demand curves slope down; but long run can be different from the short run. If you put in place incentives that pay enormous benefits if you are unemployed, you definitely get more unemployment; but in the context of a cycle people tend to think of themselves as either working or not working, and that self-categorization is not much affected by the benefits they are receiving. They are out there hoping they are going to get a job. We need to do some data analysis to find out who is right here. Eagle feathers, duck feathers, windy day--with housing collapse, we might expect that unemployed construction workers might be in an unusual situation relative to past downturns that were more general. About a quarter of the people who are unemployed since the last peak, December 2007, are in construction. They are going to have a hard time trying to figure out whether they should stay in construction or not. A lot of uncertainty and imperfect information.
44:55 Pedagogy, educational question. Teach a class on how to think about numbers, how to be skeptical about relationships you see; the way journalists misreport with confidence that isn't justified. Teach journalists the same principles, trying to teach them to be more skeptical. People take a lot of things on faith. One response is to say all empirical work is garbage, to dismiss everything. Confirmation bias, journals generally only publish positive results, etc. Podcast on book Macroeconomic Patterns and Stories; argue that we need both. All we have in the area of macro is opinions. Teach a course called Turning Numbers into Knowledge; final exam is for the students to read the testimony of the Federal Reserve Chairman to Congress and pick a sentence out of that; then look at data sets to see whether they can confirm or cast doubt on that opinion. Process. Profession is way too heavy toward theory; macro has completely ignored an enormous database that could have an impact on how we understand the economy. People have imposed a particular structure, a straitjacket, on the data which prevents them from learning how this complex economy actually evolves. What straitjacket? You give me your model--the overlapping generations model, the rational expectations model, the Keynesian model--all the forecasting models are simple Keynesian models. Commit yourself. Example: wrote a paper saying housing is the business cycle. Housing is absolutely critical; great leading indicator but also contributes a large fraction of the jobs. Construction--large fraction of every one of the downturns we've had. Disadvantage in writing a macro paper--not a macroeconomist. Advantage in writing a macro paper--not a macroeconomist. Came to that question as a student of data.
If you look at the data without the straitjacket, without having a horse in this race, not a Keynesian, Austrian, monetarist, rational expectations guy--the data shouts, screams that housing has something to do with almost every cyclical downturn of the post-war era. Skilled and respected data analyst, but not a skilled and respected macroeconomist; have expertise, reputational credibility--though to macroeconomists the category might be "annoying" rather than skilled and with a credible reputation. No macroeconomist saw your paper and said this is something kind of important. Marty Feldstein told Leamer he saw the paper and didn't know that about housing. Honest man. Another prominent economist expressed annoyance and said he already knew all that stuff. Why didn't you write it in that book you wrote? "It's in there somewhere." Another, not to be named, implied he didn't know what you were talking about. This is what's there. You can't say it's not relevant. Do they have a different answer? You've done these interviews! Benign neglect. Odd given that virtually every macroeconomist in the world would concede that housing had something to do with this downturn. Interesting question how much of it was due to feedback loops between housing and the financial sector, but nobody denies that housing was a precipitating factor here. Paper was written in 2007--this guy's a real prophet! Thought police, treat you with disdain. Marched to my own drummer; lonely. In own personal odyssey, find work compelling. Don't remember what I might have thought of it in 1985, but thought you've got to do something. But maybe you don't. Humility, as a profession. Not much incentive for economists to think that. If you want to be in the newspapers, you've got to be overconfident.
55:29 Different a decade ago--very few economists referred to in these national outlets. Maybe the Internet has had an impact, blog stuff going on. Our profession has been radically changed by the opportunities for fame and fortune in the last 25 years. Let's go back 30, 40 years: two famous living economists--Milton Friedman and Paul Samuelson. Each had a column in Newsweek; if you asked people to name an economist, that's who they'd come up with. Today, an enormously larger group of economists get income and fame, directly or indirectly, from the Internet and newspaper columns--Paul Krugman; Tyler Cowen; Steve Levitt; etc. Made for a lot of entertainment, not sure it has led to a lot more truth. Good aspect: it keeps us in touch with real-world issues. This is the golden age of economic education. In the old days if someone wanted to learn economics, only a handful of books to recommend. Now there are 30 books you can recommend by people who write for a general audience; 10 blogs full of interesting information and thought, many trying to figure out how the world works, no charge. Probably a good thing. The previous generation of economists were basically theorists, so a little more contact with the real world is probably a good thing.

COMMENTS (41 to date)
Julien Couvreur writes:

Interesting discussion. It is certainly difficult to tease out knowledge about causal relationships just by looking at numbers.

But there is a deeper issue with econometrics, which Austrians in particular emphasize: there can be no such thing as a physical constant in the realm of human action. In other words, what is the repeatability of a result?

Using the example from the discussion:

"If you look at the data without the straitjacket, without having a horse in this race, not a Keynesian, Austrian, monetarist, rational expectations guy--the data shouts, screams that housing has something to do with almost every cyclical downturn of the post-war era."

Great, let's measure that. We find that housing had sensitivity X to the expansion of money and credit, while the financial sector had sensitivity Y.

What does that tell you about future downturns?
Pretty much nothing.
As the economy develops, there is no reason to think that X and Y would stay the same.

Another, new type of good could soak up the inflated credit ahead of housing.
For example, an expensive but highly desirable domestic robot that takes care of your home. Or maybe we discover a way to build houses for really cheap and people can afford them without a mortgage.

Nick writes:

Dr Roberts,

Your class about statistical skepticism sounds interesting. Is there any possibility of you posting audio or visual recordings of this lecture online?

oakshott writes:

Great podcast this week.

The description of how economists build their econometric models reminds me of a story I heard about a biology prof who was determined to prove a theory correct and instituted a massacre of lab mice in search of an 'n of 10'.

I especially liked the brief exchange about your views on unemployment insurance and the mutual acknowledgement that belief doesn't equal knowledge.

How about a podcast exploring the non-numerical factors that may impact actual behaviour? Leamer alluded to one when he mentioned that in the short term he felt that folks considered themselves workers or non-workers regardless, and that the extension of unemployment benefits had minimal effect on behaviour.

In general it seems to me that libertarians address this aspect of economics by ignoring it. I don't think this is justified, but I'd like to hear you or someone like Mike Munger convince me otherwise.


Peter Twieg writes:

It should be noted that the discouraging effects of superstars only occur when a relatively fixed rent is being competed for. If people's compensation is tied to their own productivity rather than their productivity ranking relative to others, this effect should vanish. Competition isn't the problem, then - fixed rents are the problem.

Justin P writes:

Good show.
I was surprised at how much of the material covered with Leamer carries over from last week's Taleb-cast.

I think Taleb would totally agree with Leamer's statement, "economic theory is fiction: sometimes good, insightful, sometimes boring, but fictional representation of the world; and economic analysis is really journalism."

The failed recipe concept is interesting. It definitely would be nice to see how many attempts a researcher had to make in order to get the "right fit" between the model and the data. How can we measure whether a researcher cherry-picked the data to fit their own preconceived models? How many variables did they have to leave out to get the result they wanted?

That's a problem with just about all scientific research being done today. Medical studies come to mind almost immediately.

How many times has milk been determined to be detrimental to the public health? How many times have the previous studies been overturned by newer studies? I honestly don't know if milk is good or bad for you right now, but I'm certain in another year there will be another study saying the opposite. Which variables is one set of researchers throwing out and which ones are the opposite set putting back in?

I like the idea of randomization, but randomization will never work for economics because economics isn't science, it's social science. You can never completely randomize any sample, since people and groups of people are interconnected in thousands of tiny, and often unknown ways. There are just too many variables in play to know which ones are random and which just seem random, but in reality are not.

"Another prominent economist expresses annoyance and said he already knew all that stuff. Why didn't you write it in that book you wrote? "It's in there somewhere." Another not to be named implied he didn't know what you were talking about."

Come on Russ, give us a hint? DeLong comes to mind; no doubt Leamer will join you as one of the "worst economists ever" soon.

SwissEcon writes:

I enjoyed the podcast this week. Russ could have mentioned Esther Duflo's work in development economics. She performs randomized field experiments in developing countries. In a recent TED talk, Duflo claims to have found definitive answers to a number of important questions, as, for example, the most efficient way to increase school attendance.

agnostic writes:

Do econometricians use model comparisons, like comparing the Akaike Information Criterion across models? Informally, such things reward models that better fit the data but punish models that have more parameters. The model among those being compared that strikes the best balance between fitting the data while not blowing up complexity wins.

Nitpicky remark: a t-statistic or p-value works the other way around from what you guys said. A p-value says what the probability is of observing a test statistic as extreme (or more) as the one found, *assuming the null hypothesis were true*.

So it doesn't tell us how likely it is that the result is a fluke -- that would be the probability of some hypothesis being true, given the observed data. P-value is P(observed data | null hypothesis is true) -- talk about begging the question!

As a simple example, say you flip a coin 5 times and get 0 heads. Your null hypothesis is that the coin is fair. *Assuming that hypothesis to be true*, then the probability of observing such a result is 1/32 or about 0.03.

So you'd reject the null hypothesis at the 0.05 level, arguing that it would be pretty strange to observe these data if the coin were actually fair. But of course it would be pretty strange to observe these data under all sorts of other null hypotheses.

Thus, p-values don't tell us which among a list of rival hypotheses are better -- just that this one or that one doesn't seem to work. Model comparisons, though, can meaningfully rank hypotheses based on excelling at the fit/complexity trade-off. There's still an art to deciding what to make of the AIC (or whichever one), not just accepting the ranking blindly, but they are much better than the test statistic / p-value approach.
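agnostic's coin example, and the fit-versus-complexity trade-off behind AIC, can both be checked in a few lines. The sketch below is in Python; the "biased coin" alternative model is my own addition for illustration and is not in the comment above.

```python
import math

# p-value for the comment's example: 5 flips, 0 heads,
# under the null hypothesis "the coin is fair".
# P(data | null) = (1/2)^5 = 1/32
p_value = 0.5 ** 5

# AIC = 2k - 2*ln(L); lower is better, and each extra
# parameter costs 2 points.
# Model A: fair coin, no free parameters (k = 0).
loglik_fair = 5 * math.log(0.5)       # ln(1/32)
aic_fair = 2 * 0 - 2 * loglik_fair    # about 6.93

# Model B: coin with a bias parameter fitted to the data (k = 1).
# The MLE is p_hat = 0/5 = 0, so P(0 heads) = 1 and ln(L) = 0.
aic_biased = 2 * 1 - 2 * 0.0          # 2.0

print(p_value)               # 0.03125, the "reject at 0.05" figure
print(aic_fair, aic_biased)  # the biased-coin model wins despite its penalty
```

Note the asymmetry agnostic points out: the p-value only evaluates the null in isolation, while the AIC comparison ranks both candidate models at once.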

eric mcfadden writes:

Don't put up these podcasts the day I have my econometrics final. This type of stuff just makes me write weird answers on my exam and will lower my grade.

JH writes:

I had one quarrel with something that you talked about. You posed the question as to whether there was a paper or text that changed anyone's mind. I think that you are asking the wrong question. I don't think it is as important to change the mind of an individual as it is to change the mind of the profession as a whole. For example, even if there were what the vast majority of the profession viewed as a definitive study showing that fiscal policy has no effect on output, I would still expect certain folks to defend fiscal stimulus and (try to) find something wrong with the study. This is not a knock against these particular individuals, but rather is the result of priors that are hard to update.

A better gauge as to whether the views in the profession have changed is to look at the graduate curriculum and see how views on fiscal policy have changed. I am currently writing my dissertation and thus am only a year or two removed from Ph.D. macro and the advanced macro field courses. Fiscal theory, in those courses, effectively began with the work of Barro on Ricardian Equivalence. Keynesian multipliers were not taught. In fact, it wasn't until recently that people started trying to estimate fiscal multipliers in the now popular dynamic stochastic general equilibrium models. And, what's more, those models assume that the central bank can only conduct monetary policy via the interest rate. Thus, their conclusions assume that once the nominal interest rate hits zero, fiscal policy can be effective. Correspondingly, Michael Woodford's recent paper showed -- although that wasn't his stated purpose -- that the size of the fiscal multiplier is crucially dependent on this assumption. (You can ask your colleague, Garett Jones, about the validity of this assumption.)

In other words, what I am trying to convey is that while it might be very difficult to change the minds of individuals with empirical research, it is possible to change the mind of the profession. Now this change likely will not come about through one influential study, but through a series of influential papers and debates -- after all, the profession is made up of a group of individuals. To illustrate this point, one needn't look any further than the work of Milton Friedman, Anna Schwartz, Allan Meltzer, Karl Brunner, and fellow monetarists, whose views were initially scoffed at by many intelligent and influential economists at the time of their early work. Many, but not all, of their ideas are now considered mainstream as a result of debate and empirical evidence.

Charlie writes:

Russ,

I think you have a misunderstanding about how people form and hold beliefs. You repeated a common theme in this podcast that econometric studies don't change the minds of people who already have a strong opinion.

Really though, this is perfectly natural. People with strong opinions have already seen lots of evidence that confirms their views. One new piece of evidence should rarely ever have enough sway to cause someone to radically change their view. What really happens is evidence and knowledge accumulates over time. Every economist takes a single paper with a grain of salt. It's only when many papers over time with different models and methods accumulate that minds get changed.

This process isn't unique to econometrics or even economics. Even the soft statistical papers of Milton Friedman that you have spoken so often of didn't instill an immediate change of views. It didn't cause Keynesians to say, "darn, I've been wrong all this time." But as time passed and new evidence came out (especially a notable stagflation episode), views changed. The Keynesians of today are much different from those of the 1950s, and much of that change was caused by a dialog with MF and other monetarists. Yet it happens one paper at a time, not all at once.

Evidence accumulates over time. To reject all econometrics because one paper isn't enough to form a view about most big important topics is quite unfair.

Charlie

P.S.- Extending this argument, Leamer is far too pessimistic. Imagine, as Harford describes, that a natural experiment in Israel gives us an estimate of how class size affects learning. Suppose it's large. If I'm a school principal in Dallas, do I radically redo my budget and change my school? No, but I'd love to do a random experiment. Take some kids and randomly put some in a smaller class, some in a larger one. Over time we can accumulate knowledge. And we need to make more of our policy decisions with accumulating knowledge in mind.

P.P.S.- I'm glad you posted the Journal article featuring Tiger, because you completely mischaracterized it. It discusses several findings in a variety of settings to talk about the "superstar effect." The hypothesis is more mild than you suggest: "There is little doubt that, in many situations, such incentive structures lead to motivated employees, working hard for the top spots. But the presence of a superstar can reverse this dynamic, so that instead of trying our best we accept the inevitability of defeat." The bottom line is that all the studies together create a kind of interesting finding. If I were creating pay structures, I would go through the literature and form an opinion about the effect. I'd look through the papers (when Tiger tanks the first two rounds, do people play the next two better?) and I'd look through the different contexts (where does this seem to show up?). As is, the article didn't change my mind about anything; it intrigued me a bit, but not enough for me to put a lot of effort into forming a belief. And yet, that doesn't at all mean the individual studies were useless or "not scientific." Evidence accumulates.

P.P.P.S. - It's always intrigued me why economics professors never seem to use their own methods. If I were a teacher, the first thing I'd do would be to collect data (what's your gpa? sat? how often do you study?...) and see how it affected their grades. Why don't teachers with multiple classes do basically random experiments? Don't you have any interesting questions about teaching?

Russ Roberts writes:

JH and Charlie,

There is both a micro and macro version of being convinced by a paper. The micro version is to ask an individual if a paper using sophisticated econometrics has ever caused the person to reverse an opinion.

That is the question Ian Ayres asks his colleagues. The question I have been asking is whether you can name a paper that used sophisticated econometrics to bring about a consensus in the profession about a controversial issue that people felt strongly about.

Think about the death penalty and whether it deters would-be murderers. Go look at Leamer's "con" article. He has a nice table in there where he shows a range of effects of capital punishment depending on how you specify the equation. The death penalty either deters 29 murders or causes 12. Nobody's mind is going to be changed (and nobody's mind should be changed) by any particular specification. What I understand Leamer to be saying is that at least with that data set, no scientifically reliable measure is discoverable.


What we have in economics, on both sides of the ideological divide, is a massive problem of confirmation bias. Yes, Charlie, people change their minds slowly. The question is whether that's because they have all this prior evidence you claim they have or whether it's because they have a stake in one side or another or a bias. That is why I like JH's observation that sometimes "the profession" moves toward one position or another even though the leaders in a field may deny the validity of the new theory. That is because the old guard has all these sunk investments.

Where I potentially disagree with JH is that it's not clear to me that the new consensus that arises (either against Keynesianism, say, or in favor of it) has much to do with the kind of sophisticated empirical work that I am critiquing here (instrumental variables or even just multiple regression). Facts matter. Evidence matters. But the monetarist revolution of the 1970s and 1980s wasn't driven by precise estimates of Keynesian multipliers that refuted the power of fiscal policy. It was driven by the observation that stagflation seemed inconsistent with Keynesian models. And certainly the recent re-embrace of Keynes has nothing to do with econometrics.

Christian Pugaczewski writes:

Russ,

There have to be some econometrics papers out there that have changed opinions. For instance, surely the paper that first published the results on the Ultimatum Game convinced everybody that the classical assumption that economic actors always make money maximizing decisions is false. Unless, of course, that paper doesn't fall within the definition of an econometric study.

Are there still economists who say that human beings always make money maximizing decisions?

Christian writes:

Russ,

Two topic suggestions: California is voting on the decriminalization of marijuana in November and immigration is once again a hot button topic. Aren't people who favor strong immigration laws just engaging in rent seeking?

Christian writes:

Russ,

One more topic suggestion (sorry should've included it in the other post): I'd like to hear more about morality and the concepts of "good" and "right." The economics discipline appears to be a type of moral philosophy. I often understand economists to say things like "free markets are good" or "communism is bad" but I have no idea how they arrive at such conclusions. Moreover, it seems like a foregone conclusion that wealth maximization is a "good" thing, but why? By what standard? Is it ever the case that wealth minimizing is "good"?

Why are efficient markets "good"? Efficiency can be the enemy on some occasions. For instance, electricity travels less efficiently across the filament of a light bulb; the resistance produces heat and light. Inefficiency is a "good" thing in this case, right?

Is it ever the case where economists study a topic where there is neither a "good" result nor a "bad" result - just a result, which always holds true? If not, then what could possibly be the point of studying it?

Jeff G writes:

Going from a hand saw to a chainsaw has a huge economic impact. How does econometrics give any insight into the resulting economic change before the chainsaw is invented? Since people's economic lives change drastically with productivity improvements, being unable to predict the rate of invention makes modeling of the future based on the past seem dubious.

People seldom hold economic variables constant long enough to use statistics.

Oded Gurantz writes:

An additional book that I enjoyed reading about public policy and consensus building with data was "Spin Cycle: How Research Is Used in Policy Debates: The Case of Charter Schools" by Jeffrey Henig (available in large part in google books).

Charlie asks, "If I were a teacher, the first thing I'd do would be to collect data (what's your gpa? sat? how often do you study?...) and see how it affected their grades. Why don't teachers with multiple classes do basically random experiments?" Though I don't see how collecting self-reported data about studying leads to anything truly interesting, here is one paper an economics professor did on his own class that might be along the lines he was looking for, constructing an RDD by requiring mandatory attendance for all students scoring below a certain threshold on the midterm: http://people.ucsc.edu/~cdobkin/Papers/Class_Attendance.pdf

Charlie writes:

"There is both a micro and macro version of being convinced by a paper. The micro version is to ask an individual if a paper using sophisticated econometrics has ever caused the person to reverse an opinion."

My first objection is that if you change "paper using sophisticated econometrics" to "paper in economics," you will mostly get the same responses. As to whether it's bias or evidence based (surely some of both): I've read lots of papers in fields I didn't know much about that seemed really convincing, even if the result was somewhat counter-intuitive. I find papers much less convincing in subjects I know much more about. I think that's a pretty general result and more consistent with evidence than biases. I think biases matter, though, and I think much of the change in economics comes not from convincing your peers, but rather their students.

Second, I think we have very different views on what useful knowledge is and what econometrics actually tries to provide. In my undergraduate course on econometrics my professor did crime research; when he taught instrumental variables he told us about Levitt's paper on the effect police have on crime. Since more police are added to places where crime increases, in a simple-minded regression it looks like police cause crime. Levitt used election years as an instrument. Since politicians add police in election years and election years don't cause crime, the instrument makes it possible to get an estimate of the effect of police on crime. I already thought police reduced crime, but I didn't know how much. The interesting part is the estimate, not the effect. Should we raise taxes and hire more police officers or not? This is true of almost all the interesting econometrics articles I read. I think smaller classes probably increase student achievement, but how much? How hard would it be to collect data on factors that might affect student achievement in your economics classes? Would it not be useful for students to get a sense of how much their math background or their time spent studying will affect their ability to achieve in your class?
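Charlie's sketch of Levitt's design can be illustrated with simulated data. Everything below is made up for illustration (the coefficients, the single binary instrument, the by-hand two-stage procedure); it is not Levitt's data or specification, just the textbook instrumental-variables logic on a toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unobserved "crime pressure" u raises BOTH policing and crime,
# so a naive regression of crime on police is biased.
election = rng.binomial(1, 0.25, n).astype(float)   # instrument: election year
u = rng.normal(size=n)                               # confounder
police = 5 + 3.0 * election + 2.0 * u + rng.normal(size=n)
true_effect = -1.0                                   # police reduce crime
crime = 10 + true_effect * police + 2.0 * u + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y regressed on x (with an intercept)."""
    return np.polyfit(x, y, 1)[0]

# Naive OLS: confounded, pulled toward zero.
ols = slope(police, crime)

# Two-stage least squares by hand: keep only the part of police
# explained by election years (exogenous), then regress crime on it.
police_hat = slope(election, police) * election
iv = slope(police_hat, crime)

print(f"naive OLS {ols:+.2f}, IV {iv:+.2f}, truth {true_effect:+.2f}")
```

With a valid, strong instrument the IV estimate recovers the true magnitude, which is Charlie's point: the payoff is the size of the effect, not just its sign.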

The point is, econometrics gives valuable information on the magnitude of effects, which is useful for policy makers. In that sense your death penalty argument works in my favor. There were 16,000 murders in the U.S. last year. Isn't it useful to know that changing from death penalty to no death penalty isn't likely to change the number by 1,000 or even 100?

So in some sense, I'm agreeing with many of the econometric critiques you mention. Maybe it's because I'm young and you're old, and the critiques have started to sink in. I was taught to think about oomph and confidence intervals. You think econometrics is about proving whether an effect exists. To me, if the effect is small, it doesn't matter much whether it exists or not. You think it's to prove theory correct or incorrect. I think it's useful for policy makers to make decisions and employ scarce resources. To you, the work on crime and gun laws and crime and the death penalty is a miserable failure, because the conclusion is ambiguous. For me the field is a success, because whatever the effect is, it's small, so I personally just don't care much. Were I a policy maker, I wouldn't spend any political capital on either issue, and as a voter I don't use the issues to choose a candidate.

Charlie writes:

Oded,

"Though I don't see how collecting self-reported data about studying leads to anything truly interesting"

Really? It wouldn't be cool to tell your students that for each additional hour students report studying, their midterm goes up 2 points (or whatever)? Do math majors do better than history majors? There are all kinds of interesting things you could find. You could think of interesting policy experiments. Have one class's homework count for a grade and the other class's not. See if students do better.

I looked at the article, and certainly there are many more, just not as many as I think there should be. I only spent a little time on what is a long paper. I thought the policy tested was interesting. I didn't like how those professors reported the results. Everything is in standard deviations, which has merits, but at least for me makes it hard to think about oomph. They find a 10% increase in attendance is equal to an increase in the final exam score of .17 standard deviations. I'm not sure if that is small or large. I'd need to convert that to something more intuitive.

Trent Whitney writes:

A couple of thoughts on your golf comments with Prof. Leamer.

* "I don't go into golf because Tiger Woods is there. There are millions of Americans who have decided to take up other pastimes, tennis, because they don't think they can beat Tiger Woods."

I disagree. At the beginning of Tiger Woods' career, there was an explosion in the number of Americans who took up golf. Now, that number has ebbed a bit over time, but I think it has more to do with people finding out how hard a sport it actually is and becoming discouraged, instead of quitting because they think they can't beat Tiger Woods.

Further, on the PGA Tour, you've seen an increasing number of strong, young players emerge in the past few years. I think that's because golf is attracting more and more players, thanks to the higher purses that have resulted from Tiger Woods' popularity and the higher TV ratings. So while you could argue they aren't there because they think they can beat Tiger Woods, I will argue that at least some of them are there because of Tiger Woods (this indirect monetary effect).

* [paraphrasing] Competing against Tiger Woods affects my own game negatively.

This is something that I first noticed with Jack Nicklaus when he competed in Majors. He always seemed to play conservatively, not taking the big risks for super-low scores, but playing the percentages in that if he gave himself enough "safe" birdie chances, he'd make his share and avoid the double bogeys (or worse). And while he was posting consistent scores of -2 each round, the likes of Tom Weiskopf would explode because they would try to shoot -6 or -7 thinking that's what Nicklaus would shoot.

So when you look at Nicklaus' career as a whole, I think that's why you see all those 2nd and 3rd place finishes in Majors, and very few come-from-behind victories a la the 1986 Masters. Nicklaus even said a couple of years ago that had he known someone like Tiger would have come along to challenge his records, he would have played more aggressively in a lot of those tournaments.

In any event, Woods, having studied Nicklaus, employs much of the same strategy. He usually has solid 2nd and 3rd rounds in Majors, then plays consistently on Sunday while others around him explode, thinking they have to shoot an -8. They all remember Woods' record-setting rounds at Augusta in the late 1990s and his performance in the 2000 US Open, and they know the potential is there...even though he hasn't blown away the field in years.

I think that's why Woods ends up in lots of playoffs against so-called "grinders" (players who play it safe, aim for par, but convert birdie chances every now and then), like Rocco Mediate in a recent US Open. And I believe it's still the case that Tiger hasn't come from behind to win a Major on Sunday - hasn't he won every one of his Majors after having had the lead after Saturday?

Dmitry writes:

Julien Couvreur, "But there is a deeper issue with econometrics, which Austrians in particular emphasize: there can be no such thing as a physical constant in the realm of human action. In other words, what is the repeatability of a result?".

I have come to the same conclusion as you. I think this argument has been greatly overlooked by methodologists. Does anybody have any idea why this has happened?

Russ Roberts writes:

Trent,

When I said I didn't go into golf, I meant as a profession. But you're right, lots of people got interested in golf because of Tiger. And you're also right about the surge of talent because of the increase in purses. One of the silly things about the whole superstar argument put forward in the WSJ article is the idea that Tiger wins all the time. He wins 1/3 of the time. That's a lot. But it's not like your chances of winning are zero, so you give up and become a tennis player. At any given tournament, Tiger might be the favorite, but lots of golfers have a chance of winning, and as I pointed out, second and third are still lucrative. Your point about the increase in purses is relevant there as well. My guess is that second pays a lot more than it did in the past.

Trent Whitney writes:

Russ,

I did some quick research on PGA purses by looking up the 2 tournaments in the Metroplex (Byron Nelson in Dallas and Colonial in Fort Worth):

1995 Byron Nelson Prize Money (Before Tiger)
1) $234,000
2-4) $ 97,067 (3-way tie)

2009 Byron Nelson Prize Money
1) $1,170,000
2) $ 720,000
3) $ 442,000
4-5) $ 286,000 (2-way tie)

1995 Colonial Prize Money (Before Tiger)
1) $252,000
2) $151,200
3) $ 95,200
4) $ 67,200

2009 Colonial Prize Money
1) $1,116,000
2-3) $ 545,600 (2-way tie)
4) $ 297,600

Clearly these increases far outpaced inflation. And you can certainly make a great living finishing 2nd-4th consistently. In fact, I think Tim Clark, who won for the first time on the PGA Tour on Sunday, already had won over $14 million in his PGA career...lots of Top 10 finishes, no doubt.

Russ Roberts writes:

Charlie,

You suggest that the econometrics is useful because it can establish general magnitudes and not just whether an effect exists or not. Policy makers need to care about both. Unfortunately, the death penalty example does not help your argument. I wrote in a comment above that the death penalty deters as many as 29 murders or causes as many as 12. I wrote poorly. Those numbers are deterred or caused per execution. I would suggest those are very large numbers and that unfortunately, we have little knowledge of what the true effect is.

You mention Levitt's study of the relationship between crime and police. Yes, a positive correlation is misleading because of the potential for reverse causality. The question remains as to whether the real relationship is negative or close to zero. A lot of studies find no relationship between expenditure and educational outcomes. Do you believe those? Do you think foreign aid reduces poverty? These are very important questions. I am skeptical of the ability of sophisticated econometrics to disentangle the effects that we really want to know about, either in magnitude or general direction.

George Peacock writes:

Russ,

I don't know whether your forum is appropriate for the following topic, but I wonder about interviewing a guest who could discuss the link between economics and investing, and about some of the models (Modern Portfolio Theory and the underpinnings of mean-variance optimizers come to mind) that seem to have pervaded the investment industry.

It was great to have Taleb and Ed back again. I cannot understand why they seem to be such outliers. It reminds me of Horton Hears a Who. Where's the tipping point where hubris dissolves? Where's our Jojo?

James writes:

Russ,

I appreciated this podcast, but I think someone should point out that this skepticism you are advocating only helps you avoid Type I error. You are still vulnerable to Type II error, and skepticism is not going to help you there. I have heard this argument from many methodologists, but it doesn't necessarily help people do good science. Part of the backlash against significance testing has been due to this obsession with Type I at the expense of Type II error.

You may correctly reject conclusions in papers that use econometric wizardry to show what caused the stock market crash, but how are you going to discover the real cause? You certainly can't run a randomized experiment.

Aren't these sophisticated econometric methods the very methods that have been invented to deal with the exact methodological problems you are pointing out?

Phillip writes:

Dr. Roberts,

Great podcast! Have you heard of "Government Size and Implications for Economic Growth" by Andreas Bergh and Magnus Henrekson from AEI Press? I haven't read it yet. I just learned of the book from an AEI podcast.

It has a great thesis, and more importantly, the authors seem to address the issues discussed in your latest podcast. They sound very open about their methods.

http://www.aei.org/book/100043

Charlie writes:

Russ,

One reason econometrics is important is that it keeps us from making mistakes with simple statistics. I was just watching a TV program and they quoted the line that "a woman gets paid 70 cents per dollar a man gets paid." If I had never taken econometrics that would bother me, but I know that when you add a lot of controls the effect mostly goes away. I bet figuring out whether it is zero, or small and positive, or small and negative is really hard, but I think it's been very effectively shown not to be as large as unsophisticated statistics would suggest. There are all sorts of bogus statistics like that getting thrown around that at least experts know econometrics proves false.
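Charlie's observation, that a raw gap can shrink once controls enter the regression, is standard omitted-variable logic and is easy to demonstrate on synthetic data. The numbers below are invented; by construction the entire raw gap here flows through the control variable, which real wage data need not do, so this is only a sketch of the mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Synthetic data: gender itself has ZERO wage effect by construction;
# the groups differ only in a control variable (call it experience).
female = rng.binomial(1, 0.5, n).astype(float)
experience = 10 - 3 * female + rng.normal(0, 2, n)
wage = 20 + 2 * experience + rng.normal(0, 1, n)

def ols_coefs(y, X):
    """Least-squares coefficients; first column of X is the intercept."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Raw comparison: looks like a large gap (about -6 here).
raw_gap = ols_coefs(wage, np.column_stack([ones, female]))[1]
# Add the control: the "gender" coefficient collapses toward zero.
ctrl_gap = ols_coefs(wage, np.column_stack([ones, female, experience]))[1]

print(f"raw gap {raw_gap:+.2f}, gap with control {ctrl_gap:+.2f}")
```

The same mechanism can of course run in the other direction: a control can also mask a real effect, which is why the choice of controls is itself one of Leamer's "whimsical assumptions."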

MikeRINO writes:

Russ,

With all due respect,
you seem to exhibit a "Regression Toward the Mean" syndrome.
Just because you heard a stupid idea on Mr. Limbaugh's show doesn't mean it makes economic sense.

I'm referring to:
"Unemployment extension increases the unemployment rate."
Only someone who has never experienced the state of unemployment could hold such a ridiculous opinion.

1) You should compare state unemployment rates to salaries, you might notice a vast drop in income. This would be a Negative "Incentive" to stay unemployed. How is it that economic INCENTIVES only work when YOU SAY they work?

2) Then there's the fear of ever getting re-employed, especially with longer and longer unemployment periods. So, there's FEAR, another Negative Incentive.

3) Extreme Budget Cuts to preserve your ability to stay in your home. The ACTION you take while unemployed to attempt to preserve your assets. An amplification of point 2 really.

This is what I mean about the "Right", You'se guys are NEVER Right. [ Actually, I think you guys run a .100 batting average. Every 1 time in 10 at bats you guys get a hit. ]

The Republican Party runs the country sub-optimally.
You make the rich richer, and give crooks the propaganda they need to rob the middle class, making us all POORER.

The Democratic Party, on the other hand, GROWS the Country making us ALL Richer.

Tim Bugge writes:

[Comment removed pending confirmation of email address and for crude language. Email the webmaster@econlib.org to request restoring your comment privileges. A valid email address is required to post comments on EconTalk.--Econlib Ed.]

DM writes:

I don’t know how the selection process of the guests and the lineup of the podcasts work, but I think it would have been a hoot to listen to De Vany and Taleb interviews following on the heels of this podcast.

Russ Roberts writes:

MikeRhino,

You seem to think I am a Republican or that I listen to Rush Limbaugh because I think unemployment insurance affects unemployment. Neither is true.

You might check out this article on unemployment by Larry Summers, currently serving in the Obama Administration:

http://www.econlib.org/library/Enc/Unemployment.html

He too seems to think there's an effect.

I actually think both Republicans and Democrats could do a better job letting us all prosper...

Russ Roberts writes:

Charlie,

I think you are confusing economics and econometrics. When I hear that women's earnings are 70% of those of men, econometrics isn't what gets me thinking, it's economics--an awareness that there are multiple things going on at once that lead to outcomes, that the world is a complex place, that yes, correlation is not causation.

Multiple regression analysis is an attempt to disentangle that complexity and quantify it. I don't think it's very good at doing that most of the time. For example, men and women have different levels of education. But they also major in different subjects and go to different quality schools. Their incentive to study and their choice of major are affected by discrimination. So that complicates matters even further.

My point is that econometrics isn't very good at controlling for those effects with most data sets that we're stuck with.

My other point is that some humility is in order. I say that not simply because I think it's the logical result of my arguments. I say that because I see economists screaming on both sides of most public policy issues, each side armed with econometric analysis that the other side easily dismisses.

Bradley Calder writes:

That was a fantastic podcast, thanks a lot Dr. Leamer!

Tim Bugge writes:

Thanks to all for another terrific podcast.

Russ,

Regarding your evolving sentiment toward the efficacy of traditional economic methodology, I'd love to hear a discussion contrasting the common use of empiricism with the Misesian characterization of economics as an a priori science.

FromThePacific writes:

Great PodCast!

I'd love to hear further interviews on this topic. Particularly, it would be great if you could get one of the JPAL guys (Duflo, Karlan, Miguel, etc.) to talk about this, and go further in the discussion of the Deaton critique, which is basically what is discussed in great detail in the latest issue of the JEP.

Charlie writes:

Russ,

"men and women have different levels of education. But they also major in different subjects and go to different quality schools. Their incentive to study and their choice of major are affected by discrimination. So that complicated matters even further."

I think you let the perfect be the enemy of the good. If you just add simple controls like age, experience, and education, you go a long way toward closing the gap that unsophisticated statistics say is there. To me, that is important. And if those variables didn't close much of the gap, I think the public debate would be much, much different, and for good reason.

Beyond that, I'm certainly for humility, both for empirical work and theory, but it seems you have a very hard stance against econometrics, as in its value is close to zero.

I'm just wondering if you take your same criterion, "I see economists screaming on both sides of most public policy issues, each side armed with ... analysis that the other side easily dismisses"

What is left? What can go in the blank instead of econometrics? Certainly not theory; both sides always have reasonable theories and strong arguments to dismiss the other's theory. What about Leamer's "macro stories"? Both sides always have stories and unsophisticated stats to back them up, and we know unsophisticated statistics are quite perilous to use. Even economic experiments may not transfer out of the experimental domain.

If you apply the same standard to other parts of the field, don't we have to throw out economics altogether?

Charlie writes:

Just to clarify my first paragraph: economics tells you what things to think about, and econometrics tells you whether those things can reasonably close the gap in question. So yes, economic theory tells you something might be up, but econometrics tells you whether those theories are reasonably supported by the data. If simple controls didn't close the gap in most studies, the debate amongst experts would be much different. Someone could always think of another economic theory of another variable that might affect it to support their bias, but the opinion of experts as a whole would change if the data were different.

We can always use theory to call econometrics into question, but whether that theory is reasonable and persuasive is another matter.
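Charlie's point about controls can be sketched with simulated data. Everything below (the variable names, effect sizes, and the experience gap) is invented purely for illustration; it shows only the mechanics of how a raw gap can shrink once a correlated control is added, not anything about real wage data:

```python
# Illustration with simulated (not real) data: how adding a control
# changes an estimated group gap in an OLS regression.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated world: the "female" indicator is correlated with experience,
# and log wages depend heavily on experience but only slightly on gender.
female = rng.integers(0, 2, n)
experience = rng.normal(12 - 4 * female, 3, n)   # assumed 4-year experience gap
log_wage = 2.0 + 0.05 * experience - 0.03 * female + rng.normal(0, 0.2, n)

def ols_coefs(columns, y):
    """OLS coefficients of y on the given columns, with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0]

raw_gap = ols_coefs([female], log_wage)[1]               # no controls
ctrl_gap = ols_coefs([female, experience], log_wage)[1]  # control for experience

print(f"raw gap: {raw_gap:.3f}, controlled gap: {ctrl_gap:.3f}")
# The raw gap (roughly -0.23) mostly reflects the experience difference;
# the controlled estimate lands near the true direct effect of -0.03.
```

The same mechanics cut both ways, which is Russ's caution: if an important control is unobserved, or if the control itself is shaped by discrimination, the "controlled" coefficient can be just as misleading as the raw one.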

John Berg writes:

As one who has grown addicted to EconTalk podcasts in the last several months, I was greatly struck by the accuracy of two predictions, recently read, in the June 30, 2008 podcast "Kling on Hospitals and Health Care."
Your highlights notes:

"39:54 The uninsured. You hear that they are at risk. Employer-provided health insurance is unraveling, huge wage differentials, people who are healthy leave to become consultants and have more take-home pay. Medicare, bankrupt. Enormous structural deficits forecast as population ages. Not much of a crisis in the sense that we'll do something different--lower benefits, raise tax rates. The problem that looms is the political fight. Stein's Law: something can't go on forever. Health care spending rising relative to GDP has to stop eventually if only on arithmetical impossibility. Longevity is going up. Most tempting way to resolve it is to keep clamping down on reimbursements to doctors under Medicare. Net result is doctors are leaving the system. Headed toward a two-tier system. Question of how we get there. People who use the government funds get poorer choices, fewer doctors; others pay more and more and have more choices."

I note an amazingly far-seeing forecast:
At around minute 42, Russ predicts that the Medicare "train wreck" (bankruptcy) is not the real threat; the political fight that must take place is. Then Kling forecasts the pressure on Medicare doctors' reimbursement and the two-tier health delivery system. (I read: "rationing.")

A tip of my hat.

John Berg

Big Al writes:

Great show. Tough to hit the sweet spot where you're at a level most people can understand, but still deep enough into the math to keep it interesting for the listeners with some background. Yes, more shows like this, please!

Worik writes:

I really appreciated the comments about empirical work changing people's mind.

Part of learning is changing your mind and having your mind changed.

That is why I come here and listen. I am of a completely different ideological bent than Russ but listening only to people with whom I agree makes me a bore.

I am appreciative of Russ' way of dealing with POVs with which he disagrees.

Good stuff - if only more of us would listen attentively to those with whom we disagree instead of shouting over the barricades.

Peace out, comrades!

W

Rmangum writes:

I've been listening to EconTalk for more than a year now, and I've heard you become increasingly skeptical about empirical work in economics, leading you to proclaim that "economics is not a science." The implicit assumption is that only the natural sciences like physics, which work by inducing general laws from empirical data, count as science. As far as I know this is a very modern, 20th-century view of science, under which social sciences such as economics (formerly "political economy") suffer from "physics envy." The older view of science denoted any systematic investigation of phenomena.

I know that you are influenced by F.A. Hayek, and have done several shows discussing or referencing the Austrian theory of the business cycle. But it seems that your skepticism about empirical economics could be clarified by the methodological theory of Ludwig von Mises, which held to "methodological dualism," meaning that human beings are essentially different from the natural phenomena investigated by physics (they are much less predictable, hence one cannot induce "laws" from historical data) and hence require a fundamentally different mode of inquiry, namely an a priori approach. (I am not 100 percent convinced of pure apriorism myself, but it does seem to have certain advantages over the neoclassical mathematical approach.)

Thanks for the great work, and I look forward to more fascinating shows in the future.

Comments for this podcast episode have been closed