Ian Ayres on Super Crunchers and the Power of Data
Oct 22 2007

Ian Ayres of Yale Law School talks about the ideas in his new book, Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart. Ayres argues for the power of data and analysis over more traditional decision-making methods using judgment and intuition. He talks with EconTalk host Russ Roberts about predicting the quality of wine based on climate and rainfall, the increasing use of randomized data in the world of business, the use of evidence and information in medicine rather than the judgment of your doctor, and whether concealed handguns or car protection devices such as LoJack reduce the crime rate. The podcast closes with a postscript by Roberts challenging the use of sophisticated statistical techniques to analyze complex systems.

Adam Ozimek on the Power of Econometrics and Data
Adam Ozimek of Moody's Analytics and blogger at Forbes talks with EconTalk host Russ Roberts about why economists change their minds or don't. Ozimek argues that economists make erratic but steady progress using econometrics and other forms of evidence to...
Susan Athey on Machine Learning, Big Data, and Causation
Can machine learning improve the use of data and evidence for understanding economics and public policy? Susan Athey of Stanford University talks with EconTalk host Russ Roberts about how machine learning can be used in conjunction with traditional econometric techniques...


Salaam Yitbarek
Oct 22 2007 at 8:23am

It’s a good thing you didn’t have him on with Taleb.

Allan Niemerg
Oct 22 2007 at 11:29am

The podcast shows an interesting tension: on the one hand, data analysis can lead to the discovery of subtle and unexpected relationships, but on the other, it can be used to suggest scientific certainty where there is none.

What I would be interested in hearing more about is the impact of sharing datasets as a method to keep people honest. It seems to me that if someone makes a claim and provides a dataset for others to perform the tests themselves, then one can feel more comfortable that the claim was made honestly. Conversely, those who make claims but do not provide datasets ought to have their claims discounted accordingly. So, I’m curious to know how often data is shared. Russ mentioned that some journals require the data to be shared, but is this just a few journals, or is it industry practice?

John Lott
Oct 22 2007 at 11:58am

1) Here is a list of papers that find support for the hypothesis that right to carry laws reduce violent crime rates:


The debate among refereed papers is whether right to carry has no effect or whether it produces a benefit. To me that is important and that result has a real impact on the debate.

2) If the net benefits from LoJack are really $4,300 per device, one really has a hard time explaining why insurance companies fight so hard against giving discounts. The external benefit story doesn’t cut it because, as I have explained in Freedomnomics and elsewhere, there are multiple ways to internalize those externalities. For example, Porsche could put LoJacks on all of its cars. If anything there would be an external cost because criminals would switch toward stealing BMWs and other cars.

scott clark
Oct 22 2007 at 12:39pm

Russ, maybe you should consider another interview with Robin Hanson, this time to discuss overcoming bias, the unsustainability of any honest disagreement, and a lot of the other topics Dr. Hanson works on outside of health. What do you say?

Luke O
Oct 22 2007 at 3:23pm

I agree with Salaam that Taleb would have trounced this guy.

I looked at his site, some of which is alright. He may even be right on “average” but where he advocates prediction in fat-tailed environments he is precisely wrong.

He reminds me of the character of Carlos in “Fooled by Randomness”. He will have us all drowning in rivers that are an average of four feet deep.

I think this show was well presented though and I did like the postscript comments by Russ.

Oct 22 2007 at 5:09pm

I propose a simple way to enhance the credibility of empirical work without hurting incentives for creating new datasets. Let the journals hire, say, graduate students in economics to rerun the experiments, try new specifications with the data, and produce a report on their results (maybe we can call them referee assistants). Moreover, pay them a bonus if they can disprove the relations proposed in the paper. Their incentives to cheat would be very low, for reputational reasons and because the authors would double-check what they do (an unfair assessment would never see the light of day). Journals would publish such reports on the internet.

Oct 22 2007 at 8:50pm

I agree with all above that data arranging can be the perfect justification for the wrong conclusion. But it is sort of like democracy: if we don’t look at data and we don’t look for conclusions, then how do we advance? In the EconTalk podcasts it has been mentioned many times that economics is not like physics, with a precise formula. I suggest that physics is much like economics in that both disciplines study the past to try to predict the future. Nothing is ever perfectly determined, but models are based on judgements from the past. The projections are assessed based on how they predict the future. It is time that indicates the likelihood of correctness. We never have perfect knowledge that a methodology is correct or incorrect.

Nathan Adams
Oct 22 2007 at 11:52pm

I get the impression that there is a significant vested interest within the economics profession in defending mathematical analysis, particularly multiple regression. I’m not an economist, but from the seminars I’ve attended it seems there is a lot of middling economics done where a “problem” is identified, data is collected, a regression is fit to the data, the analysis justifies funding a program (and exactly how the funding should best be spent) and, not coincidentally, the researcher extracts grant money out of the funding source. Perhaps this isn’t exactly how it goes, but I bet a lot of the funding of econ departments relies on this. As Dr. Roberts noted, the mathematical analysis is necessary to give the study at least the appearance of science.

Art Devany linked to this (http://www.technologyreview.com/article/19530/page1/) article over the weekend. It is about the difficulty of modeling financial markets (for gain) with any reliability.

It would be great to hear Dr. Devany interviewed on EconTalk, and Steven Pinker too. Pinker may not be obvious, but I think evolutionary psychology has a lot to say about our economic relationships with our families, our friends, and the rest of society and how these relationships differ.

Greg Ransom
Oct 23 2007 at 3:12am

Hayek’s argument is that the central task of economics is to explain the overall spontaneous order of the market and that this overall order can be given a causal explanation only by reference to something which can’t be captured in a mathematical model or in statistics: individuals learning, valuing and adapting within the context of changing relative prices and changing local conditions. Much of this learning can only be understood as changes in judgment and changes in perception; it’s “subjective,” to use a not-so-helpful word. So Hayek’s argument against Friedman, when Friedman was trying to be a philosopher of science, was that Friedman was all wrong about the explanatory strategy of economics and the scientific nature of that explanatory strategy. But if you take a look at Friedman’s _Free to Choose_ you’ll see that Friedman adopted Hayek’s explanatory strategy when he himself tried to explain the overall order of the market, and the process by which that order comes about. Really: you’ll find Friedman giving the Hayek account, the same account Friedman often taught in his graduate classes using, that’s right, Hayek’s _The Use of Knowledge in Society_ (see Thomas Sowell’s autobiography).

Will this insight ever sink in at MIT — or at the local fresh water state college? Doubt it.

Greg Ransom
Oct 23 2007 at 3:24am

Suggested future EconTalk guests:

Steven Horwitz on the economics of the family.

Philip Mirowski on the history of 20th century economics.

Bruce Caldwell on Hayek and the Hayek collected works project.

Peter Boettke on introductory econ textbooks and econ 101 classes.

Daniel Hammond on Milton Friedman and the making of Chicago Price Theory.

Ron Hardin
Oct 23 2007 at 4:41am

There’s a speculation that insurance companies would try to minimize medical costs. I don’t think that’s their incentive.

Insurance companies want the highest possible costs.

Their premiums cover costs in any case. But higher costs give them more customers, and thus a larger flow of funds to take a cut of, as fewer people can afford to self-insure.

There’s no market in hair-styling insurance because we don’t yet have third-party payments to make haircuts expensive, which could then tip prices into instability as they rise and insurance becomes necessary.

An insurance company that then drives price back down is out of business.

Lee Kelly
Oct 23 2007 at 6:36am

Regarding the disagreement between Ayres and Lott: I think the lesson to learn is that using this method to decide policy is at best folly, and at worst conceit. The implicit assumption is that if Ayres is correct, and concealed firearms do not reduce crime rates, then the correct policy is to prohibit concealed firearms. If Lott is correct, and concealed firearms do reduce crime rates, then the correct policy is to permit concealed firearms.

For me, the mere fact that this disagreement exists is important to consider when forming policy, and not just the facts proposed by Ayres and Lott. It is a testament to the limitations of our knowledge (including statistical analysis) and our tendency toward bias. I think that holding policy decisions contingent on such investigations is a mistake, especially where the chosen policy will be enforced on everyone.

If this discussion were conducted with the understanding that the facts would not have any consequence for policy, then would Ayres or Lott be more likely to concede when faced with overwhelming evidence opposed to their respective theories? I think there would be a great deal more progress in economics, and in the pursuit of truth through critical discussion, if policy decisions were decoupled from the scientific debate.

Butler T. Reynolds
Oct 23 2007 at 7:58am

I work as a computer programmer in the market research field. I know almost nothing of statistics. When I need to know, a person with a master’s degree or PhD in statistical analysis tells me what my software needs to do.

As someone who is from outside the field, here are my observations:

1. You can crunch the numbers of the response data all day, but if the right questions are not asked then you just won’t learn anything valuable. Seems obvious, but I’ve answered many satisfaction surveys that never really asked the question that got to the heart of my dissatisfaction.

The customer satisfaction questions are usually so narrowly focused that I really wonder sometimes how much companies are learning about their customers.

2. I have been surprised at how eager our clients are to want to modify questionnaires, weight the calculations, or even “calibrate” (wink, wink) the results to make some things look better or some things look worse.

3. Sometimes valuable things can be learned from the research, but overall, I think that market research is taken way too seriously. Perhaps I’m wrong, but everyone just seems to pretend that this stuff is really deep and meaningful.

I think that when somebody’s bonus is dependent upon the results of a particular survey, people get really sensitive about it. The survey then becomes political within the company. That’s when the value of it tanks.


Simon Clark
Oct 23 2007 at 8:26am

As someone who lives in a country where handguns, let alone concealed ones, are entirely banned, I wonder if I am more or less biased than those who don’t when I say that this has been a disastrous policy for my country.

Oct 23 2007 at 11:41am

Recent studies, larger than the originals, have put much of what people thought about healthy and unhealthy diets in doubt. Complexity, it seems, can confound statistical studies.

Eric S. Howard
Oct 23 2007 at 1:57pm


Thanks again for a great podcast! I realize the discussion really just scratched the surface, but it was stimulating and informative. I am an economist who conducts education research, so the issues at play in the discussion were particularly salient. Here are a few observations based on my experience:

1. Complex vs. “Simple” Phenomena: Following from Hayek, I have become increasingly convinced that the distinction between sciences is not so much methodology (i.e., methodological dualism) as the type of phenomena under investigation. The more complex the phenomena, like human action and decisions, the more pattern-like (to steal a term from Hayek) our explanations/predictions must be, for many of the empirical reasons you mentioned, Russ. A very telling example was Ayres’ response to your query regarding a definitive empirical study that has significantly impacted the field: he could only offer two rather obscure references (not to slight the scholars involved in either piece, and tellingly Heckman’s is economic history).

2. EBM: This topic is of special interest to me since educational researchers have increasingly been encouraged to follow the lead of medicine with the development of “evidence-based education”. However, I am not sure many of us are actually reading the medical literature on EBM, especially the literature on medical education and the teaching of EBM. When we do, we find that within the medical literature there is quite a discussion regarding how to assess EBM and measure its impact. With regard to randomized controlled trials (RCTs), the recognized “gold standard” in medical and educational research, we see limitations here also. In a recent example, Kent and Hayward (Sept. 12, 2007, JAMA, Vol. 298, No. 10, pp. 1209-1212) warn against the “spurious false-positive subgroup results from chance fluctuations” (p. 1209) among subjects in RCTs. So, even in the medical field, where clinical trials have a long and well established history, we see limitations, especially when we go to treat individual patients: “Averaging effects across such different patients can give misleading results to physicians who care for individual, not average patients” (p. 1209).

3. Methods & Phenomena: In the social sciences we are primarily, though not exclusively, concerned with human action as our outcome of interest, or with institutional arrangements that are the result of human action/decisions. True, many of these can be quantified and measured, as well they should be in many instances. But to draw on analogies from medicine, where the outcome of interest is often biological/chemical/physiological and not human action per se (granted, many of the outcomes in medical research can be impacted by human action/decisions, say in a clinical trial), is to fundamentally misrepresent the phenomena under investigation. Further, while RCTs are in fact the best design from which to ultimately derive causal implications, even here we find limitations and shortcomings for establishing such inferences. This becomes even more pronounced when one shifts from, say, pharmaceutical clinical trials to studies focused more exclusively on human action (i.e., studies of medical education and the teaching of EBM). This is precisely what we see in economics, particularly in the applied fields and micro.

I apologize for the short novel. Best to one and all and thanks again for a great discussion Russ!

Eric S. Howard

Oct 23 2007 at 3:10pm

I think it is very important to remember the limits of statistical knowledge, but when Ayres asks, “what is the alternative?” you have to answer him for your critique to have any merit. If the alternative is simply to rely on “theory,” our own personal narratives that reflect our biases much more than the empirical results we find, is that really better?

Also, there was no discussion about the magnitude of the change rather than the sign. A lot of the examples Russ gave, like the effect of immigrants on native low-skilled workers, are important because it is hard to know the sign. Borjas says the wages of high school dropouts have dropped 5% due to immigration since 1980, using a worst-case-scenario approach, while some other researcher has estimated they rose 2% due to immigration. These works aren’t nearly as contradictory as just looking at the signs makes them appear. They both essentially estimate small effects. The drop wasn’t 5% a year, but at worst half a percent a year. That’s notable.

Also, comparing the concealed handgun and LoJack studies as identical cases is a little silly. The differences are too important. When people look at concealed handguns causing crimes, they worry about escalation (a crime like burglary may be more likely to become murder if both sides have guns) and instrumentation (making it easier for law-abiding citizens to carry guns may make it easier for criminals to carry a gun). One aspect of this was discussed on the podcast: a person carrying a gun in a moment of passion may get emotional and become a criminal (i.e., a marital fight becomes murder; high-school, college-age, gang-related, or bar fights may go from shouting matches with the potential for fisticuffs to murder when more guns are present). But instrumentation has another effect: it lets a crime-committing person carry a gun more often. If a criminal must worry about getting caught in possession of a gun, he may not carry it as much and thus may commit fewer crimes. He may substitute toward other weapons (like knives) that may be less deadly. How much effects like these matter is an empirical question. Weighting them may be outside of economics altogether (i.e., if guns make burglary happen less but result in death in a higher percentage of burglaries, what’s a good trade-off?). It is very hard to think of a plausible story where LoJack causes an increase in car theft. Thus, it isn’t surprising that LoJack seems to carry big gains, while the effect of guns seems to jump back and forth in sign close to zero.

I’m convinced enough about Lojack that I’d vote for a subsidy in my city to encourage purchase of Lojack style technology. If enough people agree, we will get a controlled experiment and a chance to gather more data. I am willing to risk the expense of taking action when none is required. With guns I’m not convinced enough either way to try to change the laws. I live in Dallas, where I think we are allowed to carry a handgun.

Also, I’m a grad student in economics, and I don’t really understand why people have such strong ideological biases for such silly things sometimes. If guns cause or deter crime, it doesn’t shatter my value system. If Lojack has a meager and not large effect, I’ll live another day. I’ll still like economics. Even if current research is mostly fulfilling biases, eventually new people without the same biases will come in and review the research and decide which has more merit.


Donald Browne
Oct 24 2007 at 1:15pm

Enjoyed the commentary at the end of the program. Suggest more of this on future podcasts.

Greg Ransom
Oct 24 2007 at 7:20pm


Philosopher Larry Wright, UC-Riverside has written a couple of papers on argument and learning which takes on those who wish to quantify things like the gains in understanding someone might get from a course in, say, critical thinking. Wright is influenced by Kuhn and Wittgenstein on the crucial role of training and learning from examples in the gaining of knowledge and understanding. Wright doesn’t believe that all of what can be acquired — or all advances in understanding — can be numerically quantified and statistically analyzed.

The issue here isn’t simple vs. complex phenomena but the difference between “information” and ability demonstrable through “going on together”, e.g., like a scientist working competently in the lab, or a language user working competently with a part of language.

Wright’s account of all of this is much richer than what I’ve given here, and focuses a lot on the function and limitations of argument, having a bit in common with Russ’s discussion of the limitations of numerical and statistical argument in economics.

Oct 26 2007 at 5:14am

Dear Dr Roberts,
Regarding the following passage in your linked blog post:

Regression is cheap so we buy a lot of it. Leamer’s point is that this is “faith-based” empirical work. You just keep running the regressions including or excluding this or that, trying this or that specification until you find the result that confirms your worldview before you started the work.

Doesn’t out-of-sample testing largely solve this problem? So far as I understand it, this means keeping part of the data set completely separate from the part you used to calibrate the parameters of your model. This is not any sort of sophisticated statistical technique; it simply tests whether your model works ex ante as opposed to ex post. I’m not disagreeing with you about the other problems you point out with statistical analysis.
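The mechanics are simple enough to sketch. Below is a minimal, self-contained toy example of my own (not anything from the podcast or the linked post): calibrate a model on one portion of simulated data, then score it only on a held-out portion the fitting never touched.

```python
import random

random.seed(0)

# Simulated data: y depends linearly on x, plus noise.
xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 1.0)) for x in xs]

# Set aside the last quarter of observations BEFORE any fitting.
split = int(0.75 * len(data))
train, test = data[:split], data[split:]

# Calibrate a one-variable least-squares fit on the training part only.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
intercept = my - slope * mx

# The ex ante check: mean squared error on data the model never saw.
mse_test = sum((y - (slope * x + intercept)) ** 2 for x, y in test) / len(test)
```

If the held-out error is much worse than the in-sample error, the specification was probably tuned to noise rather than to a real relationship.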

The podcast archive you are building up is hopefully going to be a fantastic resource for many years to come. Many thanks.

Oct 26 2007 at 6:26am

…I definitely liked the commentary. I do wonder, however, if that would have a negative effect on your guests. As in, if they thought you were going to have an uninterrupted chance to disagree with them after they were done speaking, perhaps fewer would want to be on in the first place.

Oct 26 2007 at 8:51am

Having Edward L. Glaeser on to talk about zoning, slow growth policies and real-estate prices would be interesting to me.

Also it might be interesting to have Jerry Taylor and Peter Van Doren on to talk about their Cato institute paper on gasoline taxes.

Although it is easy to find fault with the paper in other areas I find their discussion of the supposed relationship between higher gasoline taxes and national security very good.

Isaac Crawford
Oct 27 2007 at 8:03am

I’d like to see Steve McIntyre of Climate Audit (http://www.climateaudit.org) on EconTalk talking about the misuse of statistical analysis, the importance of auditing statistical work, and especially the issue of data sharing.

Isaac Crawford
Blogging in Yemen

David Stearns
Oct 27 2007 at 10:12pm

The issue of running multiple statistical regressions on data sets in order to find the one that confirms the researcher’s pre-existing bias seems very relevant to the debates about climate modeling. We can never know for certain exactly what variables influence the climate (David Hume pointed out our epistemological isolation from cause and effect quite compellingly and elegantly). And yet the models shown at http://www.ucar.edu/research/climate/warming.jsp, for example, show a remarkable correlation between the observed record and the model that takes anthropogenic and natural sources of greenhouse gases into account. Of course there’s no way to test the predictions of the model when the anthropogenic variables are taken out of the equation, but doesn’t the success of the model otherwise at least suggest that we should take the results somewhat seriously? I’ve got to admit that the graph does look quite compelling to me. I’m curious what others’ views are on the correct attitude towards models like this. Complete skepticism, because the climate is so complicated there’s no conceivable way for the scientists to have discovered the actual mechanisms of climate variance? Provisional acceptance, based on the apparent success of modeling the past?

Oct 28 2007 at 12:04am

In response to Charlie’s comment about keeping the data that forms the basis for the modeling separate from the confirmation data set: this is the correct methodology, but think of the risk. You just spent a year trying to model the influence of the variables, you come up with your model, and it does not comport with the data set aside for the check. If you now go back and develop a new model, the check data is already influencing the model through that first check, and you have no untouched data left to check the second outcome.
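That risk can be made concrete with a small simulation (a hypothetical illustration of my own, in which every "specification" is pure noise by construction): if you are allowed to keep retrying specifications against the check sample, some junk specification will eventually fit it well, and only data that never entered the selection process reveals the fit as spurious.

```python
import random

random.seed(1)
n = 80

def corr(a, b):
    # Plain Pearson correlation, no libraries needed.
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

y_check = [random.gauss(0, 1) for _ in range(n)]  # the "check" data set
y_fresh = [random.gauss(0, 1) for _ in range(n)]  # data never used in selection

# Try 500 junk specifications; keep whichever fits the check sample best.
best_x, best_r = None, -1.0
for _ in range(500):
    x = [random.gauss(0, 1) for _ in range(n)]
    r = abs(corr(x, y_check))
    if r > best_r:
        best_x, best_r = x, r

# The winning specification looks impressive on the sample it was
# selected against, even though every candidate was pure noise.
r_fresh = abs(corr(best_x, y_fresh))
```

In runs like this, best_r comes out sizable while r_fresh tends to sit near zero: once the check data has steered model selection, it is no longer an honest check.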

In all this data crunching there are still real checks that get to the heart of the issue. The Victoria Transport Institute has excellent theory on how to account for all the external costs in determining whether transit should be subsidized. They have an outstanding range of factors that need to be considered. But their results are not convincing to me because they claim riding a bike has lower internal costs than driving a car. Every workday I see thousands of folks driving cars, and once a month I see one bike actually commuting to work. This discrepancy has a significant impact on how their model validates their favored action. It takes work and effort to interpret studies. Regular folks usually judge on what is best for their pocketbooks, and as Dr. Roberts points out, that method usually works out pretty well.

Ken Willis
Oct 28 2007 at 1:48am

Simon Clark: you don’t need any bias at all to see the truth in front of you, and you have seen it and recognized it. Namely, that gun bans do not get rid of guns; they only take guns away from the good people while allowing the criminals to keep their guns, thus enabling more violent crime. You didn’t need any statistical data or analysis to see this. Your eyes, ears and common sense were all you needed. You used these tools, which theoretically are available to everyone, and they served you well. Now if only some of our learned professors and students could discover the power of these faculties.

Charlie: You say a crime like burglary would be more likely to become murder if both burglar and victim have a gun. Huh? Wouldn’t it be more likely to become murder if only the burglar had a gun?

No, I guess you wouldn’t see it that way because the rest of your comment also suggests that you are one of those people who think that everyone else has as little control over their emotions as you do and is going to murder his wife or his neighbor over some silly argument. I suggest you check yourself into the nearest mental hospital and get yourself under control before you try to tell normal people how to conduct their lives and handle their self defense.

And you’re a graduate student in economics. God, help us.

Oct 28 2007 at 6:21am

I second everything that Charlie said and think it would be great if Dr. Roberts could reply.

Podcast request: Emily Oster.

Oh, and I think Ken Willis’s comment should be deleted for etiquette reasons.

Oct 28 2007 at 8:18am

The stupid ad hominem character of Ken Willis’ reply aside, there’s a substantive error that needs to be pointed out. He’s very upset because he thinks someone is trying to tell him (an apparently normal, rational person) how to conduct his life and handle his self-defense. Society only has a right to uphold each citizen’s freedom only in so far as that freedom doesn’t adversely affect other citizens. Duh. And that’s the issue: whether concealed handguns in general adversely or positively affect society at large.

Lee Kelly
Oct 28 2007 at 9:18am

Society only has a right to uphold each citizen’s freedom only in so far as that freedom doesn’t adversely affect other citizens. – ChinaJoe

This principle, if applied consistently, would destroy liberty: “In other words, free speech is not free. There are times when the speech of others will upset or offend, and maybe dispel a falsehood from which we had much to gain. In economic terminology, freedom of speech comes with externalities. However, for those who value rationality, peace and liberty, these externalities must be tolerated.”

Russ Roberts
Oct 28 2007 at 9:27am


It is true that LoJack and concealed handguns are different: concealed handguns can lead to more violence while LoJack has no such potential effect. But we can get direct evidence on the probability of concealed handguns leading to more violence. We generally measure gun attacks and gun deaths. (The hard part is measuring the attacks/deaths that don’t occur.) I will try to find the data, but I think very few crimes have been committed by people with a permit to carry a concealed handgun. That kind of empirical evidence can of course be mismeasured. But my claim is that we are better at assessing the quality of that kind of empirical measure than a result established by two-stage least squares.

Your point about magnitudes is relevant. I think we’ve learned something from the empirical debate on immigration–that even when you torture the data unmercifully, the effects on wages are relatively small. Some would disagree with that conclusion.

At the end of the next podcast (Yandle on tragedy of the commons), I add a few more thoughts on these issues. I don’t argue for replacing data with introspection. I am arguing that some of the most glamorous statistical results of recent years are not very robust.

Ken Willis makes a relevant point though I would say it differently and I would encourage him (and others) to keep the tone in this forum civil. I agree with him that gun bans are likely to impact “good” people more than “bad” people. But that is as always an empirical question. The intuition is undeniable. But what about the magnitude? I think that the source of his confidence is based on his impression of that magnitude–a casual empirical observation that the gun ban in Britain has not ended crimes committed by gun-bearing criminals. Or in the United States, it is clear that gun-free campuses can be exploited by gun-bearing criminals. It is very important to remember that no legislation is enforced perfectly.

These are casual impressions. But these are impressions that are amenable to fairly transparent statistical evidence. Anyone who has that evidence is welcome to leave it here.

By the way, my colleague Robin Hanson has criticized my position here: http://www.overcomingbias.com/2007/10/if-not-data-wha.html

I plan to respond more generally on these issues at Cafe Hayek (http://www.cafehayek.com) this week.

Ken Willis
Oct 28 2007 at 12:50pm

I don’t care for people who make ad hominem attacks so I will take this charge seriously. An ad hominem attack is a logical fallacy in which a claim or argument is rejected on the basis of some irrelevant fact about the author. For example:

Person A makes a claim X.
Person B makes an attack on person A that is irrelevant to claim X.
Therefore claim X is false.

I think it is in a gray area whether I made an attack on Charlie’s person that is irrelevant to his claim, but it’s not worth arguing so I’ll retract my comment to the extent I suggested Charlie should check himself into a mental hospital and get himself under control before he accuses people he doesn’t know of being likely to commit murder. Sorry, Charlie. Please forgive me.

But Charlie, do you have any evidence to support what you said? There exists a mountain of evidence against what you said. I guess I just assumed that it is such common knowledge that everyone who has been paying attention would know it. Maybe not. Also, perhaps the intuitive sense against what you said is not as strong as I thought, at least in certain circles.

There isn’t room or time here to recount the “mountain of evidence”, so just let me say this. No one can legally buy a gun in America without an instant background check through the FBI criminal data base. No one can get a permit to carry a concealed gun without a more extensive background check taking about 60 days, being photographed and finger printed, and in most cases offering evidence of good moral character.

Florida was the first state to adopt a “shall issue” law, which provides that anyone who meets certain objective criteria is entitled to a permit. When this law was under consideration, there were predictions, from people who I believe have an outlook similar to yours, Charlie, that blood would run in the streets. It never happened. In fact, permit holders in Florida are probably among the most well-behaved and law-abiding citizens in the state.

Since Florida enacted its law, this same scenario has been repeated in at least 37 other states. I doubt that you could find a greater database of empirical evidence to support a thesis, in this case the thesis that allowing law-abiding citizens to carry concealed firearms has no negative effects, and at the very minimum has the positive effect of recognizing an important liberty interest in people who are full-fledged “citizens” of a Republic and not mere “subjects.”

So, Charlie, I don’t think there is any support for what you believe or what you said. Permit holders are surely as likely to get into emotional situations and silly arguments as anyone else, but they do not seem to be letting their emotions get away from them. They remain just what they always were: law-abiding and responsible citizens. Just the sort you would want for your neighbor, who might come to your aid in an emergency.

I’ve apologized to you. Is it time for you to apologize to them?

Oct 28 2007 at 10:56pm

I have been listening to EconTalk for some time now and really enjoy the experience. I love that Russ brings on a wide variety of speakers; however, I found this topic to be a bit off. Perhaps because I just finished reading The Black Swan (which I was introduced to here), but more due to my natural trust of what numbers tell us. I also have a healthy distrust of intuition. I think that Ian was a bit extreme, and that everything must be approached with a large dose of skepticism.

Ken Willis
Oct 29 2007 at 12:42am

“Society has a right to uphold each citizen’s freedom only insofar as that freedom doesn’t adversely affect other citizens. Duh.” - ChinaJoe

A citizen acting in self defense only adversely affects one other citizen, the one trying to kill him. So, “Duh” back to you, ChinaJoe.

Oct 29 2007 at 1:01am

It always amazes me that people care so deeply about this issue. I make some arguments that LoJack and gun ownership are hardly comparable, and I really touch a nerve. So I guess the concerns about biases being too high for good research may be justified, though I still have hope that time and evidence heal that.

“You say a crime like burglary would be more likely to become murder if both burglar and victim have a gun. Huh? Wouldn’t it be more likely to become murder if only the burglar had a gun?”

The escalation argument is that someone pulls a gun on a person and tries to commit a robbery or burglary; if the person has no gun, they let the crime succeed without confrontation. If the person has a gun, he tries to defend himself, and both the criminal and the defender are more likely to get killed. Most robberies do not end in murder. It’s possible that argument is wrong, but it’s definitely testable. And we can’t just gloss over that it is totally different from LoJack. A car with LoJack never tries to defend itself; it never puts itself in more danger. All of my arguments were based on differentiating the cases of LoJack and gun ownership. Russ thinks they are the same; I think they are scarcely comparable.

I’m sure you have many reasons to dislike gun control, and maybe it even comes from a well-thought-out value system. But I just don’t care about your value system. From a research point of view, I don’t care if you think gun ownership is an essential liberty and government is an abomination. It’s irrelevant. All I care about is the facts; the voters and politicians can decide the policy tradeoffs.

It’s always funny and surprising to me how hard of a time people have separating an argument and an opinion. I gave several arguments that guns cause crime, but I never gave any opinions about them. I never said I agreed or disagreed. I said this is what people “worry” about and they don’t worry about it with Lojack.

Anyway, my reading of the literature is that gun control laws have just about no effect on crime one way or the other. So if you are really looking for an opinion there it is. At the voting booth I just don’t care about gun control/gun rights enough to get in a hissy fit, that is why I think there is hope that we might actually have some unbiased researchers out there.

-Anyway, thanks to Lauren for telling me I was “personally attacked” otherwise I probably would not have stopped back by, and Russ had even responded directly! (swoon). To Ken Willis, what you said didn’t hurt my feelings any, I care much more about free speech and exchange than gun control. I can’t remember if that makes me a Liberal, liberal, conservative or Conservative. Should I be surprised (even offended?) that sweet Lauren Landsburg is defending me as part of an Econ Talk nanny state? Or does a little law and order (censorship?) make us all freer to exchange ideas? Ideology – so interesting, so confusing.


Oct 29 2007 at 6:15am

I really, really enjoyed the podcast. I also think I agree with your comments at the end. Still, I think you should not have made them. Essentially you said in your final comments that you are highly, highly sceptical of the kind of work Ayres does. Which is fair game, if you say it during the conversation and give him a chance to answer. But putting it at the end, as the conclusion, where he cannot respond anymore, is a cheap shot.

Russ Roberts
Oct 29 2007 at 12:51pm

David and Shawn,

I certainly didn’t mean the postscript as a cheap shot. I wanted to elaborate on a point I raised in the podcast, one that I think Ayres agreed with in the main interview: a lot of people have trouble accepting results that don’t conform to their world view, and some statistical techniques are prone to abuse. As I point out in this week’s postscript, I did not mean to imply that either Ayres or John Lott is dishonest in his own empirical work. I just found their mutual lack of being convinced by the other’s work a powerful example of what I think is a more general phenomenon.

Nov 5 2007 at 1:39pm

I am not sure that Taleb would have trounced this guy – not to be a contrarian. Taleb is funny and has an important message, but his fundamental claim that statistical analysis is mostly useless does not hold water. What we need is someone who could take the best of both writers – there are times when the kind of analysis that Ayres offers is absolutely useful. This was a very interesting podcast.

Nov 13 2007 at 8:47pm

I do statistical analysis for a major bank, and Roberts and Ayres touch upon it, but they just don’t quite tackle it. If you work for a company, and if the bosses don’t really understand statistics – and they don’t – the “con” becomes the path to promotion. At Capital One and other banks … you find something … or you watch the con artists get promoted. Telling your boss that you did a test and didn’t find anything is akin to failure … and even a novice statistician can find whatever the boss wants. Politics. Politics. Politics. You’ve got to address that issue.

I think Kahneman’s work is fantastic and offers a beginning to address this issue. Russ … can we get Kahneman on the podcast and ask him his thoughts on statistics and politics at work? Please.

Nov 26 2007 at 3:02pm

When I heard the beginning of the podcast I immediately thought of Taylor and Deming (Frederick Winslow and W. Edwards, of course) but heard not a mention in passing. Is this a bias against applied economics? Or a bias against applied economics done by engineers without political economic axes to grind?

Their roles in industry parallel the roles played by the many people in medicine that changed medical practice, in particular the ones you talked about in the podcast. Deming, in particular, is significant because his methodology was rejected by much of US industry just as you note that today’s doctors reject the statistical approach to justifying pre- and proscriptive medical procedures. Detroit’s rejection of Deming and Japan’s embrace of Deming goes a long way to explain the divergent reputations of the two groups of automakers on quality. As one from the computer industry, I point to Intel’s biggest innovation being the rigorous and aggressive application of Deming’s central thesis: statistical analysis leads to better and CHEAPER products. It is all about applied economics.

As you note in your discussion and postscript, ideology might bring all this supercrunching into question, but after listening to Fresh Air today, I suggest a fertile field for exploring the efficacy of such econometric analysis, where the numbers are large enough that ideological bias can’t overcome the data and analysis:
“Mark Schapiro, Exposing a Toxic U.S. Policy

“Fresh Air from WHYY, November 26, 2007 · Investigative reporter Mark Schapiro explains in a new book that toxic chemicals exist in many of the products we handle every day — agents that can cause cancer, genetic damage and birth defects, lacing everything from our gadgets to our toys to our beauty products.

“And unlike the European Union, the U.S. doesn’t require businesses to minimize them — or even to list them, so consumers can evaluate the risks. Schapiro argues that that policy isn’t just bad for public health: In an increasingly green economy, he says, American businesses stand to get shut out of a huge market.”

A couple of points he makes:
- The EU is larger than the US economically: 500 million vs. 300 million people.
– The EU is much more aggressive than the US in regulation and is now banning products that when prohibited in the EU get “dumped” in the US
– The EU is driving the world product development because of the size of its market in the world
- The EU has an increasing economic advantage because its industry is being driven to innovate, while the attitude in the US leads industry to resist innovation.
– Complying with the EU regulations does not increase product costs nor reduce industry profits.
– The EU is looking to the long term because the governments that staff the EU regulatory regime pay for healthcare and they want to control costs.

In truth, these are hypotheses, but they would seem to be ones that can be tested. And the interesting challenge is to find a significant set of statistics that supports the ideological point of view that government regulation or involvement in industrial or product design has negative effects. Fertile territory for doctoral and post-doc research, right?

And a bit on methodology. One way to eliminate the sense of ideological bias and the massaging of data to get the “required” outcome is to have a central registry of hypotheses, where economists send their hypotheses prior to doing their supercrunching. Say the econ journals required that hypotheses be registered X months before submission or publication, and in advance of the availability of some portion of the data. Then all registered hypotheses cited in the paper, and all those preceding them, would be revealed for all to see. To cherry-pick the data that fit a result, a researcher would have to submit maybe 50 hypotheses in advance, crunch the data, and publish based on the 2 or 3 that fit the desired outcome. Such cherry-picking would be readily apparent when an ideological adversary crunched the numbers for the other 48 and found that 40 produce no significant result and the remaining 8 run counter to the published result.
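That cherry-picking arithmetic is easy to demonstrate with a quick simulation (my own sketch, not from the discussion; the function name and parameters are invented). Test 50 "hypotheses" that are really pure noise at the 5% significance level, and on average 2 or 3 of them will come up "significant" anyway:

```python
import random
import statistics

def false_positives(n_hypotheses=50, n_obs=200, seed=1):
    """Test n_hypotheses 'effects' that are really pure noise.

    Each hypothesis compares the mean of a noise sample against zero
    with a simple z-test; at the 5% level we expect roughly
    0.05 * n_hypotheses spurious 'significant' findings.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_hypotheses):
        sample = [rng.gauss(0, 1) for _ in range(n_obs)]
        mean = statistics.fmean(sample)
        se = statistics.stdev(sample) / n_obs ** 0.5
        if abs(mean / se) > 1.96:  # two-sided 5% critical value
            hits += 1
    return hits

print(false_positives())  # typically a handful of 'significant' noise results
```

Registering all 50 hypotheses in advance is exactly what makes this multiple-comparisons problem visible to readers.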

A comment I heard recently (I can’t recall who made it, so it isn’t my idea) is that liberty is paid for with blood every day, not only by the blood shed in war. One can argue that we should allow guns to be carried in furtherance of liberty even if the statistical outcome is that the parent most likely to carry a concealed weapon is also more prone to pulling his handgun when angry at his son or wife over some serious violation of trust, i.e., sleeping with his brother; liberty requires that the blood shed in such a case be accepted as its price.

That suggests another question for the number crunchers: What is the cost of all the airport security screening in terms of potential lives saved, i.e., dollar cost per life saved, versus the dollar cost of implementing EU industrial and product regulation in the US? If the cost to save a life from terrorism in the US is $1B and the cost to save a life through product regulation is $100K (my SWAG hypothesis of the magnitudes), what is the price of liberty in each case, and are we willing to pay it?

Nov 26 2007 at 4:16pm

Without getting into the ideological debate on gun laws, I would note that the data collected by the FBI’s statistics division, which largely drives the analysis and reporting in local police departments, codes for the relationship between perpetrator and victim. The gross data point I see is that most of the time the two parties are related, either by blood, marriage, or social contact such as neighborhood, school, or work.

Thus, unless the statistics collected (and the data are selective, since police officials in each department decide whether to authorize the effort to fill in the FBI forms) are slanted toward making it look like gun-crime victims know their killers, the question of whether someone has a gun is a known quantity. If you are a gangbanger, you know your fellow gang members or rivals have guns. If you are a spouse killed by a spouse, you almost certainly knew that your spouse had a gun and where it was. If you are a kid who kills a sibling or the neighbor kid, it happens because both are playing with the gun.

As I noted in my prior post, liberty is paid for in blood every day, not in some past war. Not my observation, but one from someone whose name I can’t recall; it related to the loss of liberty since 9/11 in just the restrictions on going to the airport and boarding a plane.

After debating the issue with a pragmatic libertarian (big-L and little-l) friend who teaches gun safety (I didn’t really have a strong ideological bias, but debate is always beneficial to understanding), I concluded that as long as the cost of gun ownership is understood, guns should be allowed more widely.

But will those who most loudly defend the idea of widespread concealed-carry laws, and loudly proclaim that liberty has been paid for by the blood of patriots, defend the right to carry concealed weapons into gatherings featuring conservative and Republican speakers, in and outside the US?

When a Republican presidential hopeful who defends the right to carry guns publicly instructs the Secret Service never to check the strangers coming to his events for guns, then I will believe that he is a true believer in his stated principles on guns.

Dec 3 2007 at 9:13am

Russ, great podcast and commentary. I really respect the challenge you’ve posed to the data mining used to formulate these “biased” hypotheses. I would like to see a new formulation of the “Lucas Critique” applied to more micro issues of modern economics; more specifically, to variable isolation and tests of the level of randomness.

Podcast Episode Highlights
0:36Intro. Data and statistics. What is the case for super crunching? Orley Ashenfelter wine example. Multivariate regression to try to find underlying relations between rainfall and weather and quality of the wine. Surprisingly accurate. Can't drink Bordeaux for several months. Robert Parker, traditional wine taster, originally dismissive of Ashenfelter, but has started incorporating weather into his predictions. Statistics did better than experts, but traditional experts resist the new breed of number-crunchers. Ignaz Semmelweis, doctor who discovered doctors' hand-washing helped save mothers in childbirth. Resisted by medical profession. Even today doctors tend to go from one patient to the next without washing their hands. Business leaders, Wal-mart and hurricanes example. Statistical analysis of consumer purchases after hurricanes. Story goes that the analysis flagged Pop-Tarts, so they stock up on Pop-Tarts before hurricanes--urban myth? Does putting beer and diapers together in quick-purchase stores increase sales or is it an urban myth? Grocery store layouts, toothpaste vs. toothbrush placements. Weinberger podcast.
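In the spirit of the Ashenfelter example, here is a hedged sketch (synthetic vintages with invented coefficients, not Ashenfelter's actual data or model) of fitting wine quality on growing-season warmth and harvest rain by ordinary least squares, and recovering the underlying weights:

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. Educational sketch only."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic 'vintages': quality = 5 + 0.6*warmth - 0.4*harvest_rain + noise
rng = random.Random(42)
X, y = [], []
for _ in range(200):
    warmth, rain = rng.gauss(17, 2), rng.gauss(3, 1)
    X.append([1.0, warmth, rain])
    y.append(5 + 0.6 * warmth - 0.4 * rain + rng.gauss(0, 0.5))

print(ols(X, y))  # estimates roughly recover [5, 0.6, -0.4]
```

The point of the episode's wine story is exactly this: once the data are crunched, the fitted weights predict quality before anyone tastes the vintage.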
10:54Randomized experiments. Why when you call a credit card company do they ask in a recording for some info and then ask you again? The recording is not telling the agent the info. Why not? Capital One example. What kind of products and services do you want? What you type in during the initial phone contact affects the routing of your call. Prediction, up-selling. Computer calculates even the interest rates you are offered by the representative. Randomized experiments based on mass mailings test what kinds of promotions work best: kitten picture vs. puppy picture? 2% teaser rate for 2 months or 1% for 4 months? Law of large numbers kicks in if you sample enough people. Distributions of the two groups will be the same if you have enough people. Randomized studies give very accurate answers if you have a large enough sample.
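A minimal sketch of why the randomized mailings work (the response rates below are invented, not Capital One's): randomize customers into two arms, and once the sample is large, the law of large numbers makes each arm's response rate converge to its true rate, so the observed gap estimates the causal effect of the offer.

```python
import random

def ab_test(n_per_arm, true_rate_a, true_rate_b, seed=0):
    """Simulate a randomized mailing experiment (illustrative rates only).

    Customers are assigned at random to offer A or B; with enough
    customers the two groups are otherwise alike, so the difference
    in response rates estimates the offers' true difference."""
    rng = random.Random(seed)
    resp_a = sum(rng.random() < true_rate_a for _ in range(n_per_arm))
    resp_b = sum(rng.random() < true_rate_b for _ in range(n_per_arm))
    return resp_a / n_per_arm, resp_b / n_per_arm

small = ab_test(100, 0.020, 0.025)       # noisy: arms may even reverse order
large = ab_test(1_000_000, 0.020, 0.025)  # the 0.5-point gap emerges clearly
print(small, large)
```

With 100 mailings per arm the estimates are dominated by noise; with a million, the distributions of the two groups match and the better offer is unambiguous, which is the "large enough sample" condition Ayres stresses.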
16:54EBM (evidence-based medicine). Randomized studies have been done on whether taping your knee actually gives relief to knee pain, oral vs. vitamin shots, types of acupuncture. Grading of evidence creates a kind of competition between evidence-gathering styles. Physicians are in recent years for the first time starting to do patient-specific research. Previously they might read generally in the field but they didn't have a medical library and they didn't check your specific case. Now through the Internet the physician can look up various treatments and see how they are graded, based on quality of evidence. Flip side: When a patient comes into a hospital with pneumonia, he has to have antibiotics within 4 hours. However, because of that mandate, in some hospitals everyone gets chest x-rays just in case he has pneumonia. Pitfall of rules versus gut instinct. Does the data give rise to the right rules or not? Physicians have ceded their control over treatment, and focus on diagnosis instead. Next EBM revolution may result in their ceding diagnosis, digital medical records. Doctors' decisions to order a test are not always the same as your desires. Insurance companies' incentives versus lawyers' incentives. In any business with 1000 employees, it doesn't make sense to build institution around the top 10%. Taking away some of their discretion and basing it on statistical information will do better.
25:26Prediction page has about 40 prediction tools that will help you predict things like how long you will live, your due-date if pregnant, sporting events, likelihood that a book title will become a best seller. Will give you the average length of marriage for people with your characteristics. Regression output also tells you the precision of the prediction, 95% confidence range.
27:57Regression analysis in economics: can you successfully hold variables constant? Hard to isolate the effect of one variable on another. What's happened to the standard of living in the U.S. since the late 1970s? Stagnant, if you look at average hourly earnings corrected for inflation. But that number doesn't include fringe benefits, demographics, etc.--highly controversial number. Have to trust the outcome variable. Statistical analysis cannot make accurate predictions about all things: have to be able to measure the things you care about, have to be able to run a randomized experiment, have to have a large enough population. Some claim that half of all statistical results are wrong. Need other statistics to know if that claim is true. Claim is a relative one: that in case after case, statistical prediction does better than human prediction. Common idea is that the more subtle the event, the more humans should be relied on, but in fact it's the opposite. With even ten causal factors, statistical prediction does better than humans. Humans can't bring themselves to put the right weight on the right factor when there are many factors. 83 legal experts vs. a crude statistical algorithm: the algorithm did better than the experts at predicting the Supreme Court's decisions. Supreme Court hates the 9th Circuit, California, but legal experts can't bring themselves to take that history into account. The algorithm wasn't precise but it still did better than humans.
35:33Social issues: immigration, wages, Wal-mart. For LoJack, automobiles with radio chips have been found to actually reduce crime. Concealed handgun laws, John Lott, do they deter crime in the analogous way? Lott and Ayres don't agree about handgun laws. If one person has LoJack, no thief will worry; but if half the city has it would deter car theft. LoJack is never used as an offensive weapon, but concealed weapons could potentially be used to commit crime, so which effect dominates with concealed weapons? Lott has played an important role in changing the norms of data sharing. Journals are starting to require the posting of data publicly. Leamer, "Let's Take the Con Out of Econometrics." Cost of computation has become cheap. Is there a chance we will get closer to the truth with careful studies? Russ: Monetary History of the United States by Friedman and Schwartz, gold standard of simple statistical analysis. Changed opinion about how to measure the money supply. Are there more complicated examples? Ironic that example is in macro. Ayres: gold standard is Heckman on civil rights issues, micro side, 1964 Civil Rights Act affected hiring in Southern textile industries. Donahue and Levitt abortion article. Not the final word. "Clash of competing studies helps us make progress." New categories of inquiry. Does immigration increase wages of native-born workers? No consensus yet, but we won't get it non-statistically. It may not even be a big enough effect for us to come up with a credible belief in its response. Shouldn't just trust any super cruncher. Businesses may not be following the academic clash approach, may need to hire statistical auditors.
46:46Worries: First, most findings confirm the bias of the researcher because of the range of regressions and techniques you can run, so researchers keep crunching till the results show their biases. It's a concern. Second, ordinary folks don't have the statistical sophistication to be skeptical of sloppily done findings. What is the statistical study that you believe but don't like? If you only believe the ones you like, you must be a biased consumer. Easier to cook the books on regressions than on randomized trials. First results out of Moving to Opportunity, study where they gave housing vouchers to poor families so they could move to middle-class neighborhoods. Randomized experiment. Does it impact life-chance results? Hasn't actually improved very much the lives of those who moved. That was certainly not anticipated by those who put their money into the program.
50:16Summary of issues. Social science research. Hand guns example. Spurious correlations in economics. Simultaneity problem. LoJack. Ayres very confident about LoJack but thinks Lott is wrong about guns. Vice versa for Lott. They go back and forth on details, but what if source is different. Neither may measure with any precision. Data may simply not be good enough to allow us to measure even the direction of the results. Results can be paraded around as scientific when they are not. Can either side in a policy debate concede that some statistical results are more convincing than others? Two-stage least squares--techniques are glamorous and elegant but may be used with data that are not up to the task. Ed Leamer: wrong conclusions happen often because researchers try so many specifications that fail and throw those out, trying again till they find something that works. Bloodletting in medieval times. Leamer quote. Faith-based empirical work. Empirical tournaments, the Iron Economist.