Throughout the 1990s, scientific academia fought a civil war over the role of science in deciphering the truth of the world. Most natural scientists held that science is purely objective, while many social scientists argued that scientists were ignoring the inherent limits that human perspective and bias place on objectivity. In the seminal moment of this conflict, the physicist Alan Sokal wrote an intentional hoax article, “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity,” published in the academic journal Social Text. The paper argued for an array of ridiculous claims, such as that gravity is a social construct, that physical reality is subjective, and that quantum physics is linked to postmodernist theory. Sokal sought to expose the “arrogance” of certain studies in social science academia and to prove the absurdity of subjectivist analyses of science. However, Social Text was not a well-known journal, and social science journals are by no means more prone than others to publishing bad research. According to Retraction Watch, a blog that monitors and reports on retractions of academic papers, as of the end of 2020 not one of the ten most cited retracted papers was in the field of social science. In fact, all ten were published in scientific and medical journals, including Andrew Wakefield’s infamous study on the correlation between vaccines and autism.

Is peer review broken? Why are so many fallacious papers published in renowned journals? How often is erroneous research published? Adam Mastroianni joins EconTalk host Russ Roberts to discuss the problems with the peer review process, how academic journals spread misinformation and misuse scholars, and why emergent order is the answer to more accessible, accurate, and comprehensive scientific research. Mastroianni is a psychologist, writer, and Postdoctoral Research Scholar in the Management Division at Columbia Business School.

It’s relatively rare to see an academic hold such a low opinion of peer review. Many scholars criticize peer review and wish to reform the system, but very few argue, as Mastroianni does, for abolishing it entirely. His reasoning for this radical shift is that peer review fails to be what it claims to be and to do what it claims to do. The common perception is that peer review has always been the dominant scientific process, the backbone of most academic research. Mastroianni asserts that peer review is in fact a very new system, a problem because it leads people to believe that science cannot survive without it and causes the system’s flaws to be overlooked.

However, the most impactful misconception revolves around the process of peer review. Mastroianni believes that the review procedure is much less rigorous than it ought to be:

I think probably most people haven’t really thought about it, but if you asked them to, they would go, ‘Well, I assume that when a scientist publishes a paper, it goes out to some experts who check the paper thoroughly and make sure the paper is right…And, all of that is a totally reasonable assumption about how the system works; and it is not at all how the system works. And I think that’s part of the problem.

So, what are the problems with peer review? Most notably, referees don’t check papers in depth, and some papers are rejected for their lack of novelty, not their lack of truth. The focus on novelty over truth may stem from the sheer amount of work needed to adequately check the methodology and results outlined in a paper. This lack of thorough review leaves authors in the dark about what needs to be improved, and readers blind to the potential errors and limitations of a study.

It’s a big undertaking to actually check the results of a paper, which is why it’s virtually never done. Although that is, of course, maybe the single most important thing that this process could do, rather than provide some kind of aesthetic judgment.

So, all that I know is the reviewers said–they didn’t say enough disqualifying things to prevent it from being published in this journal. But, I don’t know if they said, ‘I’m really convinced by this point, but not that point.’ Or, ‘Here’s another alternative explanation that I think warrants inclusion.’ I don’t get to see any of that as a consumer, because generally the reviews disappear forever once the paper is published.

This focus on novelty over truth leads to a proliferation of errors. The referees should be catching these errors, but instead it is left to independent scholars to verify the truth of published data. As Mastroianni says, “it’s always caught after publication.”

Compounding this problem, replication studies that verify or challenge published data tend to be looked down upon, once again because they are not considered interesting; and when replication studies are published, they often fail to reproduce the original results. This further prevents the spread of truth in academic circles and adds uncertainty to published data.

And, again, if you’re not in the kitchen, you wouldn’t realize this: Replicating someone else’s paper is almost worthless historically in the last 50 years of this process. And, if you have suspicions and a result might be true, you think, ‘Well, I’ll go find out. I’ll do it again.’ Well, if you find out that it is true, nobody wants to publish it. There’s nothing new there. You find out it’s not true: maybe it isn’t, maybe it is, but it’s not a prestigious pursuit to verify past papers…and results have been deeply disturbing–how few results replicate.


Mastroianni goes on to explain that this misplaced trust in peer-reviewed journals can be dangerous: when false data is published, people rely on it to justify harmful decisions or beliefs. Two examples are the Estruch et al. paper, which erroneously found an inverse relationship between the Mediterranean diet and cardiovascular disease, and Andrew Wakefield’s infamously fallacious paper on the link between vaccines and autism. These were published in prestigious medical journals, the New England Journal of Medicine and The Lancet, respectively, and had disastrous impacts. Wakefield’s paper in particular has fueled popular, non-scientific concerns over the side effects of vaccines, which reached a boiling point during the COVID-19 pandemic.

And so, an example of this is this whole thing about vaccines causing autism was in large part fueled by a paper in The Lancet–which is an extremely prestigious medical journal–with an N of 18 being like, ‘Hey, there’s some kids who have autism and they also had vaccines.’ It was sort of like the standard of evidence. It’s stamped with the imprimatur of The Lancet. And so, people take it really seriously.

So, why are there so many problems with peer review? Roberts thinks it’s because of the poor incentive structures baked into the institution.

So, as an economist, my summary of your insight about the time it takes is that: just the incentives aren’t there. There’s a certain principle and conscientiousness that’s expected for reviewers, for referees. You don’t get paid very much. Sometimes not at all. And, there’s very little professional gain. You do ingratiate yourself sometimes with an editor, which is pleasant. They will or maybe look favorably, you might hope, on your future submissions when you’re on the other side of the fence, but there’s just not much return to it, so people don’t take it terribly seriously.

How can peer review be improved? Roberts suggests better incentive structures, particularly regarding pay, time, and quality of work; Mastroianni says get rid of it entirely. His solution is to encourage more informal, public, and unfiltered research. He believes this will allow scholars to produce more diverse and transparent research, make replication of data easier, and make data far more accessible to everyone.

What I’d like to do is make a historical claim, which is ‘This is new and weird,’ an empirical claim, which is ‘This doesn’t seem to do the thing that we intend for it to do,’ and then leave it to the diversity of humanity to figure out what to do about that. I have my own answer, which is I feel like I know the way that I do science the best, which is: to write it in the words that I think I should use, to write it for a general audience so anyone can understand it, to include all the data and code and materials so that the very few people who want to open up that code and data and see exactly what I did and how it worked can do that, and then put it out there for anybody to see. And to trust that if what I have to say is interesting and useful to people, that they will tell me. And, they’ll tell me how I think it could be better.

Mastroianni is by no means advocating a system he hasn’t tried himself; his claim to fame is moving his own research away from the peer review system. In fact, Mastroianni doesn’t plan to submit another of his papers to an academic journal. He wrote one of his latest papers in a way no journal would accept, but which was far more digestible and transparent to readers, and it was a raging success.

That being said, many people doubt the efficacy of Mastroianni’s system: without a clear process, who will check the information? In his words, “Other people talked about, you know, ‘If everyone did what you did, we would live in a world of chaos. This is just people saying stuff.’” But Mastroianni responds with the simple belief that people aren’t stupid, and that they can verify claims by examining data sets and discussing the veracity of conclusions.

A particularly interesting point Mastroianni and Roberts draw from the discussion of the problems of peer review and the necessary changes in academic research is that more knowledge is not always better.

More information is better only if you can weigh it properly. Only if you can assess it properly. If you overreact to it, if you are overly confident in the information, the imperfect information you get, you make a different kind of error.

Mastroianni believes the impetus behind this feeling is the view that humanity has reached its ceiling in terms of knowledge, innovation, and development. He takes issue with the idea that humans today are enlightened while those of the past were primitive.

Yeah. I think behind this feeling of ‘I need to get more information and that’ll help me make a better decision,’ especially in science, is this idea that we are at the end of science. That, in the past these were people who were just groping around in the darkness. They had no idea what they were doing.

You need to make decisions and you need to figure out what to do, but this level of certainty that I think that we want is impossible to get. So, you need to come to terms with the fact that mainly we operate in the darkness. And, I think it’s actually very exciting because there’s so much left to do and so much left to discover.

To Mastroianni, liberating the production and spread of knowledge allows humanity to address the significant gaps in our understanding of the world, and of how to improve socio-economic conditions for all of humanity.

Now that I’ve shared my thoughts, we hope you’ll share yours. We’d love to see your responses in the comments, or to receive them via email at econlib@libertyfund.org. Thanks for reading!


1- Roberts brings up Daniel Kahneman’s citation of poor research in his book Thinking, Fast and Slow to discuss the psychological phenomenon of priming. Kahneman later expressed regret for citing “underpowered studies” that could not be replicated.

Could this example be used to support the system of peer review, as he was convinced by better evidence which was also peer reviewed? Or does the fact that Kahneman’s mistake was made in the first place indict peer review regardless?


2- Mastroianni says that peer review is worse than nothing at all because it claims to be something that it actually is not. Can this be extrapolated to almost every institution? For example, take the United States Constitution. The Constitution has promised American citizens that the government will protect their rights, no matter their race, gender, sexuality, gender identity, and so on. However, these promises haven’t always been fulfilled.

Does this mean that Americans have misplaced their trust in the Constitution? Would it be better to have no Constitution or legal framework for the nation? Why or why not?


3- Why is the process of peer review so long and arduous when referees don’t significantly review the methodology of the papers before them? Would better incentive structures encouraging referees to review more quickly, as Roberts suggests, be effective, or would they lead to even more errors in academic papers?


4- How can ordinary people, who don’t have the time to verify which academic papers are true or not, have access to the truth? Given this difficulty in access, how can the spread of anti-intellectualism be halted? How can one verify their beliefs if academic sources can’t be trusted as much as they should be? What’s the line between skepticism and irrational denial of academic research?


5- Where does the fallacious idea that more information is better come from? How has this attitude affected the dissemination of information throughout the 2020s, particularly regarding elections and COVID-19?


[Editor’s Note: This Extra was originally published on August 11, 2023.]


Kevin Lavery is a student at Western Carolina University studying economic analysis and political science and a 2023 Summer Scholar at Liberty Fund.