Jacob Stegenga on Medical Nihilism
Apr 1 2019

Philosopher and author Jacob Stegenga of the University of Cambridge talks about his book Medical Nihilism with EconTalk host Russ Roberts. Stegenga argues that many medical treatments either fail to achieve their intended goals or achieve them only with many negative side effects. The approval process for pharmaceuticals, he contends, exaggerates benefits and underestimates costs; the FDA approves too many drugs that are not sufficiently helpful relative to their side effects. Stegenga argues for a more realistic understanding of what medical practice can and cannot achieve.

RELATED EPISODE
Robin Feldman on Drug Patents, Generics, and Drug Wars
Robin Feldman of the University of California Hastings College of Law and author of Drug Wars talks about her book with EconTalk host Russ Roberts. Feldman explores the various ways that pharmaceutical companies try to reduce competition from generic drugs....
EXPLORE MORE
RELATED EPISODE
John Ioannidis on Statistical Significance, Economics, and Replication
John Ioannidis of Stanford University talks with EconTalk host Russ Roberts about his research on the reliability of published research findings. They discuss Ioannidis's recent study on bias in economics research, meta-analysis, the challenge of small sample analysis, and the...
EXPLORE MORE
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Joe D
Apr 1 2019 at 12:18pm

Man, Paul Ehrlich’s one of the ‘most important’ scientists of the early part of the 20th century… I would disagree…

Lauren Landsburg
Apr 2 2019 at 6:32am

Are you perhaps confusing the Nobel Prize-winning Paul Ehrlich, biochemist, with another scientist of the same name?

Paul Ehrlich, the Nobelist and biochemist who researched and found a way to combat syphilis in the early 1900s, is not the same person as Paul R. Ehrlich (1932-), who has published materials on population growth.

If you disagree that the Nobel Prize winning Paul Ehrlich was one of the most important scientists of the 20th century, I’m open to that objection. But why do you disagree?

Jacob Stegenga
Apr 2 2019 at 7:17am

Lauren, thanks for your response to Joe D’s comment. I hope everyone will agree with you (and me) that the chemist Paul Ehrlich was one of the most important scientists in the early 20th century.

Joe D
Apr 2 2019 at 4:52pm

Yes, I was confusing the two. I thought maybe I had the wrong name and did do a google search first, but I guess I didn’t look hard enough.

Lauren Landsburg
Apr 3 2019 at 4:25am

No sweat, Joe. And thanks for clarifying.

The only reason I wondered was because I almost made the same mistake myself after looking up the name “Paul Ehrlich” online. I was overwhelmed with a page of links to the more currently-popular biology writer who shares the same name with the early 20th century Nobelist. I had the same initial reaction as you: “Huh?” Thanks for highlighting that there is a potential confusion!

Perhaps our very gracious guest, Jacob Stegenga, might consider in the future being extra-clear that there are a few different Paul Ehrlichs around, and he’s talking about the Nobel Prize winner who lived 1854-1915.

Floccina
Apr 1 2019 at 12:28pm

Gives credence to Robin Hanson’s view of medicine.

Maybe patenting in drugs has outlived its usefulness, but I agree with Russ that the other forms of medical care are probably just as bad. In fact, I'd bet, counter to what he says, that surgeries are worse. His torn-ligaments example is evidence of that.

Jacob Stegenga
Apr 2 2019 at 7:18am

Thanks, Floccina, for listening.

Michael Byrnes
Apr 3 2019 at 9:50am

I think it would depend on the surgery. At the very least, some surgeries are a little more of an engineering problem than a biological one.

Smith&Jones
Apr 1 2019 at 12:42pm

Stegenga’s arguments seem disturbingly compelling.

Jacob Stegenga
Apr 2 2019 at 7:18am

Thank you!

Chase Steffensen
Apr 1 2019 at 3:33pm

Today’s episode is a winner. I love the episodes that tread the philosophy of science territory, and this one is exactly that.

Jacob Stegenga
Apr 2 2019 at 7:20am

Chase, thanks so much for your kind comment.

Ben A
Apr 1 2019 at 3:51pm

Stegenga’s arguments do not betray much familiarity with the reality of drug development or the FDA approval process.

Stegenga suggests that safety is a secondary, or largely ignored aspect of FDA approval. This is false. In fact the size of the safety database (patients treated on drug) is a key part of the approval process and is extensively negotiated. Unsurprisingly, the effect size of the drug and the severity of the condition are key determinants of the acceptable safety database. Thus, one can see a very small safety database for a severe, rare disease, and very large databases for drugs with smaller effect size that are largely preventative in effect.
Stegenga’s discussion of primary endpoints simply does not apply to regulatory-directed trials. This discussion is almost entirely uninformed. Academics may indeed shift their primary endpoints from their preregistered plans, but this does not occur in regulatory-directed studies. The FDA receives all endpoints from clinical trials, and the vast majority of pivotal trials have primary statistical analysis plans which have been extensively negotiated. There are exceptions (Exondys 51 for DMD, for example). But I would challenge Stegenga to find a *single* primary care drug approved in the last fifteen years in which the FDA accepted a change to the pre-specified primary endpoint.
Stegenga suggests that many (most?) drugs fail to pass a risk-benefit trade-off because the risks are typically poorly assessed in clinical trials. No doubt the FDA has approved drugs that proved to have severe side effects and were withdrawn. No doubt, too, that pharmaceutical companies have concealed side effects. But I would have preferred less anecdote and more analysis. The top 10 drugs by sales can be found at this link (https://www.biospace.com/article/drumroll-please-top-10-bestselling-drugs-in-the-u-s-/). I would be very eager to hear Stegenga’s case that any of these have negative risk-benefit. Indeed, I think the case for the utility of these drugs (with the possible exception of Avastin) is so overwhelming that I would bet Stegenga $10,000 that if he and I presented evidence to a panel of experts chosen by him, I could get them to support the benefit-risk of these medicines.


Jacob Stegenga
Apr 2 2019 at 7:24am

Ben, thanks for listening, and for your comment. You’re raising important challenges, and I probably passed quickly over these topics in the interview. In the book I treat these issues in more detail than an interview allows. Anyway, thanks for your contribution to the discussion.

Ben A
Apr 2 2019 at 10:50pm

That is a very kind response! I’ll look to the book for the longer treatment. But just to put you on the spot: Do you think any of the top 10 drugs I linked to provide insufficient risk/benefit?

Let me add: many of your general claims have merit. There’s no doubt that identifying rare side effects, or even common side effects with low effect size, is tremendously difficult. (This is one reason why the pharma industry has shifted investment away from these indications.)

But in the specific, I don’t think the medical nihilism case for pharmaceuticals has much application for large effect size drugs. These can sometimes be precise, genetically targeted interventions (like imatinib, which you mention) — but sometimes it’s just something weird and unexpected like thalidomide/lenalidomide. It’s not a classical magic bullet, but it really does seem to work! VEGF inhibition in AMD would be another example.

Dr Golabki
Apr 5 2019 at 5:12pm

I agree with Ben’s caveats, and I think sometimes we imply things are problems of drug development that are really payor problems (e.g., a high price reimbursed for a drug with marginal benefit) or prescriber problems (e.g., drugs being prescribed to patient populations that aren’t supported by the data). I’d also note that I think this problem is MUCH worse for things like surgical procedures, because the standards for approval are much lower.


But in general, I think Jacob is right that drug benefits are overstated and drug risks are understated.


My question is… relative to what? Sure the double blind controlled trial isn’t perfect. But it is a way WAY higher standard than we have in ANY other field that involves studying humans.


So what’s the right standard? Only give patients drugs if we’re highly confident they will work and the benefits will substantially outweigh the risks? That seems reasonable, but…

(A) It’s actually not what patients want. New drugs are inherently risky, and patients are often willing to take that risk. As Ben mentioned, Exondys 51 is a great example of a drug that patients (actually, parents of patients) demanded despite relatively weak evidence of a benefit.

And (B) I think the effect of setting the bar that high would be to effectively end functional progress on human disease. If that’s the standard I don’t think there’s a feasible process to find the next magic bullet. Once you launch a drug you keep learning about it, which helps the next generation of patients.

pyroseed13
Apr 1 2019 at 4:02pm

To be honest, I started listening to this expecting another ill-conceived, conspiracy-minded rant against Big Pharma, but was surprised by the quality of the arguments raised by the guest. As someone who, like most people here, favors FDA reform, this talk did have me questioning how effective some of these proposals would be.

Jacob Stegenga
Apr 2 2019 at 7:25am

Thank you for listening, and for your compliment.

Luke J
Apr 1 2019 at 5:01pm

At some point drs and drug consumers will need to acknowledge the myth of side effects. There are no side effects; there are only effects wanted and effects unwanted.

I cannot comment on the truthfulness of Stegenga’s claims on the FDA, except that his and former Econtalk guest Marcia Angell’s claims have a lot of overlap.

Aside: why are libertarian and classical liberals hostile to natural medicine, specifically homeopathy (which RCTs demolish) but buy into the junk science of the medical “experts” and technocrats?  seems incongruent

Jacob Stegenga
Apr 2 2019 at 7:27am

Luke, thanks for listening, and for your comment.

Kent Lyon
Apr 1 2019 at 7:31pm

Dr. Stegenga makes a couple of statements that are not really defensible. For example, he states that statins in an “at-risk” population (without further elaboration or specification) reduce the risk of heart attacks by 1%. That is highly misleading, so much so as to be almost patently false. In certain primary prevention studies in patients with average LDL cholesterol levels of about 130, there was a risk reduction of 1% A YEAR. Over a long-term period of management, say from age 40 to 75, the risk reduction would be on the order of 30%. Further, in higher-risk populations the risk reduction, including of all-cause mortality in patients treated with statins, is significant. Dr. Stegenga’s statement appears phrased so as to lead one to think that there is minimal to hardly detectable benefit to statin therapy. That is simply not the case. I would suggest that Dr. Roberts link to the 2018 guidelines for cholesterol management promulgated by the ACC and AHA, with primary author Scott Grundy, who has been analyzing studies on the management of cholesterol, and treating patients, for almost half a century. I treat patients with diabetes, a patient population that has a high risk of cardiovascular disease. The current guidelines I mention advise aggressive management of high LDL cholesterol with statins in these patients. Taking Dr. Stegenga’s approach in this circumstance might be dangerous to your health. We have enough trouble getting patients to comply with statin therapy without commentators such as Dr. Stegenga misleading them.

Another statement that needs to be mentioned is Dr. Stegenga’s claim that a meta-analysis of rosiglitazone (Avandia) was performed. I believe he is referring to an article by Dr. Steve Nissen, a cardiologist at the Cleveland Clinic at the time the study was published. That study was published in 2007 in the New England Journal of Medicine. Unfortunately, that study was not a meta-analysis, although there was some attempt to claim that it was. Rather, it was an amalgamation of data from different studies, and did not meet the criteria for a legitimate meta-analysis. If Dr. Stegenga believes that it was a legitimate meta-analysis, he is either being disingenuous or is completely misinformed. Dr. Nissen simply lifted data from several major studies, and lifted data on several small studies performed by GSK from the GSK website. These studies in particular had a much larger treatment group than control group. The statistical method Dr. Nissen used to analyze the data, the Peto method, is invalid when the treatment group is larger than the control group, which was the case with some of the data he used. Data sets from several of the studies he used in his (not meta-)analysis did not include cardiovascular events, and hence should not have been used in his analysis. The Peto method was developed to assess data in large statin trials in which there were few events, but it is not applicable to the data Dr. Nissen claimed to have evaluated. In fact, one of the peer reviewers for the NEJM on the article, Dr. Steven Haffner of UT San Antonio, gave a statement to the press that he had advised the NEJM before publication that the article should not be published, but the NEJM ignored its own peer reviewer. Dr. Haffner was one of the top lipid and diabetes experts in the world (since retired). In the statement he issued to the press, he said the following: “(In publishing this article) the New England Journal of Medicine has become just like a British tabloid, minus the picture of a bare-chested woman on page 3.” It later came out that Dr. Haffner had sent a copy of the manuscript to GSK prior to publication, which he claimed was inadvertent. However, it looked like he was trying to give GSK a heads-up on the article pre-publication.
All of this led to considerable controversy, needless to say. Eventually, the FDA permitted the use of Avandia under restricted circumstances. Further exacerbating the controversy was the fact that the first person to receive a copy of the article, which was rushed into online publication, was Henry Waxman, a congressman who chaired a committee that was at that moment holding hearings on FDA renewal legislation. He wished to provide post-marketing regulatory power for the FDA, which it did not then, nor does it now, have. Avandia and its risk to the heart was to be the poster drug for that authorization, and almost before any physician in the country had read the article, Henry Waxman was on the steps of the US Capitol waving the paper and asserting that this study proved the need for post-marketing police power for the FDA. It appeared that the study was rushed into publication by the NEJM in order to facilitate Congressional power expansion for the FDA; that the whole circumstance was a political set-up. The FDA did not get post-marketing power, but was able to require that all new diabetes drugs be assessed for cardiovascular risk prior to approval. The impression I am left with is that Dr. Stegenga is not a reliable commentator on the subject matter here.

Jacob Stegenga
Apr 2 2019 at 7:38am

Kent, thanks for listening, and for your comment. Just to be clear, when you write “In certain primary prevention studies in patients with average LDL cholesterol levels of about 130 there was a risk reduction of 1% A YEAR. Over a long term period of management, say from age 40 to 75, the risk reduction would be on the order of 30%” — you must be referring to a relative outcome measure, like relative risk reduction? For patients with known heart disease, analysis after analysis shows about a 1.5% absolute risk reduction in mortality over five years of statin use. Now, one question is: when discussing the effectiveness of any medical intervention, should we use relative measures (e.g. relative risk reduction), like you do in your comment, or absolute measures (e.g. absolute risk reduction or conversely, ‘number needed to treat’), like I did in the interview? Along with some of the world’s leading statisticians, I have argued that we should only be relying on absolute measures. Those arguments can be found in my book and articles on my website. Anyway, thanks again for your contribution to the discussion, and I’d be delighted to hear from you offline on what you think of the arguments about outcome measures.

Kent Lyon
Apr 2 2019 at 12:39pm

The difference is that I treat individual patients, not the large populations considered by the statisticians. I am not a big fan of deciding to do something based on statistics emanating from large studies of patients who don’t necessarily fit the circumstances of the patients I am treating. When I see a 60-year-old male with a 20-year history of type 2 diabetes, with a TG level of 1000, HDL of 25, and LDL of 200 (e.g., in a range consistent with heterozygous familial hypercholesterolemia), who also has proteinuria (diabetic nephropathy), hypertension, obesity, a sedentary lifestyle, a smoking history, and a strong family history of heart disease, the global population statistics are not immediately relevant. If I can reduce the relative risk of a heart attack in such a patient from above 30% to under 5% over the next 10 years, I would judge that to be a reasonable undertaking, given the costs, risks, and benefits of statins (sometimes along with ezetimibe or, in some cases, even a PCSK-9 inhibitor). Certainly lifestyle changes are extraordinarily important. We try, but all too often have limited success with those. Unfortunately, for the most part, my patients do not agree with taking statins, having read extensively on the internet that statin use is a conspiracy of the drug companies and the medical/industrial complex against their better interests. I am far more interested in the viewpoint of Scott Grundy than all the academic statisticians who have never treated a patient.
After 40 years of doing this, and having seen countless patients wind up in ERs DOA from massive coronaries, or admitted directly to the bypass suite acutely, particularly when we did not have statins available (yes, I’ve been at this so long my career began before the availability of statins, and I would consider statins the closest thing to a silver bullet we have in cardiovascular risk reduction in diabetes management), I give less attention to the type of statistics you cite than to what I can do for the patient in front of me. If this sounds like a reliance on experts, so be it. We have, as they say, three types of medical practice: evidence-based medicine (we have mostly very poor evidence of anything; as Santayana said, “Our knowledge is a torch of smoky pine that lights the pathway but one step ahead, across a void of mystery and dread”); eminence-based medicine; and faith-based medicine. Admittedly, most of what I do is faith-based. But medicine is more art than science; it always has been, and will continue to be, at least for the rest of my career. Controversies over absolute vs. relative risk reduction become, in the confines of the exam room, something akin to arguing about angels dancing on pins. Too cynical? Maybe I should think about retirement.

Ben A
Apr 3 2019 at 7:44am

For what it’s worth, I also gasped at this way of describing statin benefit. Almost any preventative measure will look terrible in terms of absolute risk reduction. If the baseline risk of an event is 2%, then a great intervention which cuts the risk in half (hazard ratio 0.50) will “only” reduce risk by 1%. But that’s still a great effect size! I should add that almost any vaccine will look *terrible* on this measure. But they are great interventions. Of course, it is important to compare like to like: if you want to look only at absolute risk reduction, you should also look at absolute AE rates. And of course for statins (and for vaccines) these are exceptionally low; much, much, much lower than 1% for serious AEs.

Again, maybe a direct question would be useful. Professor Stegenga, do you think statins as a class lack risk benefit and should not have been approved? Or do you believe they are over-prescribed for patients outside of high risk groups? Or something else?
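[Editor's note] The arithmetic behind the comments above (relative vs. absolute risk reduction, and NNT) can be sketched in a few lines. The numbers are the hypothetical ones from the comment (baseline risk 2%, treatment cutting risk in half), not real trial data:

```python
# Hypothetical numbers from the comment above, not real trial data:
# baseline event risk 2%, treatment cuts the risk in half.

def risk_measures(baseline_risk, treated_risk):
    """Return (relative risk reduction, absolute risk reduction, NNT)."""
    arr = baseline_risk - treated_risk   # absolute risk reduction
    rrr = arr / baseline_risk            # relative risk reduction
    nnt = 1 / arr                        # number needed to treat
    return rrr, arr, nnt

rrr, arr, nnt = risk_measures(0.02, 0.01)
print(f"RRR = {rrr:.0%}, ARR = {arr:.1%}, NNT = {nnt:.0f}")
# A 50% relative reduction is only about a 1-percentage-point absolute
# reduction, i.e. roughly 100 patients treated to prevent one event.
```

The same intervention can thus be described as a "50% risk reduction" or a "1% risk reduction" depending on which measure is quoted, which is exactly the framing dispute in this thread.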


Todd Kreider
Apr 2 2019 at 10:34am

1) Russ Roberts said that Stegenga wrote little about cancer in his book, but in his response Stegenga briefly mentions his friend’s book on cancer and moves on. There was no mention of the enormous potential of immunotherapy, for which two cancer researchers won the Nobel Prize in Physiology or Medicine last year. One of the winners, James Allison, has said he expects much more progress with several tumor types in the next five years.

2) There was also no mention of stem cell therapies, which are still in trials but are expected to be a cure for heart failure within two to five years. Stem cell therapies for stroke patients also look promising, and a three-to-five-year timeline looks possible there as well.

3) Less widely known, so it’s understandable that Stegenga wouldn’t know of it: supplementation with NR, a vitamin B3 derivative, increases NAD+ levels in cells and is in 25 trials for heart failure, cardiovascular health, kidney failure, obesity, and dementia, among others. One recent study showed that NR with pterostilbene improved ALS patients an average of 4 points on a 40-point scale after four months and for one year, whereas the latest FDA-approved drug, in 2017, only slowed the slide in ALS patients’ health by 30% over four months. (The supplements would cost $2,000 a year, whereas the drug costs $140,000 per year.)

Doug Iliff
Apr 2 2019 at 2:14pm

Combined with other useful and informative episodes listed in related podcasts above, the public might get the impression that outside of a few Ehrlichean bullets, we might just be better off avoiding physicians altogether.  Medical nihilism began, in my career, with the publication of Ivan Illich’s Medical Nemesis.  There was plenty of truth there, too, and yet: is the sense of proportion correct?  Could this be a problem of forest and trees, of wheat and chaff, of baby and bathwater?  Contrarianism is always sexier than conventionality, which boosts sales, but perspective is necessary.

The big four challenges in my practice are hyperlipidemia, adult onset diabetes, hypertension, and emotional disorders.  All of these are complex, multifactorial processes making research a minefield.  Add in a changing substrate– the progressive expansion of waistlines, reduction of weightbearing exercise, and absorption in social media– and there is plenty of evidence for nihilism, especially considering mounting examples of sloppy research driven by Pharma greed, grantsmanship, and the drive for academic advancement.  Dr. Stegenga has described these factors well.  And yet…

As an orderly in my first year of medical school I watched a man in his mid-thirties die of malignant hypertension.  All we had to prescribe was diuretics.  That has never happened again in my career.  Not once.

In residency I sat up all night watching for arrythmias with patients who had overdosed on tricyclic antidepressants– which seemed virtually worthless for treating their pain.  Then Prozac came along, and it proved both safe and relatively effective; maybe only 20% better than placebo, but a significant improvement in quality of life for many patients.

All we used to have for (then, relative rare) adult onset diabetes was sulfonylureas, which flogged an already stressed pancreas to produce more insulin to overcome insulin resistance.  The result was a rapid exhaustion of islet cells, and the need for insulin to survive.  Now I’ve had type 2 diabetics on a variety of oral agents for over 20 years.

And the ravages of atherosclerosis in the face of American affluenza: how do relative skeptics like me and nihilists like Jacob explain the dramatic reduction of deaths from cardiovascular complications without crediting statins?  The number needed to treat for statins doesn’t look very impressive, but we can forget that absolute risk reduction by medication is dependent on time, and studies are almost never extended beyond 3 to 5 years.  Arterial damage starts with fatty streaks as early as the 20s, as we have known for a long time from Korean War autopsies, and yet statin studies start in middle age and run a few years; it would seem to make more sense to treat a disease which progresses over decades, slowly damaging circulation in the brain, heart, kidneys, eyes, and legs, as early as possible.  But no organization is going to pay for an experiment which would run for 40 years in order to generate an impressively low NNT.

Having practiced a long time, I am profoundly grateful for the pharmaceutical research which has resulted in relatively safe, pretty effective, and now very cheap drugs.  At the same time, emphasis on evidence-based medicine has given us better perspective on dead-end treatments.  There are still great challenges– me-too drugs still on patent, fabulously expensive treatments prolonging quality life by only weeks or months, and the continuing problem of shysters in the research industry.  But let’s keep our perspective.
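[Editor's note] The point above, that absolute risk reduction (and hence NNT) depends on how long a trial runs, can be illustrated with a toy calculation. The annual risks below are invented for illustration (not statin trial data), and the model naively assumes a constant, independent yearly risk:

```python
# Toy model: constant annual event risk, independent across years.
# 1.0% per year untreated vs. 0.7% per year treated (invented numbers).

def cumulative_risk(annual_risk, years):
    """Probability of at least one event over the horizon."""
    return 1 - (1 - annual_risk) ** years

for years in (5, 20, 40):
    arr = cumulative_risk(0.010, years) - cumulative_risk(0.007, years)
    print(f"{years:2d} years: ARR = {arr:.1%}, NNT = {1 / arr:.0f}")
# The absolute risk reduction grows with the horizon, so the NNT
# shrinks: a 3-5 year trial mechanically understates the long-run
# benefit under these assumptions.
```

Under this simple model, a benefit that looks marginal over a five-year trial window becomes much larger over a forty-year treatment horizon, which is exactly why no short trial can generate an impressively low NNT for a slowly progressing disease.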


Michael McEvoy
Apr 4 2019 at 1:45pm

Doug, you said everything I wanted to say as I listened to this show. I am a primary care doc who is neither a Big Pharma shill nor a “Pharma nihilist.”

It also occurred to me that Dr Stegenga did not comment on how many substances are tried as drugs and immediately fail in early phases of drug development precisely because of safety issues.

Russ and Dr. S, thanks for a great discussion.

Michael McEvoy
Apr 4 2019 at 1:52pm

“Ehrlichean bullets” …..

Hilarious!!!

Marilyne Tolle
Apr 2 2019 at 2:59pm

It is consoling to listen to Jacob Stegenga’s nuanced approach to non-intervention.

My husband’s tragic experience with the UK National Health Service (NHS) illustrates why the decision not to intervene should not reflect a principled stance, but should be based on a case-by-case, cost-benefit analysis.

My husband died of appendix cancer last summer (yes, appendix cancer). Back in the fall of 2010, he presented with acute appendicitis twice, once on a Friday in September and once on a Sunday in November. I took him to A&E (Accident & Emergency = ER) as he was in very severe pain. We waited for several hours on both occasions and were sent home after doing some blood tests. We learned years later that the hospital’s imaging center closed on Friday evenings and reopened on Monday mornings, which explains why the intern on duty didn’t perform a scan on either occasion.

My husband self-medicated with aspirin, which worked, being an anti-inflammatory (appendicitis is after all an inflammation of the appendix). He went to see his GP (NHS General Practitioner = physician) after each episode and was told “You probably have “grumbling” appendicitis. You seem to manage it with aspirin. We don’t remove the appendix anymore. Anyway, there’s no point going back to the hospital now because they won’t operate on you unless you’re in crisis.”

Nothing happened for three years. Then in May 2013, my husband had another bout of acute appendicitis, again on a Friday night. Knowing there was no point going to A&E, he gritted his teeth and stayed in bed for the whole weekend, taking very high doses of aspirin. We went to A&E on the Monday morning. The imaging center being open, they finally did the scan they should have done three years earlier, and advised that the appendix had to be removed immediately because there was a risk of peritonitis (burst appendix, leading to sepsis and possible death). This risk had been there all along of course.

They removed the appendix and sent it for a routine biopsy. Three weeks later, my husband was called in for a consultation. He was told there was a three-inch tumour bursting out of his appendix and that he had a “very poor prognosis”. Thank you guys.

The following five years were an ordeal of repeated surgeries and horrific complications, on top of the chemotherapy. My husband spent his final year on TPN (intravenous feeding) as the cancer strangled his gastro-intestinal tract and he could no longer eat nor drink.

All this because his appendix was not removed in 2010. Appendicectomies are routine laparoscopic procedures which incur very few risks. By contrast, the risk of untreated appendicitis is peritonitis, which can be lethal.

So while I’m not advocating that people should have their appendix removed preemptively (indeed, I still have my appendix), someone who presents with acute appendicitis should have his appendix removed (note “acute” – the GP’s diagnosis of “grumbling” or chronic appendicitis was wrong; my husband’s blood tests showed that his CRP, a marker of acute inflammation, was above 200, when it should be below 1 in a healthy patient).

My husband’s death was due to a tragic concurrence of errors, negligence and bad luck. I put the NHS medical staff’s non-interventionist approach to patient care high on the list. To me it reflects not just a paucity of (financial and human) resources (itself endemic to socialised health-care) but also a non-interventionist ethos of “let nature do its thing” (I’ve experienced it myself first-hand on separate occasions).

The bottom line is that when life and death are at stake, the decision to intervene or not should be case-dependent.

Michael McEvoy
Apr 4 2019 at 1:50pm

Thank you for sharing what must be a painful story to recall. It serves as a cautionary tale to us doctors.

aldo fantin
Apr 3 2019 at 12:42pm


I am a bit concerned that some listeners may come away with a Post Modern impression of what medical care is: that it is “so complicated that nobody can understand it and any opinion, no matter how bizarre, has some plausibility.” The idea that we need to keep developing magic bullets in areas where we have a good track record, such as antibiotics, is ridiculous. Medicine is an emergent phenomenon in which all the stakeholders (patients and their families, medical personnel, scientists, and even drug and device companies) have motivation and incentives to use, try, or develop treatment modalities for every single ailment that may hinder the full potential of a human.

Laura M Miller
Apr 4 2019 at 9:22pm

Really really good episode.  I will say that FDA approvals appear to be on a cycle like hemlines, sometimes easier and other times more difficult.  Unfortunately politics has been injected as well, for example in the case of Plan B.  Much appreciate this.

Stephen Grist
Apr 7 2019 at 8:52am

Excellent episode! I don’t see a future where this view becomes standard, due to ignorance/lack of understanding by the general public about the nature of complex systems. This is especially challenging given the understanding of medicine as a “customer service” industry and the fact that firms that develop interventions can market directly to consumers. Even if most medical providers have this understanding, patients will often just switch to another who will give them what they want. So in the end, just more nihilism.

I’m not seeing hard copies of the book available for purchase on Amazon or Oxford University Press; is there another location where the book is currently available?


Mark
Apr 8 2019 at 11:12am

Thank you for the excellent episode.

As a radiologist, it often feels like I have a front row seat to medical excess, or at least the lack of frank discussion of costs versus benefits of medical care.  As an example, every day, I read high end imaging studies on elderly demented patients who are brought in via ambulance for either “fall” or “change in mental status” (which sort of defines dementia).  The societal cost/benefits are never considered in these patients whose costs are paid for by Medicare/Medicaid.  I’ve often wondered if we should frame these costs as evaluating these patients versus other potential uses of the resources, i.e. providing school lunches for poor children.

Finally, this serves as a metaphor for government interventions, where the benefits are often concentrated, readily visible, and usually not as great as their proponents would have us believe, while the costs are dispersed and often greater than acknowledged.

Satkirath
Apr 8 2019 at 10:56pm

Great episode as usual. I was struck by Russ’s question regarding epistemic vs. technical limitations, his invocation of F.A. Hayek, and the analogizing of the human body to the macroeconomy. Hayek talks about this somewhat in his Law, Legislation and Liberty. Both systems (the body and the macroeconomy) form as spontaneous orders, but they are not perfectly comparable. Hayek looks at the etymologies of “organism” and “organisation” and actually goes on to lament the use of the organism as an analogy for society:

“It was natural that the organismal analogy should have been used since ancient times to describe the spontaneous order of society, since organisms were the only kinds of spontaneous order with which everybody was familiar.”
He continues: “The interpretation of society as an organism has almost invariably been used in support of hierarchic and authoritarian views to which the more general conception of the spontaneous order gives no support.”

Hayek’s reasoning for these views is that organisms are a special kind of spontaneous order with discrete and concrete properties that are not present in a societal order: “The chief peculiarity of organisms which distinguishes them from orders of society is that in an organism most of the individual elements occupy fixed places which, at least once the organism is mature, they retain once and for all.”

Societal spontaneous orders are far more complex in that regard; the human body has fixed elements that operate in regular and predictable ways, which has allowed science to make much more progress in the medical field than in the economic one (which in turn forms a major motivation for Hayek’s condemnation of scientific positivism in economics, or “scientism”). By contrast, macro variables in the economy are always changing, and persistent, reliable regularities are very rare (Lucas critique, much?).

From all this, my two cents is that the epistemic limitation is less constraining in medicine than in economics. I think continual progress is possible. Jacob Stegenga makes a lot of serious and compelling points against medical intervention, and I don’t disagree with much of what he says. But then again, progress has been made; life expectancy and infant mortality rates have both improved dramatically since the start of the 20th century. Many diseases that used to afflict us no longer do, thanks to (silver bullet?) vaccines and inoculations. All thanks to science.

Economics, however, has made comparably less progress. I think the following from Richard Epstein is appropriate here:
“Although science is capable of linear advancement, the same is not true of law, where the same insights and mistakes tend to recur again and again.” Here, Epstein is referring to the law but the same can be said about economics.

In sum, I wouldn’t be uncritically optimistic about the powers of science and the scientific method. But at the same time, if we acknowledge both its utility and its limitations, and clearly demarcate its domain of possibility and applicability, then I think real progress can be made in medicine.

Eduardo Alvarez
Apr 14 2019 at 3:37pm

As heard first on EconTalk: the problems of “female Viagra” and alcohol.

https://medicalxpress.com/news/2019-04-fda-alcohol-female-viagra.html

It is wrong to call it a female Viagra: men have desire without the capability; the women this drug targets do not have desire.

On the other hand, alcohol has for millennia been the original female Viagra.

Arde
Apr 26 2019 at 4:30am

Thank you, Jacob Stegenga, for drawing attention to the harms and risks of drugs, which are often underestimated.

The case of Thalidomide illustrates what can happen if the harms of a drug are not properly researched. Thousands of babies died or were born without arms or legs, or with deformed eyes and hearts, because their mothers were taking Thalidomide during pregnancy.

https://en.wikipedia.org/wiki/Thalidomide

 

Comments are closed.


DELVE DEEPER

EconTalk Extra, conversation starters for this podcast episode:

This week's guest:

This week's focus:

  • Medical Nihilism, by Jacob Stegenga on Amazon.com.
  • Economic Concepts: cost-benefit analysis, healthcare, confirmation bias.

Additional ideas and people mentioned in this podcast episode:

A few more readings and background resources:

  • For Better or Worse. EconTalk Extra. Complementary questions for further thought and discussion on this episode.
  • Paul Ehrlich. Biography, Science History Institute. [N.B. There is more than one person of renown with this same name. The Nobel Prize winning Paul Ehrlich discussed in this podcast episode lived 1854-1915 and did foundational research in finding a cure for syphilis and founding what has become modern chemotherapy.--Econlib Ed.] See also Paul Ehrlich at Wikipedia.
  • Benefit-Cost Analysis, by Paul R. Portney. Concise Encyclopedia of Economics.
  • Pharmaceuticals Economics and Regulation, by Charles L. Hooper. Concise Encyclopedia of Economics.
  • Drug Lag, by Daniel Henninger. Concise Encyclopedia of Economics.

A few more EconTalk podcast episodes:


AUDIO TRANSCRIPT
Time: Podcast Episode Highlights
0:33

Intro. [Recording date: February 26, 2019.]

Russ Roberts: My guest is Jacob Stegenga.... His latest book, which is the subject of today's conversation, is Medical Nihilism.... Now, this is an utterly fascinating book that begins with what seems like an essentially untenable claim that can't be true, and then relentlessly makes the case for that claim, so that by the end of the book you wonder if it is true. And I have to confess--as listeners will discover and recognize--that I'm sympathetic to some of the arguments in the book. Many of them, in fact. But I'm surprised at how far you got me to come along with you, Jacob. So, let's start with what you mean by this rather daunting term, 'medical nihilism'.

Jacob Stegenga: Sure. So, medical nihilism--medical nihilism [pronunciations: medical nee-hilism or medical nai-hilism] is the term that I'm referring to, to summarize the overall argument of the book. So, the book is constituted by many kind of smaller level arguments, in each chapter. But, the overall argument, I'm referring to as medical nihilism. And, the conclusion of this argument is that we ought to have low confidence in the effectiveness of medical interventions. So, it's a skeptical thesis about how confident we should be in modern medical interventions.

Russ Roberts: Well, I'd say 'skeptical' is not the right word. I would say, at least, 'highly skeptical'.

Jacob Stegenga: Fair enough. Yeah. It's a very pronounced form of skepticism. It runs deep. It's meant to--

Russ Roberts: like most medical interventions are a bad idea. That's the way I would--or a surprisingly large number, are a bad idea, is the way I would describe it.

Jacob Stegenga: Right. That's a fair description. Yeah.

Russ Roberts: So, that seems to be silly. You concede early on that through most of history this was clearly true. Many of the cures and interventions of the past--ingesting mercury, bloodletting, and other things--didn't work; didn't improve the patient; in fact often were dangerous and harmful on net. And yet you admit that most people would say, 'That was then. This is now.' And, of course, in the last 50 or 60 years--and even a little past that, maybe going back to the 1920s in America and the world--you'd say, 'Since then, we've discovered science; and the Enlightenment and the scientific method have given us many, many great and glorious health improvements. And doctors are to be revered, adored, as well as the people who create the devices and pills that we take and attach to ourselves and deal with.' And yet, you argue that even most of the modern ones are not so good. So, first you should probably make--you do concede there are a few, what you call, magic bullets. So, why don't you talk about what a magic bullet is and the three that you highlight in the book; and then why you think there are so few after that.

Jacob Stegenga: Sure. Yeah. There's a lot packed into your question there. It's a really good summary of part of the motivation of the book. So, you're gesturing towards what I call in the book the 'Today is different' response to medical nihilism. So, the idea is we have modern science, we have strict regulation, we have effective pharmaceuticals, so this skeptical thesis is just nowhere near as compelling as it would have been in, say, the 18th century. And, so, part of the argumentative burden of the book is to dispel the persuasiveness of some of those premises in the 'Today is different' argument. Also, as you noted, the thesis is not the kind of audacious, radical claim that there's not a single effective medical intervention. Of course there are. I refer to the very best medical interventions as 'magic bullets.' So, a magic bullet is an intervention which targets the pathophysiological basis of a disease with high specificity and high potency. The term 'magic bullet' comes from the chemist Paul Ehrlich, one of the most important scientists in the early part of the 20th century. He was looking for a cure for syphilis. And the treatment at the time was mercury. So, he was referring to this need for a chemical to bind to the bacterium that caused syphilis--which had recently been discovered thanks to the germ theory of disease. So, he wanted a chemical that would bind to this bacterium, kill it, and only interfere with that bacterium and not the rest of our normal physiology. So, that's where the term comes from. He and one of his colleagues, Sahachiro Hata, ended up finding a chemical with this kind of specificity; and some people call this the first modern antibiotic. And it was later improved on by penicillin. So, antibiotics like penicillin are magic bullets. They target disease entities with high potency and high specificity. The other example of a magic bullet in the book is insulin for Type 1 diabetes.
So, the treatment for Type 1 diabetes until 1920 was starvation therapy. So, children who were born with Type 1 diabetes would be starved into a coma, and they would live until maybe the age of 15 or 16 and then they would die. When Banting and Best discovered insulin as an intervention for diabetes--they developed an animal model of diabetes, diabetes in dogs--they discovered that you could modulate, radically reverse, the symptoms of Type 1 diabetes using insulin. They just walked across the street to one of these wards with comatose children who had Type 1 diabetes and just started jabbing the kids with insulin. And the kids woke up out of their comas. So, it's a magic bullet. Now, penicillin and insulin aren't perfect. I mean, bacteria develop resistance to penicillin; some people have allergies to penicillin and other antibiotics. The dosing of insulin has to be very, very careful for diabetics. But nevertheless they are pretty miraculous drugs. They either eliminate the disease entity altogether--in the case of antibiotics--or, in the case of drugs like insulin, they really effectively manage the symptoms of the disease without curing the disease.

7:59

Russ Roberts: A part that was so interesting to me, and I learned a lot from the book: there's a large class of pharmaceutical interventions that I would say after reading your book fall into two categories, broadly--the non-magic bullet categories. One is that they just don't work: They might affect some measure of health, like cholesterol level, but they don't necessarily reduce heart attacks, which is what we of course actually care about. So, there's ineffective drugs that seem to perhaps help but ultimately we find, don't. The second group, which is really interesting conceptually are pharmaceutical interventions, drugs that aren't specific. They, because of the complexity of disease, the attempts to cure the bad part leads to too many other things going on at the same time that can't be isolated. So, talk about both of those and help us understand the role of, certainly of complexity and the human body in the second case, because it mirrors the way I think about the macroeconomy and attempts to "cure it" in economic policy.

Jacob Stegenga: Oh, right, yeah. That's an insightful point. I think there are a lot of physical and conceptual similarities between trying to intervene on a complex physiological system and trying to intervene on a complex social system. So, to articulate one of the arguments in the book: those interventions that aren't magic bullets--what is it about these interventions that makes them not magical? What is it about them such that they fail to live up to the standard that insulin and penicillin set? Just as an aside, I wouldn't necessarily want to say that a drug that's not a magic bullet isn't useful at all.

Russ Roberts: Excellent point.

Jacob Stegenga: And certainly some listeners to your podcast and some readers of the book will say, 'Wait a second. Statins might be useful.' The empirical evidence shows that statins can lower the risk of heart attacks by a small amount. Say, 1% in an at-risk population. One percent is better than 0%, so there's certainly some utility to statins. Now, the response to that is that that kind of effectiveness--a 1% reduction in the risk of a heart attack--is a completely different order of magnitude than the effectiveness of insulin and penicillin. Okay, so with that caveat aside, let me answer your question. There are two general kinds of physical reasons for an intervention failing to be a magic bullet. One has to do with the complexity of the target system--as you said. So, many disease entities that we're trying to intervene on have a radically complex causal basis. So, intervening on one node or one causal chain in this massively complicated causal nexus won't lead to the kinds of outcomes that we want, because the causal network can just be robust against external perturbations. Many diseases are like this--the pathophysiological basis of heart disease, or of pretty much all psychiatric diseases, is radically complex. So, that's about the complexity of the disease states. Another reason why many interventions fail to have the specificity or potency that we want is because of the ways in which drugs work on our body. So, drugs work as ligands. A ligand is something that binds to a receptor and changes the way that receptor works in our body. It turns out that there's a one-to-many relationship between ligands--most ligands, most drugs--and receptors. So, a single drug can bind to multiple receptors. It turns out also that there's a one-to-many relationship between activated receptor and chemical pathway. So, if you turn up or turn down one receptor, that can modulate multiple biochemical pathways.
And also there's a one-to-many relationship between activated biochemical pathway and physiological effects, depending on which organ or tissue the pathway is in. So, there's this like cascading complexity of effectiveness from consumption of drug to physiological effect. So, for these two physical reasons--the complexity of diseases and the complex ways in which drugs modulate our physiology, most drugs aren't magic bullets.
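[Editor's aside: one standard way to make concrete why a 1% absolute risk reduction, like the statin figure mentioned above, is "a completely different order of magnitude" than a magic bullet is the number needed to treat (NNT), the reciprocal of the absolute risk reduction. The sketch below uses hypothetical illustrative numbers, not figures from the episode or the book.]

```python
def number_needed_to_treat(control_risk: float, treated_risk: float) -> float:
    """NNT = 1 / absolute risk reduction (ARR = control_risk - treated_risk)."""
    arr = control_risk - treated_risk
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1.0 / arr

# Statin-like drug (hypothetical): heart-attack risk falls from 3% to 2%,
# an absolute risk reduction of 1 percentage point.
print(round(number_needed_to_treat(0.03, 0.02)))  # ~100 treated per event avoided

# Magic-bullet-like drug (hypothetical): mortality falls from 30% to 1%.
print(round(number_needed_to_treat(0.30, 0.01)))  # ~3 treated per event avoided
```

On these illustrative numbers, roughly 100 people must take the statin-like drug to prevent one heart attack, versus roughly 3 for the magic-bullet-like drug: the same "it works" claim, two very different magnitudes.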

13:10

Russ Roberts: The economist F.A. Hayek said that the curious task of economics is to demonstrate to men how little they really understand about what they imagine they can design--a quote listeners are familiar with. Is it conceivable that some of these cascades of complexity will be better understood in the future? And, that our pharmaceutical interventions will be more successful? Or is there a certain level of complexity in the human body that you think cannot be overcome for some of these problems?

Jacob Stegenga: This is the question to ask, I think, in response to the arguments that I put forward in the book. So, there's a certain ambiguity in the thesis of medical nihilism. To put it in philosophers' terms, the thesis can be either an epistemological thesis or a metaphysical thesis. The epistemological thesis is: Our methods of science as they are today just aren't good enough for us to get what we want. The metaphysical thesis is stronger. It says: The way our bodies are, and the way the medical interventions work on our bodies is just physically such that magic bullets will be out of reach, in principle, for many diseases. I, myself, sit on the fence between these two positions. But let me try to say a few words about how the development of science could possibly proceed such that we get more and more magic bullets in the future. One obvious way is just to pursue more research for diseases that we have a track record of finding magic bullets for. So, if we go back to the penicillin and insulin case, we can conceive of these really broadly as diseases of deficiency. Like, 'There's not enough insulin in your body, so put more in.' Or, scurvy is like, 'There's not enough Vitamin C in your body, so just put some Vitamin C in your body.' So those are diseases of deficiency. And diseases of infection are cases where there's something in your body that shouldn't be there. And so antibiotics work by just getting rid of those things. So, those are pretty basic physical systems that we can intervene on. And so, if we want more magic bullets, we could continue to develop those kinds of interventions. And I think that the most promising and most important line of medical research for the future will be to develop more antibiotics. In part because of the development of antibiotic resistance. So, we really must have in our arsenal more and more antibiotics, for the future. Okay. 
Another way in which we could develop our science so that we are able to develop more and more magic bullets is to learn more about the physiological basis of what I'm calling complex diseases. So, it could be that, say, depression, the way we're talking about depression now is it's a complex disease. But that might just be a way to mask--

Russ Roberts: ignorance--

Jacob Stegenga: the real nature of the disease. Exactly. It might be a way to mask ignorance. So, it could just be that depression is not one kind of disease, but maybe a hundred kinds of disease--a hundred-some sub-types of the disease. So, the reason why SSRIs [selective serotonin reuptake inhibitors] fail to be effective--

Russ Roberts: and those are?

Jacob Stegenga: Selective Serotonin Reuptake Inhibitors--the major class of antidepressants that we use. So, the reason why antidepressants might essentially fail to be clinically significant now is that we are using, you know, a handful of drugs to try to intervene on a hundred different subtypes of depression. But as science progresses and we are able to sub-type these kinds of depressions, we'll be able to tailor drugs to those subtypes. And that's one of the promises of personalized medicine. Personalized medicine is supposed to be: getting a bunch of big data, learning more about the physical basis of diseases, and then looking for interventions to target those physical bases. Now, whether or not you are persuaded by the promise of personalized medicine in a sense goes beyond the current empirical facts. So, some people are cup-half-empty. Some people are cup-half-full. You might be optimistic about what the future of science will bring to medicine. Or you might be more or less pessimistic. And I don't have an argument to sway you one way or the other, if you are kind of inherently optimistic or inherently pessimistic.

Russ Roberts: In Hayek's 1974 Nobel Prize address, "The Pretense of Knowledge," he suggested we will never acquire the level of knowledge that would allow us to intervene successfully in the macroeconomy. Many economists disagree with that. And we'll put a link up to that speech. I heartily recommend it for skeptics everywhere.

18:17

Russ Roberts: But, coming back to this question of--I want to ask two things about what you just said. Let's start with this, because it's a general issue that runs through, I think, some of your claims--a problem with some of your claims. So, many, many interventions--you mentioned SSRIs or anti-depressants--people would say, 'Okay, they don't show up in clinical trials. But for me, it's fabulous.' Recognizing that for some people it makes them more depressed--people, I think, recognize that. But for many people, once they get the right cocktail or the right drug, they find they are much more capable of getting along in the world. And they would argue--and they have in the past when this kind of issue has come up on the program--they would say: 'You are dangerous, Jacob, because you are discouraging something that is lifesaving for some people. Not everybody--okay, we agree with that. But it's made so many people's lives better.' And, of course, many psychiatrists today are not doing cognitive behavioral therapy. They are dispensing drugs. That's their overwhelming practice. And they think they are doing God's work. They think they are saving lives and making people's lives better. And if they don't, they just need to tweak it or find a better variation. So, how do you respond to that?

Jacob Stegenga: Good; this is a very important question, and there is a lot that can be said about it. So, in general it raises the following question: What kinds of evidence should we be appealing to when we judge the benefits and harms of medical interventions? In evidence-based medicine, there's been a very, you know, powerful movement in medical research to move towards promoting certain kinds of evidence and downgrading other kinds of evidence. So, the gold standard in evidence-based medicine today is the randomized control trial [RCT]; and meta-analyses of randomized control trials. So meta-analysis is like a bringing together of results from all of the available trials. And, evidence-based medicine did this for good reason. So, the way in which we made causal inferences about the benefits and harms of medical interventions before evidence-based medicine was to appeal to things like expert opinion, background theoretical knowledge--

Russ Roberts: patient[?]--

Jacob Stegenga: anecdotes--yeah, exactly. Case reports. And, the community--statisticians, epidemiologists, regulators--recognized that these forms of evidence were shot through with biases. And so, as medical research progressed through the 20th century and now into the 21st century, the methods for testing the benefits and harms of drugs got better and better and better. Insofar as they controlled for more and more of these biases. Okay. Now, what about 1st person reports? What about 1st person anecdotes? Like, 'This drug worked for me.' Or, 'This drug worked for a good friend of mine,' or 'a patient of mine'?

Russ Roberts: My patients. Yeah.

Jacob Stegenga: My patients. So, what are we supposed to say about these kinds of cases? The short answer is we should approach first-person reports with a huge amount of cautionary skepticism. And this is for three fundamental reasons that all work together. The first reason is that diseases have a natural course of progression. That is, they have a kind of a life of their own. So, symptoms get better and worse over time for many diseases. Some diseases have a natural course of progression in which the symptoms gradually decrease until they are gone. This is, for instance, illustrated by the common cold. Some diseases fluctuate, with symptoms varying[?] over time. So, for instance, bipolar disorder, or depression--symptoms are worse at some times, better at other times. And, people tend to seek treatment from their physician when their symptoms are especially bad. Now, if you seek treatment when your symptoms are especially bad, then merely the passage of time alone entails that your symptoms will get better in the future--for these diseases that have a fluctuating severity of symptoms or a gradually decreasing severity of symptoms. So that's problem Number One: the natural course of disease. Problem Number Two is the infamous placebo effect. So, the placebo effect is when the expectation that you'll get better because you received treatment from a health care professional in fact causes you to get better: not via the biochemical activity of the drug that you've consumed, but via some sort of mysterious psychological phenomenon that we don't actually understand very well at this point. So, that's Problem Number Two: the placebo effect. Problem Number Three is a well-known fallacy of reasoning that philosophers call 'confirmation bias.'

Russ Roberts: Yeah, 'the narrative fallacy,' also.

Jacob Stegenga: Is that another word for it?

Russ Roberts: Yeah, it is: You tell yourself a story, and then everything fits the story. It's a version of confirmation bias.

Jacob Stegenga: Exactly. Yeah. So, confirmation bias in general is paying more attention to evidence that confirms your beliefs and ignoring evidence that disconfirms your beliefs. And we have a massive amount of evidence that shows that typical people suffer from confirmation bias in really big ways; but also physicians, patients, and even, you know, professors--

Russ Roberts: Economists.

Jacob Stegenga: Economists. Yeah. So we--the royal "we"--suffer from confirmation bias. So these three problems together--the natural course of diseases, the placebo effect, and confirmation bias--entail that we should treat first-person reports regarding the effects of interventions with a huge amount of skepticism. Now, I should add the following caveat, though. In medicine, there's been a long tradition of neglecting the patient's reports, because medicine, at least sometimes, has been kind of imperialistic in its attitudes. So, 'The physician is the educated one; they know about your disease; you don't know anything about your disease.' You are sick. Maybe you're a woman. Maybe you're disabled. And, like, the white, upper-middle-class, male physician knows best. And so there's been a tendency to push back against this in medicine: 'Medicine should listen more, should hear the patient, and should respect what the patient is reporting.' I agree with all of that. We--the physicians--should be listening very carefully to our patients and respecting what patients report. However, when it comes to causal inference, that's a completely different ballgame. And I think we ought to be maintaining really, really strict evidential standards when it comes to deciding: Did this drug have the following effect?

26:13

Russ Roberts: So, let's talk about side effects generally, because they are related to this issue of complexity. And it gets at something I didn't feel you emphasized enough. So, one of the themes of the book is that many of the things that we think work actually don't. Many of the things that we think work don't work very well. And many of the things that work a little bit have side effects that are negative and offset or roughly counterbalance the good effects. And, you make a very persuasive case--I hope we'll get to it, but if not, I want to say it here because it's very important--that there is a strong set of forces that cause us to underestimate the harm of intervention while overestimating the benefits. However, just because something has side effects doesn't mean you shouldn't do it. It could be that it's worth it, still. So, talk about that issue, first of all, of the importance of side effects. And I just want to complicate it a little bit by mentioning that you don't talk very much about cancer in the book. Cancer treatments--I mean, most people recognize that our current level of cancer treatments are harmful to a person. They are destructive. They often have life-damaging effects, even when the cancer is cured--so-called cured, or in remission. So, we understand that cancer drugs are unpleasant, and basically a form of poison; and we just hope it poisons more of the bad stuff and not so much of the good stuff. But, we also understand that that's not necessarily true. So, talk about this issue of tradeoffs between benefits and costs, and risk and return. And in particular, if you can, add some mention of the cancer issue--because I didn't notice as much about that in the book.

Jacob Stegenga: True. Okay. Yeah. So, this is a really important sub-question, and there's a lot going on there. Just on cancer, I'll say, parenthetically, I'd like to plug a book by one of my colleagues, Anya Plutynski, who has just published a philosophical study of the science and medical treatment of cancer. It's an excellent book, and one of the few philosophical discussions of cancer. So, that's a book worth looking at.

Russ Roberts: Author's name again?

Jacob Stegenga: Plutynski, P-l-u-t-y-n-s-k-i. She works at Washington University in St. Louis.

Russ Roberts: Got it.

Jacob Stegenga: So--right. Almost all, if not all, medical interventions have harmful side effects. But, of course, that doesn't entail--as you said--that that's an argument against using them, because their benefits might outweigh the harms. And so, at the end of the day, somebody has to decide if a particular medical intervention has benefits that outweigh the harms. And, I think that's a definitive, general point. So, the mere presence of harms we just have to accept. And I think the case of cancer drugs is illustrative. I like the way you put that. Medical research is tuned in a variety of ways to hunt for benefits of interventions at the expense of hunting for harms. So, even if we agree that we are going to have to be weighing up the benefits and harms of medical interventions, the actual evidence that we have available to us to do that weighing is systematically skewed towards overestimating benefits and underestimating harms. To actually articulate the argument would take me some detail. We can do that if you want. But that's [?chapter 5?]--

Russ Roberts: We'll get to that. We'll get to that. Keep going.

Jacob Stegenga: Okay. So, ultimately we need to do this kind of weighing up of benefits and harms. And this raises a lot of questions. Like: How should we do that weighing? Who should be doing that weighing? And on what evidential basis should we base that weighing? These are questions that haven't been thought through nearly as carefully as you would expect. So, just to give an example: To get a new drug approved by the FDA [Food and Drug Administration], the main evidential requirement is to have two positive randomized control trials in which the drug demonstrates benefits compared to placebo or a competitor drug. And that benefit could be really, really small. Now, of course, there is also a safety assessment at this stage. But the actual evidence that's available to properly assess the safety of an experimental drug is pretty thin at this stage in the research life of a drug. The vast, vast majority of evidence that we get on the harmful side of exit[?] drugs occurs after the drug has been approved for public consumption. And at that point, there's no incentive to do any more careful randomized trials. And so, when you posed this question, you related it back to the issue of whether or not we should trust first-person anecdotes. Now, this is crucially important, because the majority of evidence that we have on the harms of drugs amounts to a collection of first-person anecdotes. So, if a patient thinks that they are suffering a particular side effect from a drug, they may or may not have a conversation with their physician about it. If they do, the physician has to decide if the patient is, like, a reliable reporter of this effect of the drug. And then the physician has to basically upload the harm to a database in which information on harms is collected.
And then from that database, scientists try to make inferences about whether or not the drugs are in fact causing such and such harms. So, it's a collection of first-person anecdotes. And, it's only in rare cases, in which, after approval, there's a carefully controlled randomized trial done to test for harms.

33:18

Russ Roberts: But you give many examples in the book of harms so severe that--as it comes out later--the drugs are taken off the market. Or where the company is sued, because they knew of the harm and didn't reveal it. It's not like once; it's often, is how I would describe it. It's deeply disturbing. But on this issue of side effects--some of these side effects, I never really understood until I read your book, and maybe I still don't. My assumption was: People are different. This drug might make me nauseated, but not you. It might make me tired, but not you. It might make me lose my appetite, but not you. Those are relevant. But the bigger issues are things like: It might stop my heart, but not yours. And part of the reason it stops your heart is that cascade of complexity you talked about earlier: we don't totally understand it. We have this romance about doctors as sort of scientists, carefully calibrating the impact of this thing I'm injecting. A lot of it's nothing like that. And, it was deeply enlightening--unfortunately.

Jacob Stegenga: Thanks for saying that. Yeah. Um, right. So, there are just so many problems when it comes to the detection of harms--like the careful, reliable, controlled experimental study of the harms of drugs. An example--this example is not in the book, but it's kind of a funny example. So, a couple of years ago the FDA approved a drug for what was at the time called 'hypoactive sexual desire disorder' in women. So, basically women with low libido--women who weren't enjoying sex. And, so, they would be diagnosed--this was in the DSM-IV [Diagnostic and Statistical Manual of Mental Disorders, 4th edition]--they would be diagnosed with this disease. And, um, for a long time there was no drug available to treat this disease. Of course, there was a kind of male equivalent in Viagra and drugs like it. And the financial success of Viagra motivated the hunt for a female version. Um, a drug called flibanserin was tested. It was initially rejected by the FDA, because the positive effects were really tiny, and there were some noticeable harmful effects. And it reacted poorly with alcohol. And then it was finally accepted because the FDA received some pressure from patient advocacy groups. There was a campaign called 'Even the Score'. And the idea was: 'You men have your drug for sexual desire, so we should have ours, too.' Turns out that that patient advocacy group was funded by the company that made the drug. And okay, this is a kind of long-winded story about harms. There was a study of the harmful side-effects of flibanserin. And in that study, the majority of subjects were men. So, it's a kind of funny example of how medical research sets up the conditions under which a physician in the wild--in real life, who is dealing with real patients--has to base prescription decisions on a set of evidence which might not be relevant to the patient that they have in front of them.

Russ Roberts: Well, that's a semi-comic example; it's tragi-comic, obviously. The more general cases that you document in the book are the fact that, in clinical trials, there's a natural incentive on the part of the pharmaceutical company to work with healthier people. Work with younger people. Keep out the elderly. Keep out--super-young--children. And yet, once the drug is approved, the target audience expands from the group that was tested to the general population--for a whole bunch of reasons, economic, human, financial. And then, as a result, a lot of the harms show up that couldn't have been observed in the trial, because the trial didn't have the population in it.

Jacob Stegenga: Exactly. Yeah. And there's a kind of general and principled way to put this point. So, randomized trials that are designed and performed to get regulatory approval exclude subjects with particular characteristics. Those characteristics are age--so, elderly people are excluded--people with other diseases; people on other drugs. And we have really good empirical evidence that shows that those very features increase the harms of drugs. So, an 80-year-old on that drug will experience more harms from that drug than a 50-year-old. So, we know that age, co-morbidity, and so-called poly-pharmacy--being on multiple drugs--each modulates the harmfulness of the new drug. And if you exclude those people from trials--I mean, those are the very people that end up taking new drugs: the elderly, people with multiple diseases, and so on--we can just make a principled prediction that trials are systematically underestimating the harm profile of medical interventions. In the book I am sort of facetious about this, but we talk about the safety of drugs. And there's a lot of talk about the safety profile of drugs. But this is a kind of Orwellian misnomer. Really we should be talking about the harms of drugs.

39:42

Russ Roberts: And so, I found that very difficult to swallow--bad metaphor, we are talking about pills. But, I think it's really important as an economist to come to grips with this, because, as an economist I've always taken the view that: Well, of course all things are--there's no such thing as a safe drug. And this whole FDA thing about safety is just an intellectual sham. Of course, things have side effects. Life's about tradeoffs. And, of course, when you take a drug that's going to help you, there may be some costs, besides monetary costs--which are increasingly small for most patients, these days. That's another problem we've talked about many times here. But the point is that, I take a drug to help me cure some issue I have; and of course it could raise the risk of something else. It could have lifestyle challenges like fatigue or nausea or whatever. And so I've always said this whole idea of safety is a mistake. It's silly. We don't want a perfectly safe drug. If we did, we wouldn't take anything. And yet, what I've learned from your book, which is a bit alarming, is that, that's true; but the data that we have, and our impression of the evidence, is not nearly as clean as we would think it is in evaluating those tradeoffs. In other words, sure, there's tradeoffs. They're just a lot worse than they actually appear to be, because the incentives for collecting the benefits are very high; and the incentives for being honest about the harm are really low. So, what looks like, 'Yeah, there's some cost to this, but it's worth it,' may turn out not to be the case.

Jacob Stegenga: Yeah. Exactly. So, that's exactly a component of the argument for medical nihilism. And one way to offer a kind of different angle on the general argument is as follows. Over the last generation or so, trials have in fact gotten better and better, in that, for a whole variety of reasons, they've controlled for various biases when it comes to the detection of benefits. So, in short, the epistemic reliability of trials, when it comes to the detection of benefits, has gotten better and better. And a result of this is that the--

Russ Roberts: More benefits--

Jacob Stegenga: Well, actually, no. The result is a measured decrease in the effectiveness of drugs. So, the better trials get, the smaller the effect sizes observed in those trials. And so, there's an inverse correlation between trial quality and measured effect size. Now, you might just extrapolate that into the future: no trial is perfect, so if trials get better and better, measured effect sizes on the benefits will get smaller and smaller. Now, if we take the discussion we were just having about harms and apply a similar kind of logic: we know, based on arguments that I've given in the book, that our current evidential basis for assessing harms radically underestimates harms. If our evidential basis got better at detecting harms, we would detect more harms. And we can extrapolate that into the future: the better trials got at detecting harms, the more harmful drugs would look. So, on the one hand, benefits are going down as trials get better; and harms are going up. That's a kind of general and principled argument for medical nihilism.

43:26

Russ Roberts: Well, let's talk about the FDA a little bit, because you give a number of examples in the book where the FDA approves something where there were numerous trials that found no effect. And then there's like a couple that found it, so they approved it. And, it raises the possibility--and I think you explicitly say this--that the FDA is too lenient in approving drugs. Which goes against a long history in economic research of claiming that the FDA is too tough: that the hurdles for drug approval and the costs of drug approval are so large that--Sam Peltzman, for example, in a famous study, showed--'showed'--I retract that word; that's a word I should never use, no one should ever use. It's a study that found--whether it's true or not is a tough question to answer--that thousands of people have died because the FDA took so long to approve drugs that were helpful. You are coming along and saying, 'The FDA is too lenient. There are many cases where the people involved with the FDA decision have a financial incentive either in conducting the trials or in assessing the trials,' and you are concluding the FDA is too lenient. Is that a correct summary of your view? And how would you relate it to the claims by economists that the FDA is too slow in approving important drugs?

Jacob Stegenga: Yeah. Good. So, broadly construed, that is my view, although the issue is complicated. And I should say, when I'm talking about regulatory standards in the FDA, I am only focusing on the evidential standards--the barrier that a company has to get over when it comes to the evidence. There are a whole bunch of other regulatory standards, like standards that have to do with the actual manufacturing of the pharmaceutical; and I don't know anything about those standards and my argument doesn't touch them. So, it could be that some of those standards--like how many times a day the factory has to be cleaned, or something like that; I've got nothing to say about that--might be too stringent. But when it comes to the evidential standards, my argument is that they are far too low. They make it far too easy to get a drug with a negative benefit/harm profile approved. So, the evidential standard currently is: typically, a new medical intervention has to be tested in two randomized control trials. And in those trials the drug has to be better than placebo, or better than a competitor drug. And, how much better? That's not part of the standard. According to what kind of statistical inference? That's not part of the standard. As long as they're Phase 3 RCTs [Randomized Control Trials]--which means there's got to be a certain number of subjects, and there have to have been Phase 2 RCTs, which are a bit smaller--two positive Phase 3 RCTs and the drug gets approved. And that is far, far too low a standard. Now, what about the argument from economists that people are dying because drugs aren't getting on the market soon enough? And it's not just economists, I should say. There are also patient advocacy groups that have argued for this.
And, the most famous case is during the drug trials for HIV [Human Immunodeficiency Virus]: activists were arguing that the FDA was moving too slowly; there was a drug that was potentially a lifesaver and people were dying of AIDS [Acquired Immunodeficiency Syndrome]. And so, they pushed the FDA to hurry up. It's a famous case in this domain. The short answer is that the argument presupposes that there's a pipeline of many lifesaving drugs that are just getting through the pipeline slowly because the FDA is dragging its feet, or raising regulatory standards too high. And so, rather than getting a drug approved in 2 years, it takes 8 years to get a drug approved; and during those intervening 6 years people's lives could have been saved--but they're not. Well, the overall argument of the book is that there's not such a pipeline. Where are these lifesaving drugs? In the last two generations--really, in the last 50 years--there's been a tiny, tiny handful of drugs that have consistently increased the lifespan of people suffering from particular diseases. Gleevec is one example. HIV drugs are another example. There is just a tiny, tiny handful of drugs like this. Moreover, for diseases which are clearly lethal, the FDA does have a program which allows prescription of drugs before they have passed this two-positive-RCT standard. Now, there are regulatory and administrative constraints on this program. But the short story is: If a physician has a patient who is dying of a particular disease, like some form of cancer, and they know that there's an experimental drug in the pipeline that can target this disease, even if the drug hasn't been approved by the two-positive-RCT standard, the physician can nevertheless prescribe the drug. So, this argument from economists--that the FDA standard is killing people--doesn't carry much weight, for those reasons.
I think you could go even further and say the economists' standard would kill orders of magnitude more people, because more harmful drugs would get through the regulatory standard, thereby killing a lot of people. A good example is Rosiglitazone. Rosiglitazone was a drug for Type 2 diabetes. It was on the market for a number of years. In the United States, in fact, last I checked, it was still available. And, in 2007, a meta-analysis was done which suggested that in the handful of years that the drug had been on the market, it had caused something like 70,000 heart attacks. So, you know--so, ultimately, we're faced with a tradeoff. The higher the regulatory standard, the fewer drugs are going to get on the market. Will that entail that more people will suffer or die because of those fewer drugs? It's not so obvious to me.

Russ Roberts: Of course, operating in the background, which we haven't talked about, is the fact that many, many of these drugs are not paid for by the patients. Their incentive to be careful about taking these drugs certainly is there, because they don't want to die, and they don't want to have side effects. But the financial incentive to be careful is often absent, because they are not paying for them. That's happening around the world, as well; not just in the United States. It's really fascinating.

51:14

Russ Roberts: Now, we had Adam Cifu on the program talking about his book with Vinayak Prasad, Ending Medical Reversal. The theme of that book is that many, many things that come to market--interventions, not just pharmaceuticals but various innovative techniques for ameliorating pain or repairing damage to the body--work in observational studies, where you take a group of people, you take data, you know something about people who have had this treatment, and you see what happened to them. Those studies work out pretty well. But then, when you do the randomized control trial, you find out they actually don't work, because you can then control in a more effective way for the differences between the populations that get the procedure and those that don't. And you discover that it either doesn't work at all, or it's actually harmful. And that's also a disturbing book. But I've always thought, until I read your book: Well, observational studies--again, that's like the problems of epidemiology and regression analysis in economics, trying to tease out causal relationships in observed data in complex systems; they don't work very well, they are not often replicated. But an experiment--a randomized control trial--that's different. And what you argue in the book is that in both randomized control trials and in meta-analyses--where you aggregate randomized control trials, which would seem to be even better, because you have even more data--there is a problem of what you call malleability. Which is deeply related to the problem of p-hacking. P-hacking is the problem that occurs when, because there's a certain standard of statistical significance, people are biased--or fraudulent, but mostly just biased--in making certain decisions along the way. There are too many degrees of freedom for the researcher. It's what Andrew Gelman calls 'the garden of forking paths.'
There's just too many decisions; and so, through no fault of fraud, they just find out that things work when in fact they can't be replicated. Huge problem in psychology today. We've talked about it with Brian Nosek. But, again, I've always thought, 'That doesn't happen in randomized control trials. It's certainly not in meta-analyses.' So, you remind me that that's not the case. So, talk about why that is.

Jacob Stegenga: Yeah. So, first of all, the comparison between observational studies and randomized trials is an interesting illustration of the point we were getting at earlier. Namely, the better that methods get, the less effective interventions look. So, there's a trope in evidence-based medicine which illustrates this. You see it very often in the literature about evidence-based medicine. Physicians will say, 'We were using such-and-such intervention for decades when I went through medical school. And then finally we did a randomized trial. And we learned that that intervention is actually useless.'--

Russ Roberts: Yep--

Jacob Stegenga: So, there's just a countless number of these cases. So, the basic idea is that these pre-randomized-trial methods were biased. They were suggesting that interventions were effective; and then the randomized trials come along and suggest that in fact these interventions are ineffective. So, the better the methods get, the worse interventions look. Um, now, okay. But does that mean that randomized trials and meta-analyses are perfect? Are they, like, the kind of method that, you know, comes down to us from God and just like--

Russ Roberts: Truth--

Jacob Stegenga: speaks the truth to us? Yeah. Yeah. I mean, there are better and worse randomized trials. And better and worse meta-analyses. But, there are just a whole number of ways in which randomized trials and meta-analyses can be shot through with biases. And, you know, the arguments that show that make up about a third of the book. So it would take me too long to illustrate all the different ways in which randomized trials can be biased. You mentioned p-hacking. And there are practices that have the same look and feel as p-hacking that occur in trials. One is to make a bunch of measurements in a trial and then only report a subset of those measurements. There's an interesting study done by a German regulatory group. What they did was they took a one-year window, and they sorted all of the interventions that had been submitted to the regulatory agency during that window. And they went back to the pre-registration plans of the trials that were deployed to test these interventions. And they just counted the number of outcomes that were planned to be measured in those trials. And then they went to the corresponding publications, and counted how many of those outcomes had data that were then published in the articles. And the publication rate of measured outcomes was about 26%. So, um, the short story is: You can design a trial; measure a hundred things in the trial; and then just publish an article which only reports 20 of those measurements. That's a kind of p-hacking. And so, this kind of malleability exists in trials. Of course, there's publication bias as well. What I was just talking about was publication bias of particular outcomes. But, if you own the rights to a new pharmaceutical and you want to show that it's effective, you can perform 20 trials on that pharmaceutical, and just publish the two trials that show a small, beneficial effect.
This phenomenon, publication bias, in medical research, has been extremely widespread. At least in the last generation or so.
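Stegenga's publication-bias scenario--run 20 trials, publish only the ones that come out positive--can be made concrete with a small simulation. This sketch is my illustration, not anything from the episode or the book: the true effect size, sample sizes, and significance cutoff are all invented for the example.

```python
import math
import random

random.seed(0)

TRUE_EFFECT = 0.05   # tiny real benefit, in standardized units (assumed)
N_PER_ARM = 100      # subjects per trial arm (assumed)
N_TRIALS = 20        # trials the sponsor runs

def run_trial():
    """Simulate one two-arm trial; return (estimated effect, z-statistic).

    Outcomes in each arm have standard deviation 1, so the standard
    error of the difference in arm means is sqrt(2 / N_PER_ARM).
    """
    se = math.sqrt(2 / N_PER_ARM)
    estimate = random.gauss(TRUE_EFFECT, se)
    return estimate, estimate / se

trials = [run_trial() for _ in range(N_TRIALS)]
# Selective publication: only trials clearing z > 1.96 make it into print.
published = [est for est, z in trials if z > 1.96]

all_mean = sum(est for est, _ in trials) / len(trials)
pub_mean = sum(published) / len(published) if published else float("nan")

print(f"mean effect across all {N_TRIALS} trials: {all_mean:.3f}")
print(f"mean effect across published trials only: {pub_mean:.3f}")
```

Averaging only the published trials conditions on having cleared the significance bar, so the published record overstates the benefit relative to the full set of trials--which is why Nissen-style access to the unpublished data can reverse the apparent picture.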

57:25

Russ Roberts: I was shocked to hear that. I don't understand it. So, explain. My thinking is: There's two pieces to that. One is, you say you are going to measure a hundred things and you only report 26--but aren't there usually like 1 or 2 things that are really important? Like, not getting a heart attack; or the cancer disappears; or--in the case of antidepressants--I'm not suicidal. So I'm not quite sure how I can play that game when I'm trying to tell the FDA I need that drug. Now, the second question is: Doesn't the FDA, when I register a trial, get all that information? How do I get away with doing 20 trials and only publishing two?

Jacob Stegenga: Yeah, good; both good questions. So, um, on the first: Medical scientists have what they call the primary outcome that they are measuring. And now we have the pre-registration of trials. As you said, the pre-registration has to happen in some public database where journals and regulators can go and see, like, 'Was this trial pre-registered?' And, in those pre-registrations--descriptions of the experiment that is going to be done--the scientists have to stipulate what the primary outcome is going to be. It turns out that there is second-order empirical evidence that looks at how effective these pre-registration practices are, and the extent to which scientists follow pre-registration plans--the extent to which scientists stick to measuring the primary outcome. And the results are shocking. So, for instance: One group looked at randomized trials in the very best medical journals. These are, like, the Lancet, the Journal of the American Medical Association, the New England Journal of Medicine, the British Medical Journal. These are like the absolute pinnacle of medical journals in the world. And they looked at trials in a particular temporal window--I think it was, like, one year. And they compared the publications to the pre-registration plans, and found massive disparities between them--even when it came to the primary outcome. Switching what they called the primary outcome happened in about half of these trials. So, outcome-switching occurs rampantly in medical research. We might hope that pre-registration plans could be used and enforced. But there's been a lot of wrangling about: What jurisdiction should be responsible for the storing and publishing of pre-registration plans? And then enforcing the sticking-to-them, when it comes to publication or regulation? And, so far, there's just a lot of looseness.
So, for instance, journal editors got together a while ago and said, 'Okay, we're not going to publish trials unless they've been pre-registered.' But it turns out that that wasn't stuck to: journals were publishing trials that weren't pre-registered. Um, when it comes to regulation, as far as I know, regulators like the FDA do get access to a large amount of information that doesn't get included in publications. This includes, like, patient-level data, even if that patient-level data didn't end up in the publication. So, regulators can get access to this data. And, in an ideal world, they would be able to use that data in a way that guided their regulatory decisions.
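The outcome-switching problem Stegenga describes has a simple statistical core: measure enough outcomes and some will look significant by chance alone, which is exactly why a pre-registered primary outcome matters. A minimal sketch (my illustration, not from the episode; the 100-outcome count and the z-test framing are assumptions):

```python
import math
import random

random.seed(1)

def p_value(z):
    """Two-sided p-value for a z-statistic, built from the normal CDF."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

N_OUTCOMES = 100  # outcomes measured in one trial (assumed)

# The drug does nothing: every outcome's z-statistic is pure noise.
z_stats = [random.gauss(0, 1) for _ in range(N_OUTCOMES)]
p_values = [p_value(z) for z in z_stats]

significant = [p for p in p_values if p < 0.05]
print(f"{len(significant)} of {N_OUTCOMES} null outcomes look 'significant'")
```

In expectation, about five of the hundred null outcomes fall under p < 0.05; quietly promoting one of them to 'primary outcome' after the fact manufactures an apparent effect out of noise.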

Russ Roberts: You are saying they don't regularly do so?

Jacob Stegenga: Yeah. Exactly. So, the typical practice is to approve when the two positive RCTs are found. There's--

Russ Roberts: That means that there are 12 others that aren't positive, and they just ignore that?

Jacob Stegenga: With the case of Rosiglitazone there had been something like 45 randomized trials done, testing the benefits of Rosiglitazone. And, they were also measuring some harms. And one of the harms was: Does Rosiglitazone cause heart attacks? Of these--

Russ Roberts: This is for treatment of--

Jacob Stegenga: Type 2 diabetes.

Russ Roberts: Yeah. Go ahead.

Jacob Stegenga: Yeah. So, of those 45 trials, about 15 had been published. Anyway, Rosiglitazone was approved for clinical use. An academic came along, Steven Nissen, and tried to do a meta-analysis on the harms of Rosiglitazone. He tried to ask the question, 'Well, does Rosiglitazone cause heart attacks? And if so, by how much?' So, he tried to get all the data from GlaxoSmithKline. And they refused. But, because GlaxoSmithKline had settled a lawsuit about a previous case--Paxil--they had been forced to create a database of all of their trials. And so, via this route, Nissen was able to get access to the data from all of these trials, both published and unpublished. So, not just the 15 published ones, but all 45. So, he and a co-author did a meta-analysis, and they found that Rosiglitazone does increase the risk of heart attack by a really serious amount. They submitted the manuscript of their meta-analysis to the New England Journal of Medicine for publication. And the story has a kind of perversely funny twist. A peer reviewer at the New England Journal of Medicine faxed a copy of the manuscript to somebody at GlaxoSmithKline, and that generated a flurry of internal memos. And a journalist got their hands on one of these memos, which said, 'Okay, Nissen has discovered what we at GlaxoSmithKline and the FDA already know: namely, Rosiglitazone causes an increase in heart attacks by such-and-such percent.' So, this memo was really revealing. It suggested that GlaxoSmithKline had already done their own meta-analysis based on this unpublished data; that they'd shared that information with the FDA; and that no regulatory decision had been made after that. So--

Russ Roberts: Really interesting.

Jacob Stegenga: It's a compelling case in which the regulator had access to either the full set of data, or the, you know, meta-analysis of the full set of data. And anyways, did not change their regulatory stance.

Russ Roberts: It's conceivable they shouldn't have. Right? It's conceivable that the benefits--whatever it did for Type 2 diabetes--outweighed, say, a small risk. You wouldn't want to argue that, because there's a risk of a heart attack, you should never take the drug.

Jacob Stegenga: That's absolutely right. I agree with the point that you made earlier: that all drugs have potentially harmful side-effects. Some of those side-effects might be very serious, like heart attack, and death. And, the mere existence of one of these side-effects doesn't entail that the drug shouldn't be approved. That's absolutely right. Yeah. But of course what matters is, um: Can we make a reliable inference about the benefit/harm ratio?

Russ Roberts: Yeah. And the point you are making, which is one I emphasize, is that the full information at the time that says [?] you [?] make, which [?] wrong, is a bad idea.

1:05:34

Russ Roberts: Now, I'm a skeptic about empirical work in economics, and I get criticized a lot for it. And I always make it clear that I'm not against evidence. I'm not against data. What I'm against is the overconfidence that economists sometimes have in data that's generated in complex systems. In particular, I would argue that the ability of statistics to tease out those effects is problematic. For that, I often get called anti-science. Um, and, of course, my defense is: I'm in favor of science. Good science. Different--

Jacob Stegenga: You have in mind, like, empirical economics? Like, the randomized trial movement in the MIT [Massachusetts Institute of Technology] poverty lab--this kind of work?

Russ Roberts: That would be one example. It comes up a lot in all kinds of areas. It comes up in, say, evaluating the minimum wage. It comes up in evaluating the effect of government spending on fighting unemployment. In the case of the randomized control trial part of economics, it comes up when people say, in the effective altruism movement, 'We just have to figure out what works,' as if that was something that we know how to do. We don't. I'm very, very in favor of funding things that work rather than things that don't work. Some of the things that we thought worked evidently don't, despite being shown in randomized control trials to work, in the development literature and anti-poverty. But I want to read two paragraphs from your book that I think say this very well in your case. It's near the end of the book. You write the following:

Anti-science sentiments about medicine are widespread. For example, the anti-vaccine movement--prominently associated with a single publication, since thoroughly discredited, suggesting that the measles-mumps-rubella vaccine can cause autism--has led many parents to not vaccinate their children, putting their own children and others at risk.
One might worry that the view presented in this book contributes to irrational anti-science sentiments. However, one would have to seriously misinterpret the message of the book to portray it this way. To make the master argument compelling, throughout this book I've appealed to high-quality science. The trouble with so much of medical research is not science per se, but poor reasoning based on low-quality science that suffers from many systematic biases exacerbated by financial conflicts of interest.
It's a fabulous summary of what I think your book is trying to do and what I feel good economics should be trying to do. Do you want to add anything to that?

Jacob Stegenga: Thanks. Thanks for bringing this quote out. Yeah. I'm often asked a question that motivates the kind of response I'm giving there. So, some people worry that by being critical of mainstream medicine and its scientific basis, I'm lending a hand to those who want to develop implausible alternatives, like homeopathy or the anti-vaccine movement, or, you know, different kinds of religious opposition to particular kinds of medical interventions. And, my response is: I don't align myself with any of those movements. The arguments in this book would apply to those movements in a far stronger fashion than they do to mainstream medicine itself. So, this book is about increasing the quality of science in medicine. It's not an anti-science book at all. It's a pro-science book. It's trying to argue that medicine should be more scientific than it is.

Russ Roberts: And I should just add, as an important footnote: We spent most of this conversation, almost all of it, on pharmaceuticals. But the argument goes way beyond the pharmaceutical area.

Jacob Stegenga: Um, that's a--I'm glad you think so. And I'm sometimes criticized for this among my colleagues. So, my colleagues note that I've been focusing on pharmaceuticals: 'You're calling the book Medical Nihilism, but most of the examples and most of the arguments are framed around pharmaceuticals. Well, what about surgery? What about radiology? What about, say, early-detection screening programs for diseases like cancer?' And it's true: I don't have very many examples of screening programs or surgery or vaccines or radiology in the book at all. And, what I say in response to this line of questioning is: We often give advice to graduate students to pick a focus and not try to be overly ambitious in a book. And, that's part of my strategy here. So, I think that some of the arguments that I make could be extended to domains of medicine that go beyond pharmaceuticals--I'm glad that you think so--for instance, certain aspects of surgery, say, or disease-screening programs. This is something that I've started to write a little bit about. But, the fine-grained details of how those arguments would go would, I think, be a little bit different. So, for instance, the financial incentives in play in the domain of pharmaceuticals are just so enormous that I think they nudge the biases more than they would in another domain of medicine in which the financial incentives weren't quite so astonishing.

Russ Roberts: Oh, I don't know about that. When everything looks like--when you have a hammer, everything looks like a nail. And if you are a surgeon--

Jacob Stegenga: Yeah--

Russ Roberts: it's shocking to me how often surgery is recommended by surgeons. Strangely enough. So, I think that's important. I don't think it's unimportant--

Jacob Stegenga: right--

Russ Roberts: and the point you made earlier--that many things get better by themselves with the passage of time--is very challenging for most of us to remember; and I can't tell you how many people have told me they were improved or cured or helped by Procedure X, and in the back of my mind I'm always thinking, 'Yeah, but it may have gotten better anyway.'

Jacob Stegenga: Yeah. So, that general line--I mean, I guess I would want to offer a kind of closing comment. I think some people will read this book or listen to your podcast and think, 'Well, that's an interesting idea, but I'm not totally persuaded.' And that's perfectly fine with me. What I hope is that the big-picture argument in the book--what I'm calling the master argument--and then the particular chapter-level arguments at least offer people a way to think more critically about different domains of medicine, and particular medical interventions. And I certainly hope that audience includes physicians and policy makers and regulators. So, while I hope that I convince people that the thesis is persuasive and compelling, short of that, I hope that it at least offers people a set of tools and an argumentative strategy to think carefully and critically about medicine and about the evidence that's available for our most widely consumed medical interventions.

Russ Roberts: Yeah. As you say, you are not anti-intervention. You are pro-being careful.

1:13:10

Russ Roberts: I want to close with your interest in what you call 'gentle medicine.'

Jacob Stegenga: Mmmhmm.

Russ Roberts: I'm going to read you a quote from the book and let you talk about it. You say,

Gentle medicine is not the audacious proposal that physicians should not intervene at all. We have a few magic bullets in our arsenal and we should use them. Rather, gentle medicine is the more modest proposal that physicians should intervene less, perhaps much less, than is presently the case, and we should try to improve health with changes to our lives and to our societies.

Jacob Stegenga: Right. So, thanks for this quote. There are a few underlying ideas behind gentle medicine. When I was starting out on this, writing this book, years ago, I tore the ACL [anterior cruciate ligament] in my knee playing tennis. And, my treating physician called himself a 'non-interventionist.' I was like, 'What do you mean, a non-interventionist? I tore a ligament in my knee! I want my knee back.'--

Russ Roberts: Fix it!

Jacob Stegenga: Exactly. Fix it. And so, of course I did what everybody would do. I read on the Internet about the various treatment options. There's a very standard surgical procedure to repair the ligament. It's a big industry; $10 billion a year in the United States repairing ACLs. My treating physician said, 'Look. Recently there have been a couple of high-quality trials that compared exercise and stretching and physiotherapy'--that's one treatment regime--so the kind of stretching, physiotherapy regime, 'to the reconstructive surgery regime. And these trials show that if you spend a few weeks stretching and doing physiotherapy, you get the same endpoint as if you'd had the surgery.' And that was the first time--you know, I hadn't even started writing this book yet. This book was kind of just a scratch in my mind at the time. But, I was really struck that you could have a torn ACL and a physician would say, 'I'm a non-interventionist about this.' And so, that got me thinking--like, 'Well'--it got me thinking a lot about the different themes that come up in the book. In short, one aspect of gentle medicine is a kind of modest version of non-interventionism. It's at least a kind of mitigated interventionism. It's, uh, we should be intervening less frequently, and using less invasive kinds of interventions. Um, a corollary to this is: Well, we want to do something to improve our health. We want to do something to, like, offset disease and offset harms caused by disease. So, what should we do? Well, in the long history of medicine, we have really good reason to think that some of the great strides that we made in society--like increased lifespan and decreased childhood mortality--have come not from drugs or from surgeries, but from things like clean drinking water. Exercise.

Russ Roberts: Washing your hands.

Jacob Stegenga: Washing your hands. Yeah. There's just an overwhelming mountain of evidence that things like access to nutritional requirements improve our health much more than access to things like streptomycin. And so that's a second aspect of gentle medicine. And a third is a kind of call for particular kinds of research. So, we have, really, just a huge amount of evidence--whether or not we think the evidence is any good, we have a huge volume of evidence--on the benefits of pharmaceuticals. We have a tiny amount of evidence on what it's like to come off pharmaceuticals. And many drugs, once we're on them, we're on them for life: like statins, blood-pressure-lowering drugs, psychopharmaceuticals. You are on these drugs for many years. There's a kind of study--a drug-withdrawal study--where you have subjects who are on drugs, and then you take them off, and then you observe the effects. And we need a lot more evidence of that kind in order to be responsible non-interventionists. So, that's why, you know, the book is not closing by saying, 'Physicians: Stop prescribing drugs.' That would be totally irresponsible. Rather, it's saying, 'We should intervene less, and less severely.' But we also need more evidence about what it's like to intervene less.