Adam Cifu on Ending Medical Reversal
Feb 15 2016

Why do so many medical practices that begin with such promise and confidence turn out to be either ineffective at best or harmful at worst? Adam Cifu of the University of Chicago's School of Medicine and co-author (Vinayak Prasad) of Ending Medical Reversal explores this question with EconTalk host Russ Roberts. Cifu shows that medical reversal--the discovery that prescribed medical practices are ineffective or harmful--is distressingly common. He contrasts the different types of evidence that support or discourage various medical practices and discusses the cultural challenges doctors face in turning away from techniques they have used for many years.



Feb 15 2016 at 10:39am

Re: stopping clinical trials early

Imagine I’ve got a coin which I think is biased. I think it comes up heads with probability 0.6. I decide to do the following trial: I will toss the coin a thousand times, and if at the end it’s come up heads at least 60% of the time, I declare the coin biased. But I start tossing the coin, and it’s taking a long time, and I notice that after 135 tosses, the coin has come up heads 60% of the time. I tell myself that this is strong evidence that the coin was biased, and it’s important that we have actionable conclusions soon, because this coin is used to decide who gets the ball first in football games. So I declare the coin biased.

The problem is, I’ve now used a test which is much, much weaker than either flipping the coin 1000 times, or even flipping the coin 135 times. The test I’ve used is this: will there ever be a point during the 1000 flips when more than 60% of the flips so far have come up heads?
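[Editor's note: Abe's point is easy to check numerically. The sketch below (illustrative parameters; the 20-flip burn-in and 2,000 repetitions are not from the comment) compares the planned fixed-sample test against the "declare bias the first time the running proportion crosses 60%" test, both applied to a perfectly fair coin.]

```python
import random

def fixed_test(n_flips, threshold, rng):
    """Fixed-sample test: flip n_flips times, declare the coin biased
    only if the final proportion of heads is at least threshold."""
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips >= threshold

def peeking_test(n_flips, threshold, min_flips, rng):
    """Optional-stopping test: declare the coin biased the first time the
    running proportion of heads reaches threshold (after a short burn-in)."""
    heads = 0
    for i in range(1, n_flips + 1):
        heads += rng.random() < 0.5
        if i >= min_flips and heads / i >= threshold:
            return True
    return False

rng = random.Random(0)
trials = 2000
fixed_fp = sum(fixed_test(1000, 0.6, rng) for _ in range(trials)) / trials
peek_fp = sum(peeking_test(1000, 0.6, 20, rng) for _ in range(trials)) / trials
print(f"fair coin declared biased, fixed test:   {fixed_fp:.3f}")
print(f"fair coin declared biased, peeking test: {peek_fp:.3f}")
```

The fixed test almost never misfires on a fair coin (600 heads in 1,000 is over six standard deviations out), while the peeking test misfires a large fraction of the time, which is exactly the weakness Abe describes.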

Kent Lyon
Feb 15 2016 at 1:34pm

Russ Roberts is doing it again, namely, misleading his audience on medical practice. He seems to have a particular animus toward estrogen replacement therapy. He has found an author who accepts the WHI as gospel, even calls it a “really good clinical study,” which is the opposite of the truth. The WHI is one of the worst studies ever done; it was stopped early before demonstrating statistical significance and then was misleadingly construed to be significant. The full details of the study have never been fully disclosed, such as how many of the women in the study were smokers and were given estrogen replacement despite failure to quit smoking (an absolute contraindication). The average age of starting estrogen replacement was 62 in that study, which was not standard medical practice. Premarin and Prempro were used at a time when topical or vaginal estrogen replacement had evidence of a superior risk profile. The study was done for political reasons, was performed politically, and was interpreted politically, and evidence of its wrong conclusions has accumulated since its publication, including that estrogen replacement started earlier in menopause does indeed lower risks that the WHI suggested were increased by estrogen replacement. The study was performed on patients who were a decade or more past menopause but was construed to apply to women at the beginning of menopause.
The credibility of Dr. Roberts and Dr. Cifu is something one should treat with a great degree of skepticism.

Feb 15 2016 at 3:33pm

Great discussion.

This should be required listening for anyone who vehemently denies that vaccines could be causing autism and other negative health issues.

Russ Roberts
Feb 15 2016 at 3:37pm

Kent Lyon,

I have no interest in misleading anyone and I doubt Adam Cifu is interested in misleading people, either.

I have no animus toward estrogen replacement therapy. I did not bring up the example–the guest did. He is the second guest in recent episodes to be skeptical of estrogen replacement therapy.

I understand there are quality-of-life reasons to prescribe estrogen for some women. What I understand Cifu to be saying is that estrogen replacement does not reduce heart attack risk and may actually increase it.

If you know of an essay or book or author who challenges that claim in a thoughtful way, I’d love to know about it. I will also encourage Adam Cifu to respond as well.

Keith C
Feb 15 2016 at 4:29pm

Russ, Adam –

I have nothing useful to add other than saying I thought this was a fascinating discussion. Great stuff.

Adam Cifu
Feb 15 2016 at 6:06pm

Kent Lyon – Thanks for the comment. You are correct that the WHI is not a perfect study. You are also correct that vaginal estrogen has a better safety profile than oral estrogens. Oral estrogens also remain a very useful medication in the management of postmenopausal symptoms.

We have two well done, though not perfect, randomized controlled trials (HERS and WHI) that suggest that estrogen replacement therapy (ERT) is not beneficial in preventing cardiovascular events. These studies also suggest that it might be harmful. The WHI (in which 1/3 of the participants were in their 50s) does do a very good job outlining both the risks and benefits of ERT (and there are benefits). The decision to use these drugs should be an individual one. The WHI does argue strongly, however, that ERT should not be universally recommended–as it was in the late ’90s.

Compared to the above well done, though imperfect, trials, we have no similar data supporting the use of ERT for prevention of cardiovascular disease--only flawed observational studies.

My argument is never for or against any specific treatment, only that we should recommend treatments that we are sure work--or at least inform patients if we are not sure. The burden of proof should not be to prove that a therapy does not work but to prove that a therapy does work.

Jim lear
Feb 15 2016 at 10:05pm

You seem to be the eternal pessimist when it comes to economics. Way too much confidence is given to economics when policies are debated, so I appreciate your pessimism. Overconfidence is also given to science, especially when policies are debated or products are to be sold. Many people have died because of the mistakes of settled science, such as Fen-Phen patients or trans fat consumers. We have to realize the limits of science.

Economics may be dismal, but so are all the sciences to an extent. Despite that, all sciences, including macro-economics, have made huge strides. We may know almost nothing, but we know so much.

Damian C
Feb 16 2016 at 8:03am

Thanks for another interesting episode Russ & Adam.

I found the discussion about surrogate and clinical endpoints interesting. But if there is a well-established correlation between a surrogate endpoint (e.g., blood pressure) and a clinical endpoint (e.g., risk of heart attack), then what is actually going on when that correlation disappears in a particular setting, such as for the use of a drug like Atenolol?

Is there a hidden mechanism triggered by the drug that counteracts or blunts any positive benefit that would otherwise have been achieved by reaching the surrogate endpoint? Or do such trials cast doubt more generally on the use of that particular surrogate endpoint as an indicator of the clinical endpoint in any setting?

Feb 17 2016 at 4:59pm

Incredible podcast. Been following econtalk for quite a while now and this is the most nuanced piece you guys have put out. Great work!

Luke J
Feb 17 2016 at 7:31pm

I believe another Econtalk guest, Marcia Angell, suggested that randomized clinical trials comparing new drugs against placebos are methods “big pharma” uses to overstate efficacy.

Dr. Adam Cifu’s medical reversal (new therapy is not as effective as old therapy) might be caused in part by RCTs.

Feb 18 2016 at 11:55pm

It was a delight to hear Dr. Cifu, a former professor of mine, on Econtalk. I’m sure economists and MBAs get to hear their old teachers frequently on this show, but it was unique for me. I appreciated the discussion and the parallels between medicine and economics that were brought out.
I think that in both fields the practitioners are in a situation where they must make a policy or a treatment plan with imperfect evidence to guide them. And in medicine, as in economics, we often rely on a story that we can tell the patient. Sounding good, hanging together, or making sense (not just common sense but pathophysiological sense) often fills the void when there is no clear evidence-based guideline.

Robert Swan
Feb 19 2016 at 5:31pm

Firstly, a comment on a comment:

Kent Lyon: I think you’re a bit over the top — perhaps you should listen again. There’s no sign of Russ having an axe to grind.

However, like you, I disagree with the WHI being characterised as a good study. In particular, the way they very publicly abandoned the HRT arm of the study on the basis of a very marginal association with breast cancer caused needless suffering to millions of women who had actually been benefitting from HRT’s real effects.

Now to my own comments.

Russ drew a beautiful parallel between the FDA and bank regulators and ratings agencies. The question is: are we waiting for the medical parallel of the financial crisis? Or are we already in it?

I don’t think Dr Cifu’s idea of educating medicos top-down rather than bottom-up would be helpful. What I think might help would be introducing the students to some clinical realities much sooner in the course. I’m thinking along the lines of an apprenticeship where theory and practice are blended. A great way to learn.

In today’s world of “big data”, it would also be a good idea to familiarise doctors with the dark side of statistics. Rather than the usual stats texts, use a book of the flavour of “How to Lie with Statistics”. I have a feeling some of the supposedly definitive meta-studies would have had a harder time getting past review if the reviewers had a bit more statistical savvy.

Lastly, I endorse what Dr Cifu said near the end, and repeated in his comment above. Each patient is an individual, a trial with sample size 1. Too many people are treated epidemiologically rather than medically. Vaccines are special because the population benefits from herd immunity, but how strong is the case for doling out statins in the massive quantities they are prescribed today? And the case for PSA screening is weaker still. “Number needed to treat” is a pernicious concept.

Daniel Barkalow
Feb 19 2016 at 6:20pm

There are often posts on the blog “In the Pipeline” that are relevant to this topic, told from the point of view of someone working in drug discovery.

I found the vertebroplasty discussion really interesting: vertebroplasty doesn’t do better than placebo, but placebo of vertebroplasty does better than the previously-standard treatment. On the one hand, the actual presence of medical cement in the patient’s back clearly doesn’t matter; on the other hand, going back to pain medication and time, which doesn’t have as good outcomes, seems wrong. If you completely follow the evidence without trying to make sense of it, the absolute best treatment in terms of effectiveness and low risk is to inject saline in people’s backs while they sniff glue. I think we’ve got an oddly long way to go in making a visit to the doctor for a condition with no known medical intervention as good as being in the placebo group of a clinical trial.

On the “ending clinical trials early” thing, I think the right conclusion when that happens is that the treatment is reasonably effective, but you’re going to see some exaggerated results, which you should ignore. If you think a coin comes up heads 60% of the time, and you plan to flip it 1000 times to test it, and after 100 flips it’s come up heads 95% of the time, you can be pretty sure that it’s not a fair coin. You can also be pretty sure that it won’t come up heads 95% of the time in the future.
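[Editor's note: Daniel's point — that a trial stopped early can be believed directionally while its effect estimate should be discounted — can be sketched with a quick simulation. The parameters below (interim looks every 100 flips, a 0.68 stopping bar) are illustrative, not from the comment.]

```python
import random

rng = random.Random(1)
p_true = 0.6          # the coin really is biased, just as the treatment really works
n_max = 1000
look_every = 100      # interim analyses
stop_bar = 0.68       # stop early only if the running proportion looks this extreme

early_estimates = []
for _ in range(3000):
    heads = 0
    for i in range(1, n_max + 1):
        heads += rng.random() < p_true
        if i % look_every == 0 and heads / i >= stop_bar:
            early_estimates.append(heads / i)   # estimate frozen at stopping time
            break

mean_early = sum(early_estimates) / len(early_estimates)
print(f"true P(heads):                   {p_true}")
print(f"mean estimate in stopped trials: {mean_early:.3f}")
```

Trials that stop early are selected precisely because their interim results were extreme, so the frozen estimates systematically overshoot the true effect — the "exaggerated results" Daniel says we should ignore.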

Feb 20 2016 at 12:18am

I am a bit confused about all of the discussion of the problems of stopping medical trials early. Am I to believe that the people running trials and analyzing trials are ignorant of basic statistical results such as those described here?

I feel like a lot of the “issues” that Russ and Russ’s guests get into aren’t real issues and are more the result of an outsider coming in with a lot of high-level, obvious questions, as if the practitioners haven’t already considered them.

A similar example is the one Russ loves to bring up regarding people who don’t drink at all being excluded from the dataset, despite the fact that one of his guests explained that this was the result of a deliberate back-and-forth between experts, not something that was done without thought.

Robert Swan
Feb 21 2016 at 8:59pm


The WHI HRT study that I examined quite closely at the time did not apply that method. When I analysed the figures for myself it was spot on the 0.05 significance level. This is more or less the coin tossing scenario described by Abe in the first comment, not the one Daniel Barkalow described just above. It was absurdly flimsy evidence on which to abandon a large many-year study.

I believe the common mistake is not ignorance of statistical methods, but ignorance of their limitations. You can legitimately distill information about millions of humans down to some “essence” of that population’s physiology. What you can’t go on to do is say that that essence accurately reflects your particular physiology. Or mine. Or anyone’s. It is still useful in some circumstances e.g. epidemic diseases, but misapplied in others e.g. IMO cholesterol/statins.

It surprised me when Dr Cifu spoke in apparently awed tones that Russ was a true sceptic when all he had said was the plain truth: that a particular conclusion might simply be the result of random chance. That possibility is explicitly there in all statistical conclusions.

If statistical methods were being applied with true mathematical rigour, and given that 0.05 is the usual significance level for studies, would you not expect about 1/20th of positive conclusions to be found to be spurious? That we don’t see this suggests that, one way or another, the methods don’t behave as expected.
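[Editor's note: there is a subtlety in the "1/20th of positive conclusions" framing that the linked PLOS Medicine article turns on: 0.05 bounds the share of *null* studies that come up significant, not the share of *positives* that are spurious — the latter depends on how many tested hypotheses are true and on statistical power. A sketch with illustrative assumptions (4,000 studies of 400 subjects each, only 10% of hypotheses actually true):]

```python
import random

rng = random.Random(2)

def significant(n, p_true, z_crit=1.96):
    """Two-sided z-test of H0: P(heads) = 0.5 on n Bernoulli draws."""
    heads = sum(rng.random() < p_true for _ in range(n))
    z = (heads / n - 0.5) / (0.25 / n) ** 0.5
    return abs(z) >= z_crit

# Illustrative assumptions: only 10% of tested hypotheses are actually
# true, and a true effect shifts P(heads) from 0.5 to 0.57.
results = []
for _ in range(4000):
    is_real = rng.random() < 0.10
    results.append((is_real, significant(400, 0.57 if is_real else 0.5)))

false_pos = sum(1 for real, sig in results if sig and not real)
true_pos = sum(1 for real, sig in results if sig and real)
n_null = sum(1 for real, _ in results if not real)

null_fp_rate = false_pos / n_null              # ~0.05, as the 0.05 level promises
share_of_positives_false = false_pos / (false_pos + true_pos)
print(f"null studies declared significant:    {null_fp_rate:.3f}")
print(f"share of positives that are spurious: {share_of_positives_false:.3f}")
```

Under these assumptions the 5% promise holds for null studies, yet well over a fifth of the *published positives* are spurious — so not seeing "1/20th of positives reversed" is not evidence the methods are working, which is Ioannidis's central argument.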

But set all my waffle aside. I’m not medically trained, and only have a fairly light undergraduate maths degree. Instead, read:

PLOS Medicine article

and then say why Russ’s scepticism is unfounded.

Michael McEvoy
Feb 23 2016 at 9:03am

to Robert Swan – You sound medically trained. I am medically trained and have practiced for 30 years in primary care. I enjoyed your recommendation of D. Huff’s book, and I really like the idea of apprenticeship. There is SOME of that baked into medical education today, but I suspect not very much (e.g., I mentor a freshman med student at my private practice office a few times monthly).
I was unclear on your meaning regarding NNT being pernicious. Also, did you mean that statins’ utility is overstated?

Robert Swan
Feb 24 2016 at 5:31pm

Michael McEvoy:

Glad you found my comments of interest. My late father graduated in medicine in 1949, and I absorbed a few things over the years as he kept up to date.

On book recommendations, I’ll throw in “Innumeracy” by John Allen Paulos — fun insights, though not restricted to statistics.

I overdid it in saying that NNT is a pernicious concept; it’s really just a fig leaf to conceal ignorance. The general idea of preventive medicine is great, but its current state is pretty hit-or-miss — a far cry from the general impression that it’s akin to having your car serviced regularly.
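[Editor's note: for readers unfamiliar with the term, "number needed to treat" is just the reciprocal of the absolute risk reduction. A toy calculation (the event rates below are hypothetical, chosen for illustration, not from any statin trial) shows how a headline relative risk reduction can coexist with treating a great many people per event avoided:]

```python
def number_needed_to_treat(risk_control, risk_treated):
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("treatment shows no absolute benefit")
    return 1 / arr

# Hypothetical numbers: a 2% event rate without the drug, 1.5% with it.
# That is a 25% RELATIVE risk reduction, but the ABSOLUTE reduction is 0.5%.
risk_control = 0.02
risk_treated = 0.015
print(number_needed_to_treat(risk_control, risk_treated))
```

Here 200 low-risk people must take the drug for one to avoid an event, which is the kind of arithmetic behind the "treated epidemiologically rather than medically" complaint above.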

I skimmed the Cochrane meta-study on statins a couple of years ago (e-mail conversation with a nephew who was studying medicine). There were a number of oddities in their analysis. However, I am worried that I have become the resident blog bore. I’d be happy to take this to e-mail if you’re interested. The moderator should be able to give you my address.

[Done. But for the record, you are not the resident bore, and your comments are welcome here.–Econlib Ed.]

Robert Best
Mar 2 2016 at 12:58pm

Ending Medical Reversals?

You would see a sharp reduction in reversals if doctors had skin in the game. When a regimen or pharma solution fails, let them bear the cost of attempting a new approach. They should be paid for accomplishments, not for best efforts.

At my white collar job, I have to successfully resolve issues to be paid. My roofer, the same… my car repair guy, the same… If you pay for the fix once, it usually takes fewer attempts to get it done right.

With increased physician fiscal accountability comes fewer re-do’s … thus fewer reversals.

Don Rudolph
Mar 12 2016 at 11:48am

Would it make sense for doctors to submit a log on all their patients to a central collection agency? It would be useful to know the long-term effects of certain treatments. If ten million people are taking a blood pressure medicine, what was the rate of heart attacks for those people? The control group might be other societies; for instance, I have been told that Germany has different standards for introducing cholesterol-reducing drugs than they have in the USA.




Podcast Episode Highlights
0:33Intro. [Recording date: January 28, 2016.] Russ: I want to say first, this is a spectacular book that will resonate deeply with EconTalk listeners who are interested in health, what is reliable evidence, how do we know what we know. And ultimately I think we are going to get, at the end of this conversation, to issues related to economics and the parallels between health, evidence in health and economics, which we have talked about before. So I just want to start by encouraging people to check the book out. It's also delightfully written. I want to start with, what is medical reversal, which is the subject of your book? What does that term mean? How did you coin it? Guest: Medical reversal, yeah; we actually had a lot of trouble coming up with what to call this. We think about--I like to start with thinking about how medicine is supposed to evolve. Which I think of as replacement: We have a good therapy, a pill, a surgery, a device--whatever; we are happy with that. And then some good evidence comes out to tell us that something is better; and that something better replaces what we used to do; and we're sort of happy about that. And that's kind of how we expect medicine to improve, bit by bit, over time. Reversal is when a new therapy comes along that replaces the old therapy. But usually that new therapy is not based on really foolproof evidence. But we only find that out after the new therapy has been adopted--used by hundreds, thousands, maybe millions of patients. And then we discover, when more robust data comes out that: Huh. This new therapy is not as good as the old therapy. Maybe it's worse than the old therapy. Maybe it's worse than not doing anything else. And that's what we consider medical reversal, where it kind of flip-flops on what we've recommended. Russ: And the first question of course is: Is this a big problem or a small problem? 
We'd assume, given how phenomenal medical care is and its advances in technology and pharmaceuticals--this must happen, what, every once in a while, of course. But you seem to suggest it's a little more of a problem than one might think. Guest: Yeah. To be completely honest, we don't know how much this happens. Vinay Prasad, my co-author, and I began thinking about this just within our own clinical practice: we realized that some things we'd recommended a couple of years before, we not only no longer recommend but we were sort of apologizing to our patients that we'd recommended it in the first place. And so we sought to figure out: Boy, is this just a rare occurrence that sticks in your head; and certainly sticks in my patients' heads? Or is this something that's more frequent? A lot of people have looked into this, in various different interesting ways which I can talk about. Our approach was actually just to look at one journal, a really important journal, The New England Journal of Medicine, and we looked at all the articles published over a 10-year span. We got a lot of people to help us out with that: it was a pretty big job. And over those 10 years, we identified 146 articles, which concerned about 100 different therapies, which were clearly therapies that had been adopted, used widely, lots of money spent on them--that turned out to absolutely be the wrong thing. Our estimate, sort of taking other people's research with our research, is that maybe as much as 35%, 40% of what we do could be wrong. Russ: It's a really big number. And one of the effects of your book--and my listeners know how skeptical I am about lots of things. And one of the things I'm skeptical about is medical treatment and various new--and old--treatments. I'm always wondering: Does this really work? Incredibly, given how skeptical I am, this book made me even more skeptical. Guest: Oh, God, quite [?]. Russ: Which is quite an achievement. Yeah. 
Things I'm doing now: 'Well, of course, this is a good idea.' I'm starting to think, 'Well, I wonder if there's any evidence for this.' And even if there is some evidence, is it good evidence?
5:12Russ: Let's start with some prominent examples. What's interesting about the book is the range of things that are not effective. It's not just, 'Well, that pill didn't do what it's supposed to do.' Talk about some of the--pick three or four that come to mind and talk about what happened: why they were reversed, the findings. Guest: Sure. Maybe I'll start with where I started on this, and probably the thing most familiar to listeners, is estrogen replacement therapy. Estrogen replacement therapy was really widely recommended for women after menopause and prescribed all throughout the late 1980s, 1990s, and even into the 2000s. And this was based on observational data, predominantly from the Nurses' Health Study that showed us that women who used estrogen replacement therapy did better--had fewer cardiovascular events--heart attacks, strokes, things like that--than women who didn't use estrogen replacement therapy. This idea was made even more attractive because there was a good [?]--biophysical, biochemical rationale of why we should use them, estrogen replacement therapy. We know that women develop coronary artery disease about 10 years later than men. We attribute that to the effect of estrogen. And so most doctors did this. I certainly recommended it to my patients. And then when a really good sort of experimental randomized control trial came out, we figured out that, 'Huh. You know, estrogen replacement therapy really doesn't help reduce the risk of cardiovascular events.' And for the first couple of years you use it, it may actually increase the rate of events. So that, for me, was the first time I began thinking about this. And I think I'll speak a little bit for my co-author, Dr. Prasad. I think the thing that got him the most, and certainly was shocking to me, was the story of using stents for stable coronary artery disease. Now, stents are these little expandable metal tubes which are like magic. 
They can be inserted with a catheter into, pretty much at this point, any artery in the body; but speaking here about inserting them into coronary arteries. And therefore can effectively open up blockages of the coronary arteries. And we know for an absolute fact that those stents are life-saving in people who have had heart attacks, people who have what we call unstable angina where they are sort of on the verge of having a heart attack. But what happened in the late 2000s is we started using these stents for people who had stable angina--people who were fine, but when they exercised they would get chest pain because they had mild to moderate blockages in their coronary arteries. It turned out that a ton of money was spent on this: by 2009, 80% of the Medicare dollars that were spent on coronary stents were spent on this indication, using them for stable coronary disease. And then this very famous trial, the COURAGE Trial (Clinical Outcomes Utilizing Revascularization and Aggressive druG Evaluation Trial), came out in 2009, which showed that, if you are looking at things like preventing heart attacks, preventing deaths, stents were no better than just the medical therapy we were using at the time. So I think those stick out in my mind as some of the most striking examples. Russ: And you have a mix of things that were just ineffective--had no impact--and others that were perhaps ineffective, but in the case of the stent, you really don't want to put a stent in if it's not effective. Guest: Right. Russ: The treatment itself often comes with risks of infection, side effects from a pill, etc. Correct? Guest: That's absolutely true. And some of the things that we state in the book--they probably didn't harm anybody; and the harm, you know, probably was the cost of the procedure: in American medicine nothing is cheap. 
There may be an opportunity cost for some of these interventions, where the person got something that we thought was helping but it wasn't; and maybe at the same time they could have been getting something that was actually effective. And then, certainly another harm is, this really does affect people's faith in medicine. Whether you are a skeptic to begin with or not, once you've spent a year on a medication which your doctor then tells you, you should come off, because it's not doing anything: You are a little bit more slow on the uptake for future recommendations, I think.
9:48Russ: Just for a baseline: You talked about observational studies as being the original motivator for the estrogen replacement therapy. This is a fantastic parallel to economics. So, talk about: What is the difference--because we are, I hope, going to talk about this all throughout the conversation--what's an observational study, on the one hand, versus a randomized control trial, an RCT, on the other hand? Guest: Sure. Russ: Why is one better than the other? Guest: Yeah. Great. So, an observational study is really a natural experiment. And I think it's--in economics, probably what you have to struggle with all the time. So, in medicine, that's when something has already differentiated into two populations, two groups of people. It may be that one group has decided to take a medication while the other one hasn't. It may be that one group has been exposed to something--say, you know, living in a poor neighborhood versus another group which has not been exposed to that. Or it may be that a doctor has made a decision to do an intervention on one group and decided not to do that intervention in the other group. And then an observational study will report the difference in outcomes for those groups. Did the people who take the pill do better than the people who didn't take the pill? And-- Russ: What's wrong with that? Guest: The obvious is that--yeah, so, what's wrong with that is that it's not just a pill that's different between those two groups of patients. Something motivated those people who took the pill to use it. Or their doctors to prescribe it. And something motivated the other people not to take the pill, or not to prescribe it. And so, usually, in observational studies, when you look at the groups, the groups are very different. To go back to the estrogen replacement example, when we look at that: Women who took estrogen were younger, thinner, had actually better cholesterol levels; I think they actually drank a little bit more than the women who didn't. 
So it's a very different population. And so it was probably not the estrogen replacement therapy which was benefitting them, but everything else about them that made for the better outcomes. Russ: So, I'm going to play-- Guest: What we call confounding. Russ: I'm going to play the believer for a minute. Which is challenging for me, because this is one of my big skepticisms, in economics: Okay, so there's some confounding factors. You just ran off 4 or 5 of them. Control for those. That's what we have statistics for. That's what multivariate regression and other techniques do to control for those confounding factors. And then we can isolate the effect of the estrogen replacement. Guest: So, you are absolutely right. And these studies have a place. Right? The problem is that we never know if we are completely adjusting for those confounding factors. And, you know, the people who did the Nurses' Health Study, they were smart. They were Harvard school-of-health kind of people. And they controlled for--without the paper in front of me I would say probably a dozen, probably more things. But these groups were so different that they weren't able to control for all the confounders. There's a wonderful study--and this is using all medicine examples--from a couple of years back, where some really brilliant researchers took studies of a single intervention that were studied both in observational trials and in randomized control trials--which are really experiments, which get the whole confounding problem out of the picture. And it turned out that those studies agreed most of the time--I think about 80% is what I recall. Which is good. But it's not perfect. And you don't know when those observational studies are steering you in the right direction and when they are steering you in the wrong direction. 
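[Editor's note: the confounding problem described here can be sketched numerically. In the simulation below (entirely hypothetical numbers, not modeled on any real study), a drug with zero true effect looks protective in observational data because healthier people opt in, while coin-flip assignment recovers the truth.]

```python
import math
import random

rng = random.Random(3)
n = 20000

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def simulate(randomized):
    """Return event rates (treated, untreated) under one assignment rule."""
    events = {True: [], False: []}
    for _ in range(n):
        health = rng.gauss(0, 1)                      # unobserved confounder
        if randomized:
            treated = rng.random() < 0.5              # coin-flip assignment
        else:
            treated = rng.random() < sigmoid(health)  # healthier people opt in
        # True model: the drug itself does nothing; only health matters.
        event = rng.random() < sigmoid(-(health + 1.5))
        events[treated].append(event)
    return (sum(events[True]) / len(events[True]),
            sum(events[False]) / len(events[False]))

obs_t, obs_u = simulate(randomized=False)
rct_t, rct_u = simulate(randomized=True)
print(f"observational: treated {obs_t:.3f} vs untreated {obs_u:.3f}")
print(f"randomized:    treated {rct_t:.3f} vs untreated {rct_u:.3f}")
```

The observational comparison shows a sizable gap favoring the drug even though the drug does nothing; randomization makes the two arms alike on the unobserved health factor, and the gap vanishes. Adjusting for measured covariates only helps if the confounder is among them.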
Russ: And I guess the--I want to just home in on this issue that you call mechanism or the underlying science, which of course we have an imperfect knowledge of, also, in both medicine and economics. We have a pretty good idea, given what you mention in the data about the differences between male and female heart attack rates, that estrogen in women probably protects them. Somewhat, perhaps-- Guest: Right. Russ: We don't know for sure. Because we don't really know how the heart exactly is affected by it. But it's presumed that that is the case. So, then, when you give people estrogen, that should reduce the probability of a heart attack. The problem, of course, was that giving people estrogen may not be the same as producing estrogen. That's one problem. And the other problem is we don't really understand the mechanism. But in a lot of these cases, it seems like the mechanism itself is the thing we don't really understand. My question is: Do we ever get at that underlying mechanism? If we did, we'd find a much better way of finding things to help. Guest: That's so true. And I think if there's one thing that I took away from the work that went into this book, it was the humility that was sort of forced upon me, when I saw how many times we are absolutely sure something will work. Because it should work. We understand the mechanism; we think we understand the biology. And boy, we know a ton about how the human body works. And then when everything lines up to say, this intervention should work, and that's part of why it's been adopted; and then real empirical data shows that it doesn't work--you know, it's shocking. I think the fact is the human body is so complicated and there are so many different things that impact on how a medication or a procedure works or doesn't work, that--I don't know. I mean, I hope we'll know it eventually. But right now, we don't. And we need to go with empiricism more than the biochemical rationale that underlies some of these decisions.
15:48Russ: You talk about surrogate end points. And that's one of the challenges that this relates to. Explain what those are and why that's a challenge. Guest: Sure. So, you know, I think of the dichotomy as surrogate endpoints and clinical endpoints. And clinical endpoints are the things that you care about: How do I feel? Am I going to live longer? Am I going to avoid having a stroke? Like, really, really key things. Surrogate endpoints are stand-ins for those. So, it might be: How high is your blood pressure? How high is your cholesterol level? How high is your average sugar? Things that you would have absolutely no idea about those numbers or those values unless you see a doctor. They don't bother you at all. And so when you come up with a new therapy, you really want it to improve a clinical endpoint. You know, you want this new pill to decrease the number of heart attacks people have. But, boy, to show that, you need a bunch of people; you need to follow them for a long time. And that's expensive. So it's a lot easier to pick a surrogate marker as a stand-in, like blood pressure, and say, 'Well, you know, we've got this new pill, and it lowers blood pressure; and we know that people who have higher blood pressure are at higher risk for heart attacks.' So, the thinking goes, lowering blood pressure should lower heart attacks. And we'll accept this therapy, because we know it affects the surrogate marker in a good way. Russ: So you mention one blood pressure medicine that in clinical trials did not have any clinical endpoint effect. It did affect, of course, the surrogate endpoint. It did lower blood pressure. But it didn't affect-- Guest: Sure-- Russ: strokes, heart attacks, etc. Is that true of all blood pressure medicine? Guest: Um, so that's not true of all blood pressure medication. I mean, most of the blood pressure medications we use, we do actually have hard endpoints on.
And we've shown that they've decreased things like heart attacks, strokes, even some of the overall mortality. But that's not true for all of them. The one that we discuss in the book is Atenolol, which was marketed as Tenormin for a long time. And I'm actually attached to that, because back in medical school we had to write this personal pharmacopeia, which was, you know, the 20 drugs we were most interested in, which we'd write all about. And Atenolol was the first drug in my personal pharmacopeia. Russ: Hard to let it go. Guest: And it turned out that Atenolol does do a really wonderful job of lowering blood pressure. As good as most of the other blood pressure medications we use today. But when you brought together all the studies in which it was compared to a placebo, it turns out that it doesn't improve mortality. Doesn't improve the risk of heart attacks. It may slightly, slightly decrease the risk of strokes. But there are so many other medications which control the blood pressure just as well that actually improve all of those real clinical endpoints. Russ: But to get back to this mechanism issue: We don't understand why it is that some medications that lower blood pressure seem to actually affect the things we care about, while this one did not. Guest: Absolutely true. Absolutely true. I think there are certainly people who know a lot more than me about that. But in the end, nobody can really predict: Will this have the outcomes that we hope it does and expect it does. Russ: So, let's go back to randomized control trials. We talked about the challenge of the confounding factors in an observational study, a so-called natural experiment, a statistical analysis of observable behavior and outcomes. Why is a randomized control trial better? What's better about it? Guest: So, a randomized control trial is really an experiment. And so, you take a group of people who you ask--ask nicely--to enroll in your study.
You make sure the study is an ethical study, approved by your institutional review board, and that there's equipoise--we don't know which is better; we don't know if the treatment is better than what we are presently doing. And then you randomize them. And half the group gets the treatment; half the group gets the placebo. And so those groups are exactly the same on average in all the risk factors that you know of. But also all the risk factors that you don't know of but that we might know of in 10 years. And so, in a really well-done randomized control trial, we know at the end that whatever difference there is between the groups once the trial has ended is due to your intervention--whether it be a surgery or a pill or a device implant. Russ: You hope you know. Because there are still issues about, as you say--you don't know everything to control for. It could be by chance that the people in the placebo group are different in ways that you don't observe. But the idea, of course, is the larger the sample, the more you hope you've dealt with that problem, because of the law of large numbers. Correct? Guest: That is true. That is true. And I can tell by the way you talk, you are a true skeptic.
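Roberts's appeal to the law of large numbers can be made concrete with a small simulation: the larger each arm of a randomized trial, the smaller the chance imbalance between arms in a risk factor nobody measured. (The 30% prevalence and the arm sizes below are illustrative assumptions, not numbers from the episode.)

```python
import random

def avg_imbalance(n_per_arm, n_trials=1000):
    """Average absolute difference between two randomized arms in the
    prevalence of an unmeasured binary risk factor (true rate 30%)."""
    total = 0.0
    for _ in range(n_trials):
        # Count how many patients in each arm happen to carry the factor.
        arm_a = sum(random.random() < 0.30 for _ in range(n_per_arm))
        arm_b = sum(random.random() < 0.30 for _ in range(n_per_arm))
        total += abs(arm_a - arm_b) / n_per_arm
    return total / n_trials

random.seed(0)
small, large = avg_imbalance(50), avg_imbalance(2000)
print(f"avg chance imbalance, 50 per arm:   {small:.3f}")
print(f"avg chance imbalance, 2000 per arm: {large:.3f}")
```

The small trial typically leaves a several-percentage-point imbalance in the unmeasured factor; the large trial shrinks it by roughly the square root of the sample-size ratio, which is exactly why even unknown confounders wash out in big, well-randomized trials.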
20:54Russ: Yeah. But well, what's fascinating is, you came up--there are a lot of issues that come up in randomized control trials in economics that are problematic, mainly because there's a big difference between physiology and the setting that an experiment takes place in, in economics. It's going to be harder to generalize. There are issues, of course. Medical issues vary by population and geography, etc. But you did bring up a couple of really interesting challenges of randomized control trials (RCTs) that I had not thought of. One positive, one negative. I loved your point--a lot of people say, 'Well, it's unethical to give people a placebo because they've got this condition and you are not helping them.' So, first, talk about--I think if I've got this correct--vertebroplasty. Talk about the degree to which they make the placebo as close as possible to the treatment. Guest: Sure. Russ: Which, I love that. And then talk about why it's actually kind of ethical--it is ethical--it's not what you think. Guest: Yeah. Vertebroplasty is a wonderful example. So, to take a step back, what vertebroplasty is. So, people with osteoporosis, with thinning of the bones--most commonly postmenopausal women, who we're picking on in today's conversation for some reason--will develop osteoporosis and then may develop what's called a spinal compression fracture. And if you picture the vertebrae in your back, a compression fracture is just when all of a sudden that collapses. Very common; the estimates are that there are 700,000 compression fractures in the United States every year. And about 280,000 of those are clinically important--meaning that people, you know, develop back pain; go to the doctor with it. For years, our only real treatment for compression fractures was pain medication and time. And people do get better from these eventually.
And in the 1980s, some radiologists came up with this kind of new idea that, 'Well, what if we take some of those with compression fracture and we inject medical cement into that collapsed vertebra?' And so the vertebra puffs up, it's stabilized; the nerves that are coming around that area get a little more room to breathe; and people should get better. And this was an approved therapy, based on some not-perfect trials, which showed that people who got this procedure felt better than people who didn't get this procedure. The real test, though, was to design a trial that had a placebo group as close to the intervention group as possible, other than the vertebroplasty itself. And it was an amazing study, done by some very brave researchers, where patients were randomized--either to have vertebroplasty or to get sham vertebroplasty. And the sham was that they took people to the procedure room. They prepped their back like they were going to do vertebroplasty. They actually opened the medical cement so the patient could smell the medical cement. And then they just injected saline into their back. And it turned out that over the first month after the procedure, all the endpoints were exactly the same between the sham group and the intervention group. No difference in pain. No difference in quality of life. No difference in activity scores. Nothing. Russ: So, that's fascinating; and I don't want people to miss the part about the opening of the cement, because it's my favorite thing. But the other point is that of course sometimes the procedure harms you. So, the placebo is great for the people who get that luck of the draw. Guest: It's absolutely true. And the placebo group--in a way is an insurance policy. And people argue this--is it ethical to do this sham procedure on people? To intervene on them in a way that has no chance of helping them.
But in fact, in the vertebroplasty case, you know, those people saved--I don't know--thousands, maybe millions of people in the future from getting a procedure which is not helpful. Russ: And as you point out, though, sometimes it's actually harmful-- Guest: Right. Russ: and it's a blessing to get the placebo. I have a lot of interesting things to say about placebo effects. That's for another time. Before I get to the other point about RCTs, I want to ask you a more pointed question about reversal. Which this is a good example of. I suspect there are people listening right now who are either patients or doctors who are either in line to receive this vertebroplasty or they actually do want it. Because one of the depressing aspects of this book is that a lot of these reversed procedures continue. Guest: That's true. Russ: I don't know if that one's totally over--everyone knows it's wrong, nobody does it any more. But there are plenty of things that you talk about in the book that continue. An example is you suggest, you say in the book, that rapid response teams don't work. Don't show any effect. The idea of creating a mobile group of people inside a hospital to respond to crises and emergencies doesn't seem to have any impact. I mentioned that to a doctor friend of mine who says, 'What? They don't?' Either he missed the study or he doesn't agree with it or--so surely some of these so-called reversals, people say, 'That's not a reversal. Ah, it's one study that didn't work. Look, it's helped my patients. I know it.' Guest: Right. Right. So, I'll say in our defense: We were very careful in what we labeled reversals. And we only labeled something a reversal if the study that overturned the practice was clearly a better study than what had supported the practice in the past. Because you are absolutely right. I mean, are there things that clearly work but then one study says they don't work? Yes. And we know from our statistics that that's going to happen.
So when we said something was a reversal, it's that the studies which had actually recommended this procedure or this intervention in the past were less robust than the ones that overturned it. Rapid response teams are, I think, a great example. And I think right now we are not really sure if they are beneficial or not. But they have been adopted far and wide. The data that say they work are generally single-center studies. So, one hospital shows that their rapid response teams work. Rapid response teams--also, boy, they make everybody feel better, because there are more people around to come running and helping. And the idea that this would be beneficial makes total sense. Russ: How could it not? Guest: The person's having a problem: anybody can call the rapid response team. The fact is that for a rapid response team to really clearly be shown to work, it needs to be shown to help patients; and you need to figure out what endpoint that is. Do you want your rapid response teams to save lives? Do you want your rapid response teams to get people out of the hospital faster? Or is your endpoint just that you want your rapid response teams to send more people to the intensive care unit? And to this point, we haven't seen that rapid response teams save lives. Russ: What are some of the psychological and monetary incentives that make it hard for doctors to admit that there is such a phenomenon for a therapy or practice they are involved in? Guest: Yeah. This is always like the hardest thing for me to talk about, being 20-some-odd years into my practice and being fully acculturated into medicine. I like to think, and I really do believe, that for the most part, when doctors are, you know, shocked by reversals--maybe when actually they argue against a practice that they've recommended being reversed--it's because they truly believe it works. They've not only invested a lot of time and energy into the practice; they've seen people who get the intervention get better.
And they think it's the right thing to do for their patients. There is a part of it, though, that you can't deny: That, boy, if you've made a lot of money over the years doing a procedure on people's knees--one you believe works, but which is also helping you and putting your kids through college--when you find out that it doesn't work, you're probably a little bit more apt to argue with it. Russ: Yeah. I don't have any problem making that argument. But it's a very common problem in economics as well. I like to argue that about half of the macroeconomists in America think they are in the top 5% of candidates to head the Federal Reserve, and that affects their willingness to criticize the Federal Reserve even if they aren't aware of it--that subtle bias. But it's there.
30:18Russ: The other issue about randomized control trials I found so interesting is that in the medical area, sometimes an RCT will be stopped prematurely because the effects seem so dramatic it would be cruel then to keep people on the placebo--or on the treatment. How does that affect the accuracy of the trials? Guest: This is another nice piece of research by another group. I'll step back a little bit. Clearly when a randomized control study is designed, we feel it's basically an ethical necessity that if one treatment is clearly coming out to be superior to the other treatment, that trial needs to be stopped. Because if our new intervention is working really well and we know it, it's unethical to keep giving people the placebo. Right? The issue is that if you are doing a bunch of studies, your studies are going to come up with somewhat different outcomes, just through random chance. And, so what we found, looking at multiple studies over time, is that studies that are stopped early tend to overestimate the benefit of a treatment. Which, when you hear it, you say, 'Yeah. Thinking about that makes sense.' It's surprising to me, because I have to say, my reaction is, when I'm listening to the radio in the morning and I hear about a new therapy and it's being released because the trial was stopped early because it was shown to be effective, I'm sort of more convinced by that. I'm like, 'Wow, this must be really good if they had to stop the trial.' But it turns out we probably should not look at it that way; and we should say, 'Well, you know, this may be one trial that was positive; but maybe there are other trials which will come out which will be negative; and maybe this doesn't in fact work.' Russ: One of the problems you talk about of course is, even when we believe--and I think you are right--that randomized control trials are better than observational studies, they are very expensive. Guest: Yes. Russ: How do you deal with that reality?
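Cifu's claim that trials stopped early tend to overestimate benefit can be illustrated with a sketch simulation: trials whose interim results happen, by chance, to look dramatic are exactly the ones that stop, so their published estimates are selected for being too high. (All parameters below--effect size, interim timing, stopping threshold--are illustrative assumptions, not from the episode.)

```python
import random

def run_trial(true_effect, n_total=500, interim=100, z_stop=2.5):
    """Two-arm trial on a normally distributed outcome (treatment mean
    = true_effect, control mean = 0, sd = 1). One interim look: stop
    early if the z-score crosses z_stop, else run to n_total per arm."""
    treat = ctrl = 0.0
    for i in range(1, n_total + 1):
        treat += random.gauss(true_effect, 1.0)
        ctrl += random.gauss(0.0, 1.0)
        if i == interim:
            diff = (treat - ctrl) / i
            se = (2.0 / i) ** 0.5          # std. error of a difference in means
            if diff / se > z_stop:
                return diff, True          # stopped early at the interim look
    return (treat - ctrl) / n_total, False # ran to completion

random.seed(1)
early, completed = [], []
for _ in range(2000):
    estimate, stopped = run_trial(true_effect=0.2)
    (early if stopped else completed).append(estimate)

print(f"true effect: 0.20")
print(f"stopped early ({len(early)} trials), "
      f"mean estimate: {sum(early) / len(early):.2f}")
print(f"ran to completion ({len(completed)} trials), "
      f"mean estimate: {sum(completed) / len(completed):.2f}")
```

Under these assumptions the stopped-early trials should report a mean effect well above the true 0.2, while trials run to completion land close to it, matching the finding Cifu describes.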
We want to make medicine better. How do we deal with the fact that the tool that we have to bring scientific technique to medicine, a true experiment, is really a problem? Guest: Right. I think not only are they expensive, but you really need some really generous people--the volunteers--to be in the randomized control trial. It may not be cheaper for the individual, but it's a whole lot easier to just take the pill that your doctor gives you rather than enrolling in a trial where there's going to be a lot more follow-up, probably a lot more monitoring. And this is something that we struggle with. We know we need more of these trials, but how do you do them in a cheap, easy way? We offer up some examples. We are big fans of the Nudge Principle. And we thought that, for a lot of decisions where we don't know which option is best, and where no patient would conceivably have some sort of predetermined reason to prefer one therapy to another, enrollment in a trial could be something that you have to opt out of. So, if you go to your doctor with sinusitis and she's deciding between two different medications, two different antibiotics, both of which we know work, we just don't know which is the most effective, and you have no reason to prefer ciprofloxacin [?] to azithromycin, why not just have that person randomized? Unless they opt out. And we could get lots of data quickly in that way. Russ: Do you think we are going to make some progress on these questions as we enter the so-called Big Data Era? One of the things that you seem skeptical about is something we've talked about on the program before with Eric Topol, which is personalized medicine--the innovations that are coming in self-monitoring and other ways of assessing, maybe, effectiveness. You are a little more skeptical on that. Guest: Right. I think that personalized medicine and using people's genetics to tailor therapy to them has enormous, enormous promise.
I think the issue, though, is that you still need to prove that therapies work. And there's even more temptation when you talk about personalized medicine to say, 'Hmmm. We know how this drug works on this gene, and so that should fix people.' Well, you know, you still don't know that till you've shown it. And in a way, personalized medicine may make randomized control trials and evidence-based medicine even more important, because we need to test each of these personalized medicine interventions on smaller and smaller groups of people, since our therapies will generalize to smaller and smaller groups. If that makes sense. Russ: Yeah. Sure.
35:27Russ: How do you deal with the criticism that your skepticism about so many received therapies and techniques is a recipe for "doing nothing"? I'm sure one of the things--and I think you write about this, and I get it all the time about economics: so, I'm skeptical of claims that the minimum wage doesn't reduce employment. And I'm skeptical, I have to confess, because I think I understand the mechanism of how incentives work. And I might be wrong. But one of my responses then is when people say, 'Yeah, but look at the data,' I want to say, 'Then you've got to accept a different mechanism than mine; and that's going to have a lot of implications outside of just minimum wage policy.' But anyway, when I say stuff like that, people say, 'Oh, so we just do nothing? We've got these people who have terrible lives, they have terrible jobs, they have terrible opportunities in the labor force.' And they're being--some people would say they are being exploited. 'And you just want to do nothing, because you are not convinced it would help.' Doctors are in even a worse position. Here's a patient in pain, maybe at risk of death, and you are saying, 'Well, we just don't know if it's going to work.' And I'm sure many practitioners--you are a practitioner, so you have to deal with this daily--would say, 'So, what am I supposed to do--just go--I'll wait till the RCT comes out that shows me what to do? I've got to do something now.' Guest: That is so well put. I think the issue in medicine is that, 1. You have to consider who you are treating, and 2. You just need to be very open with the patient. So, if you are talking about a healthy person and you are talking about a screening intervention or preventative therapy, I would say, 'Boy, you know, you need to be absolutely sure that's going to help them.' Because you are taking a basically healthy person and you are basically turning them into a patient and potentially making them sick with your intervention.
With someone who is sick, who is in pain--well, then I think the bar is actually a little bit lower. And you think about what you have to offer. You think about the likelihood that maybe it will work, and maybe it's based on observational studies; it may be that it's based on surrogate endpoints. And I think what's important is that you have an open discussion with the patient. And you say, 'Look, this is what I have to offer you. Maybe I have a well-proven older therapy and a less proven newer therapy: and these are the reasons it might work; these are the reasons that it might not. This is why maybe I'm a little bit uncomfortable about it.' And you let the patient make the decision. Just like doctors, we as patients I think have quite a breadth in our values. And I have some patients who, you know, never want to take a medication unless it's been on the market for 10 years. I have other people who are knocking on my door the day after it's advertised saying, 'I want that pill.' Russ: Yeah. And so, you mention screening. We had Robert Aronowitz on the program talking about our urges to reduce our risk. And screening is, I think, very appealing to most of us. Catch it early. But you, like he, appear to be somewhat skeptical. Guest: Yeah. I think we are brought up with the 'ounce of prevention is worth a pound of cure,' right? And there's nothing that makes more sense than screening. You find that breast cancer early, that prostate cancer early--it's got to help, right? The problem is, you know, our tests are not perfect; the diseases out there are--you know, even though we consider them common diseases, they are actually still rare. And so even with a pretty good test, when you are looking for a rare disease you are going to come up with a lot of false positives. And those false positives cause anxiety among the patients--that's probably the least impactful. They probably also lead to procedures that don't need to be done. 
And often treatment which doesn't need to be done. Our recent data from the world of prostate cancer screening says that to save a life from prostate cancer, we actually need to treat about 30, 35 people for prostate cancer. That's a lot of people being treated just to save one life. And if you are screening, you really need to take that into account. Russ: But if it's my life that you are saving-- Guest: Absolutely. Russ: there's this statistical issue there of what's a statistical life versus, you know, a personal experience. I think the question is--I'm being facetious--not facetious--I'm being, I don't know what the right word is. But the real question is: For me it's 1 out of 36 with lots of unpleasant side effects until I know otherwise. Right? We don't know who the 1 is. We're not saying it's too expensive to save the one. You're saying we don't know who the one is, and that's not a great return. Guest: Absolutely. And you are right. It's one in three for erectile dysfunction or one in three for incontinence after that intervention. So you are very likely to have the side effects. You are less likely to have the benefit. But that's where I think the patient decision-making really has to enter into it. And I feel like as long as people are well informed and you are letting them know what the data is, and there really is some chance of benefit--or a reasonable enough chance of benefit that it's reasonable to suggest it--then people probably should have the freedom to make those decisions. Russ: One of the lessons of the book for me, and we'll talk about it at the end, is educating oneself as a patient or as a potential patient is really important. Guest: Yes. Russ: And I think most Americans, maybe most people generally, we like deferring to some authority. Maybe not in other areas. But in medicine, it's like, 'Look, doctor: you're the expert; I trust you; you carry yourself so well.'
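Both of Cifu's numerical points here--false positives swamping true positives when a disease is rare, and what "treat 30-35 to save one" means for an individual--are simple arithmetic. A sketch using Bayes' rule (the test's 90% sensitivity, 95% specificity, and the 0.5% prevalence are illustrative assumptions; only the roughly-33 number needed to treat comes from the conversation):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' rule: of those who test positive, the fraction who
    actually have the disease."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A "pretty good" test -- 90% sensitive, 95% specific -- applied to a
# disease present in 0.5% of the screened population. (These three
# numbers are illustrative assumptions, not figures from the episode.)
ppv = positive_predictive_value(prevalence=0.005, sensitivity=0.90,
                                specificity=0.95)
print(f"chance a positive screen is a true positive: {ppv:.1%}")

# The number-needed-to-treat arithmetic from the conversation: if
# roughly 33 men must be treated to prevent one prostate-cancer death,
# any one treated patient has about a 1-in-33 chance of being "the one."
nnt = 33
print(f"chance an individual treated patient benefits: {1 / nnt:.1%}")
```

Even with a good test, a positive screen for this rare disease is a true positive less than a tenth of the time: the 5% false-positive rate applied to the 99.5% of healthy people generates far more positives than the disease itself does.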
One of the things that struck me about your book is you are really emphasizing, as I do, the importance of humility in my field; and you are emphasizing the importance of humility in your field. A lot of people--we don't want a humble doctor, 'I want an arrogant one. I want a doctor who can just say, this is going to work; I've done this thousands of times; there's no side-effects,' blah, blah, blah. So, there is an interesting culture there that you are encouraging a change in. Guest: And I would say, 'You need to find the doctor that you need.' I certainly take care of people who are like that, who want me to be the decider, to be very clear about what I think is the right thing to do and they will follow my advice. I have other people who want to have an open discussion, maybe an open argument, about just about every decision. And you need to find a doctor who will do it the way you want to do it. Obviously if you are someone who wants to argue and you have a doctor who wants to dictate, you are probably not going to be a very good pair. Russ: Yeah, for sure.
42:39Russ: Many economists--not all, but many economists really dislike the FDA--the Food and Drug Administration. Guest: Hmmm. Russ: This goes back to work by Sam Peltzman, who argued that the FDA kills people. It's so careful in making sure that drugs are safe, it rules out drugs and therapies that could save lives. And they raise the costs through their tests and their demands, which makes it harder to get any one drug to market. You argue for the other direction. You suggest that the FDA should be more vigilant--not so much in safety but in efficacy. So, defend that position. Guest: Yeah. My co-author Vinay Prasad, he has said to me, and I really take this as truth, is that the people who work at the FDA, those people are the most underappreciated group of people in the world, and do an incredible job. Because on the one hand they are being yelled at by people who are saying: 'You are slowing down progress. You are holding up drugs that could save lives. You are responsible for, you know, mortality, morbidity.' And on the other hand there are people like us who are saying, 'Wait. You should make sure that this absolutely works before you approve it and let people be exposed to this.' So, it's a difficult place to work. And it's a difficult row to hoe. I think what I would say is that we want to make sure that the FDA is assuring that we have data that treatments work eventually. There will be times when a therapy looks really good in terms of its ability to affect surrogate endpoints, say. And it's a drug that's really necessary, because maybe the treatments for the disease we have out there aren't so good. Now, it's probably completely reasonable that the FDA lets that drug out there and lets it start being used. But it really seems necessary that there should be studies ongoing at the time that the FDA approves that drug that will show us those real endpoints. Those clinical endpoints--the ones that matter.
What often happens is that these drugs are approved based on surrogate endpoints. And then we never get that final data, and we are left with, you know, using therapies that might work but we are not sure they do. And that seems like the wrong way to proceed. Russ: So, let me make another analogy between medicine and economics I hadn't thought of until this conversation. Which is, so in the case of the financial sector, we say: We don't want any banks to go broke, because that can lead to chaos and disaster and people lose their money, and they really don't like losing their money. So what we're going to do is we are going to insure banks' deposits so you won't be at that risk. Of course, we understand that changes the incentives facing banks--that they are going to then tend to want to be riskier because they'll still be able to attract investors and depositors, because their money is insured. So then, we have to then, of course, keep an eye on the banks and have rules about what they can invest in, and what their safety is and whether they are approved or not by a ratings agency. And of course, eventually there is an unpleasant symbiotic relationship between the ratings agency and the banks. And they tend to work together--not as independently as they are supposed to. And banks start investing in things that are actually quite risky, but look not so risky; and the ratings agencies go along because that's how they make their money. Etc., etc. So in the medical area, we've got this lovely thing, on the surface, which is third-party payment, either through health insurance or government programs of various kinds--Medicare, Medicaid. So people don't pay for their medicine. So, my interest in finding out whether this works or not is very small. If it doesn't work, that's life. Negative side-effects--well, I don't want that. So we have the FDA. What they mainly worry about is whether there are negative side effects. 
Unfortunately, that means there's a natural incentive--and I think economists underestimate this part of the FDA and [?] relationship--to keep the industry somewhat happy. Right? Unfortunately it's true that the high cost of FDA approval means that drugs take a long time to get approved; and that means a lot of drugs that might have been invested in aren't worth it any more. But, at the same time it kind of creates a cartel for the pharmaceutical industry. So, they don't have a lot of competition. Because there's this huge cost of approving a new drug. And they kind of like that. The first part, the delay, the cost, they don't like. The semi-cartel, monopoly thing: that's really great. So, to me, the FDA--of course the people involved in it day to day have a--you know, a hard job. They are good-hearted people. But the influences they are under must be subtle, much like the influences I feel the Federal Reserve Governors are under. They are coming into contact every day with people whose interests are not necessarily what the American people want them to be serving. It seems like an unpleasant-- Guest: I think that's an amazing analogy. I mean, two subtle things I would add to it, also, is that because of the cost of developing these drugs, there is a very subtle incentive that if companies feel like they are going to have to be held to what might be unreasonable standards, incentives to spend this money and develop these drugs go down. The other thing, which I thought of as you were talking about the banks: Physicians really rely on the FDA, because the FDA is in a way our insurance company. You know--the FDA takes the heat when a drug doesn't work or causes harm. Not the physician. So the FDA is getting both pressure and possibly influence in multiple directions. Russ: And you talk about that very thoughtfully in your book. In fact, let's turn to that. This isn't the FDA per se, but it's about the subtle influences that we all operate under.
And just to close the economics/financial thing for the moment: and I think the real key here is that our medical system, the way it is structured through government programs and tax deductibility of health care payments, mostly, it just changes the feedback loops that would normally be there. And it does that in the--between patients and doctors, in the investment world also. And I'll leave it at that. Guest: Absolutely.
49:18Russ: Talk about thought leaders and what you call super-specialists, because I thought that was extremely interesting, about those incentives that face those folks. Guest: Yeah. I think about this in a way that, you know, when we talk about the amount of medicine that may be wrong--you know, these are to some extent made-up numbers--they certainly don't apply to any one doctor. You may be seeing, you know, a generalist who is practicing from a very clear evidence base, because the diseases that that doctor takes care of are common things which have been, you know, studied very well. When you start to get sick and see more and more specialized physicians for the problem you have, the therapies that they recommend may be less well studied. And that's because they affect fewer people. And because you are so in need that you are probably more willing to accept therapies that are not as well studied. The other thing is that very often those specialists in medicine today--and it's one of the reasons why American medicine is terrific--are people who study such minute areas of medicine that they begin to understand just about everything that's known about them. And it makes them more willing, I think, to adopt therapies based on what they know. Because they are experts. And they feel like it's foolproof. Adding to that, those people are often most involved in the studies; they are most involved with the companies that are developing drugs and devices. And so those are probably the people you want to see with some of these problems, but maybe the ones whose therapies might be most prone to reversal. Russ: Talk about their roles as consultants and influencers. Guest: Yeah. So it's interesting. So, with many of those people--it's hard to generalize on this. Russ: Of course, they are all fine people that would never, ever have any psychological influence. Okay. Put that aside. Go ahead.
Guest: So, clearly, to become an expert in your field in today's world--where so much of our drug and therapy trials are funded by industry, because the NIH (National Institutes of Health) can't fund everything and certainly is not going to be developing every new drug--very often these super-specialists have some sort of affiliation with industry. They may be consultants on trials. They may actually be running the trials that are funded by drug companies. Or they may believe strongly in a new medication that works and end up being a spokesperson for these medications, either paid or not. And you can imagine--you are hearing all those possibilities--that it probably gets harder and harder to remain completely unbiased in your recommendations and your thoughts about these if you begin to have a financial stake in a medication. Russ: So, say, just hypothetically, somebody who worked for Goldman Sachs goes and works for the government, knowing they are going to go back and work for Goldman Sachs--I'm sure it doesn't have any effect. Because they only care about the public interest. Absolutely. And that's a completely hypothetical example. Unfortunately there's more than one. [?] that damaging. I'll give you Citibank, too, if we need to. Anyway, let's move on.
53:08Russ: Now, one of the--the book is full of interesting analysis of the current state of this problem of medical reversal. Toward the end of the book you speculate about how we might create a different system of medical education that would change things, and of medical research, the way it's structured. Sketch out just a little of that--we don't have time for the full thing; I encourage people to read the book. What are the biggest changes you would make to medical education if you were a czar? And what are the odds anybody is going to listen? Sorry about that last question. Guest: There are a lot of things standing in the way of this. But our idea--and it's an idea which Dr. Prasad, my co-author, actually published first, though it's certainly been a thought for a long time--is that so much reversal happens because 'We think something should work, and so we're going to adopt it before we know that it actually does work.' And one of the reasons for this is how medical education is structured. We learn the biochemistry, the physiology, the pathophysiology as the very first things in medical school. And over the first two years we kind of get convinced that everything works mechanistically the way we think it does. And it's only in the later years, when we are introduced to patients and actually making patient care decisions, that more of the empiricism comes into play. And so our idea is: Could we flip that around? Could doctors be trained in evidence-based medicine at the beginning? And be presented with patients and cases, and then be asked to research: What is the best treatment here? Why is that the best treatment? What's the evidence base behind that? Then, certainly, it's crucial that doctors understand pathophysiology. It's crucial not only for taking care of patients but for moving the field forward, because you can't come up with good research questions without that knowledge. 
But maybe that should come later. And so you'd be seeing that sort of science through the window of what you've already learned about taking care of patients. Changing medical education is difficult. We've been teaching medicine, you know, with some tweaks, pretty much the way we were doing it a hundred years ago. But there are more and more people who are realizing the flaws in what we do, so I'm hopeful there will be changes. Russ: One of the issues we've talked about lately here at EconTalk is what I would call evidence-based economics--which, who could be against it? The challenge, of course, is distinguishing good evidence from bad evidence. That's always the problem. And I think about the analog of flipping the practice in economic education: I'm not sure I want people--yes, you should look at data and you should look at how the world works. The question is how do you do that in a way that doesn't lead to overconfidence and other problems. And it's very hard. And people say, 'Well, let the data speak for themselves.' But the data often don't speak for themselves. I'd say they never do. You really have to have some kind of theory in the background. Do you agree with that? Guest: Yes, I do. I do. One of the most interesting things that I've experienced, talking about this book and the research that went into it, is that there are some things where--I've thought a lot about an article, and I've been trained for 20 years on how to think about these articles, and I really truly think it says one thing, and someone that I'm talking to really truly thinks it says something different. And when we get down to it and discuss it, it's often easy to figure out why we are looking at it from different viewpoints. But these are honest arguments; and usually I think both of us end up being right. 
And so, although evidence-based medicine, and probably evidence-based economics, is the way to go, you still have to have people thinking about this beyond the evidence and agreeing about what the evidence means. Russ: Yeah, for sure. How has the book and the research that you've done to write it--how has it affected your clinical practice? Does it weigh on you, day to day? Do you feel it? Is it different than it was two years ago, five years ago, when you are with a patient and dealing with a problem? Guest: I think it has. I mean, as you can imagine, talking to me, I wouldn't say I'm a nihilist, but I'm a bit of a skeptic. And I'm generally a less-is-more type practitioner. What I think it's done is, on the one hand, it's made me very aware of that. And so I think hard about whether the treatments I'm recommending are truly beneficial to the patient. Interestingly, in the other direction, it's made me recognize my bias very clearly. And it's made me careful about applying that bias to my patients, who may have a very different bias. So I admit it to them freely: 'Listen, this is my value system; this is how I recommend things. And I need to hear what yours is so I can change my thinking in a way that meets what you want.' And it's been educational in a way that I would not have expected before beginning this effort. Russ: Yeah; I assume it's created some mindfulness, maybe, about yourself that maybe wasn't there before. That's really, really interesting.
58:53Russ: I'm a big fan of empowering consumers and patients; and I'm not a big fan of the nanny state and lots of other things--especially regulation--that are supposed to help people but actually don't, or, even worse, were designed to help somebody else but were packaged as a way of helping consumers, twisted through some political process. But when I talk about that, especially in medical areas, people say, 'Well, people just aren't smart enough to make their own decisions. They need doctors to tell them what to do.' And I'm curious--I'd like to hear this--let's close with this--your advice. You write about it quite eloquently in the book. But for those who don't have the book yet: What should we do as patients, given the reality that the world and the body is a complicated place? How should this knowledge of medical reversal affect us--how do we deal with it in the moment, when we've got challenges? And how can we educate patients to be better at dealing with that? Guest: Yeah. That is a struggle. The one big thing I really encourage people to do is to just feel comfortable asking questions when you see your doctor. I take care of lots of people who I know from outside my office, and I can't tell you how many incredibly smart, brave people who never shy away from anything in the rest of their lives come into the office and, all of a sudden, you know, kind of clam up and don't ask the questions they should be asking. And I think that's a little bit because they feel, 'This is medicine; I don't know anything about this.' There's a little bit of that power differential of someone sitting on one side of the table in a white coat and someone on the other side in their street clothes. Russ: Or a paper gown. Guest: Right. Or a paper gown. Which is torn in the back, probably. And so, it's important to arm yourself with the real questions; and those real questions, to begin with, are not: 'What are the side-effects of this medication? 
Will my insurance pay for this medication?' It's: 'Does this medication work? How do you know it works? How likely is it to benefit me if you give it to me? What are the other options to the care plan that you are outlining here?' And then maybe even ask the doctor to argue the other side of the coin for a second, so you can sort of figure out what you need to do. We as doctors are certainly trained to be open to these discussions. This entire generation of doctors practicing now has been brought up with the idea that patient autonomy is key and the paternalism of the past is gone. But, you know, we don't always practice what we learn. I find that when we are really forced to practice what we've learned, we don't know what to do. So, you know, whenever there is something like you talked about--prostate screening, or mammograms between 40 and 50--the guidelines are: Well, we should definitely discuss this with the patient. And because-- Russ: Now what-- Guest: Because those are the places where we really don't know what to recommend. So we put it on the patient to make the impossible decision. Russ: Isn't the challenge in that situation sometimes that you are visiting the wielder of a hammer, and you've got a nail? Or you don't have the nail, and they are going to use the hammer anyway? 'You need this procedure.' I think it's one of the biggest challenges we face--and again, as you point out in the book, it's not malicious. It's not sinister. But so often specialists have come to believe their therapy or surgery works, when in fact maybe it doesn't work so well. And in that situation, I think it's a very difficult challenge for the patient. Guest: Yes. I mean, I'm a general internist. I'm a generalist. And I think that's why those people are key to help you find the right person. Because it is true, you know: the specialists who we rely on every day really do look at things through, you know, the glasses of the hepatologist[?] 
or, you know, the cardiologist; and that's what they think about. Which is really necessary if you need those people. But maybe not for everything. The other problem is that, you know, as a patient, you can always find someone who will look at your health care the way that you do. So, if you want something done and the first few doctors you see say it's not the right thing to do, you can always find someone who will do it. So you need to find a doctor you trust, who you think really has your best interest at heart and is going to steer you to the right people when they can't care for you. Which is, honestly, frequently.