Robert Aronowitz on Risky Medicine
Nov 9 2015
Should women get routine mammograms? Should men get regular PSA exams? Robert Aronowitz of the University of Pennsylvania and the author of Risky Medicine talks with EconTalk host Russ Roberts about the increasing focus on risk reduction rather than health itself as a goal. Aronowitz discusses the social and political forces that push us toward more preventive testing even when those tests have not been shown to be effective. Aronowitz's perspective is a provocative look at the opportunity cost of risk-reduction.
RELATED EPISODE
Adam Cifu on the Case for Being a Medical Conservative
Physician and author Adam Cifu of the University of Chicago talks about being a medical conservative with EconTalk host Russ Roberts. Cifu encourages doctors to appreciate the complexity of medical care and the reality that many medical techniques advocated by...
RELATED EPISODE
Eric Topol on the Creative Destruction of Medicine
Eric Topol of the Scripps Research Institute and the author of The Creative Destruction of Medicine talks with EconTalk host Russ Roberts about the ideas in his book. Topics discussed include "evidence-based" medicine, the influence of the pharmaceutical industry, how...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Pseu
Nov 9 2015 at 12:46pm

re: looking at the evidence

“And, more than anything else, when you hear this conversation, what I’d like listeners to take away from it is: Educate yourself. Look at the numbers, yourself. And if you can’t look at them yourself, get somebody who, if you are not skilled enough or you don’t know enough to look at them, get somebody who is thoughtful to look at them.”

This sounds great, and perfectly reasonable. But when the numbers themselves are fraught and complicated and contradict each other so much that even people professionally trained to look at such numbers aren’t sure what to do, what is an average person going to gain by plunging in himself? One hopes that the doctor will be the skilled, thoughtful person to help guide the patient to a reasonable decision. (Although of course there are all sorts of incentives a doctor has that don’t necessarily point in this same direction.)

I don’t know what the answer is.

For huge, consequential decisions like “do I undergo this hazardous surgery?” maybe it does make sense for the patient to wade into the numbers himself. (Although someone in such a situation isn’t well-placed to be particularly rational – I know I wouldn’t be.) For smaller, less risky decisions like “should I have this non-invasive screening done” it seems like the rational thing to do is generally to rely on the doctor’s recommendation.

There are so very many decisions to be made in life that it’s impossible to fully research and understand all the factors in every decision. Maybe “should I have the PSA test done?” is one on which I should remain rationally ignorant.

In any event, this was a very thought-provoking discussion and I’m glad I made the decision to listen to it.

Greg Linster
Nov 9 2015 at 12:56pm

First off, thank you Russ for the excellent interview! I just purchased the book.

Like Russ and his wife, my wife and I faced the emotionally overwhelming situation of deciding whether to have a C-section based on fetal heart monitoring during my wife's labor with our first child two years ago. At the time, I was aware that C-section rates were high and that many of them were likely unnecessary. My wife and I were both adamantly against having one unless it meant saving a life or preventing severe harm. We waited as long as possible to see if the heart rate would come back to normal during the labor, but it didn't. We opted for the C-section at the recommendation of multiple physicians and out of fear of causing damage to our unborn child.

Perhaps it’s a case of “The Elephant in Green Pajamas Problem”, but both my wife and daughter are thriving today. Even though I knew many of the C-sections were unnecessary, I don’t think I could have psychologically lived with the outcome had something terrible happened to either my wife or child. The more information I had, the more culpability I felt for neglecting an intervention.

Not all cases of risk analysis require making a decision in the heat of the moment like this, but almost always they require making decisions with imperfect information. If you can’t live with a bad outcome, and would feel morally culpable for neglect, that changes the dynamics of the intervention decision too.

Russ Roberts
Nov 9 2015 at 1:54pm

Pseu,

I agree with your point, which is why I added the postscript. Hope you heard it at the end.

Kent J. Lyon, M.D.
Nov 9 2015 at 2:56pm

Dr. Aronowitz reveals his egregious ignorance (and not inconsiderable arrogance, certainly no appropriate clinical humility; fortunately, he is no longer treating patients; unfortunately, he is promulgating misinformation, one of the problems of academic medicine) in stating that the Women's Health Initiative gave a clear-cut answer to a clinical question. That is completely untrue, and it raises skepticism about the rest of this podcast. He is promulgating false information.

The WHI was one of the worst studies ever done. It was stopped early when trends were observed, but statistical significance was not achieved. It was then presented as if statistical significance had been achieved. Beyond that misrepresentation, the study was extremely poorly designed. While it was interpreted as a study that applied to early postmenopausal women, the average age in the study (which included starting treatment with hormone replacement in women who had not been on hormone replacement) was 62. Many of the women were in their mid to late 60s. The conclusion of the study, that HRT should not be used except to treat severe menopausal symptoms for a brief period perimenopausally, is a complete misconstruction of the study. That it cost $750 million to undertake, and subjected participants to HRT when that was contraindicated, is problematic indeed.

Further, subsequent research has invalidated the conclusions drawn from the study, in that the use of HRT early in menopause has many opposite effects (benefits) compared to starting HRT a decade or more after menopause. You are promoting disinformation. This is one problem with these podcasts--they are uninformed, misleading, and cause harm by misrepresentation that is not constrained by any reality. Further problems with the WHI are that women continued smoking while on HRT in the study, which is malpractice. Data on this have never been released. Further, such factors as obesity were not adequately analyzed. Those of us in medical practice routinely inform our patients that the WHI does not apply to peri- or early postmenopausal women.

I would characterize this current podcast as uninformed, misleading, and harmful to the public. Dr. Roberts is obviously incapable of appropriate vetting of information, and assists in promulgating error. This "expert" doesn't understand his own area of research.

Jeff
Nov 10 2015 at 8:35am

This podcast urged me to consider all the outcomes of a medical treatment and their likelihoods, but it did a poor job of developing the skepticism that Russ and the guest have been captured by.

The mammogram example was badly developed. The listener is told out of 10,000 women, 10 will be saved by the test and 6,000 will receive false positives. That’s a lot of false positives, but the podcast never pursued the harm imparted by the false positives. Anxiety will pass and become relief. An infection is the worst outcome of a biopsy. There’s a large quantity of harm being done from false positives, but the quality seems very, very low/benign.

Russ and the guest raised the specter of unnecessary mastectomies, but we weren't provided any data on it. Its quality of harm is high, but the podcast left the quantity of harm unknown. I'm left unable to evaluate their viewpoint, so I reject it.

10 lives saved? Quantity of good done is low, and quality of good done is infinitely high from the patient’s perspective.

A patient is right to pursue preventive (“risk reduction”) treatments if the harm from false positives can’t be enunciated. It wasn’t here. Did anyone hear it otherwise?
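Jeff's quantity-versus-quality framing can be made explicit as a severity-weighted tally. Here is a minimal sketch in Python; the counts are the ones quoted in his comment, while every weight is an invented placeholder (the episode supplies no severities), so the output illustrates the structure of the comparison rather than a real answer:

```python
# Toy severity-weighted tally per 10,000 women screened, using the counts
# quoted in the comment above. Every weight is invented for illustration;
# the point is that the conclusion hinges on weights and counts we were
# never given.

outcomes = {
    # name: (count per 10,000, weight: positive = benefit, negative = harm)
    "lives saved":              (10,   +1000.0),  # low quantity, very high quality
    "false positives":          (6000, -0.5),     # high quantity, low-severity harm
    "unnecessary mastectomies": (None, -100.0),   # high-severity harm, count unknown
}

net = 0.0
for name, (count, weight) in outcomes.items():
    if count is None:
        print(f"{name}: count unknown -- cannot be tallied")
        continue
    net += count * weight
    print(f"{name}: {count} x {weight:+.1f} = {count * weight:+.0f}")
print(f"Net (excluding unknowns): {net:+.0f}")
```

The unknown mastectomy count is exactly the term that blocks the evaluation, which is Jeff's complaint.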

Adam Wildavsky
Nov 10 2015 at 12:00pm

Thanks for another informative episode!

Some of Dr. Aronowitz's perspective was anticipated by my father's Searching for Safety, published in 1988. His four-page article on Risk for EconLib is in large part a summary of the book.

Mauricio
Nov 11 2015 at 8:13am

Agree entirely with Jeff. I understood the point about the creation of sometimes unnecessary anxiety, and it should not be taken lightly. However, it is difficult to assess the impact of false positives and the resulting preventive treatment, especially when the treatment works for a true positive.

Robert Aronowitz
Nov 11 2015 at 11:27am

For the best review (follow links) of the evidence for and against “hormone replacement therapy” for chronic disease prevention take a look at:

http://www.uspreventiveservicestaskforce.org/Page/Document/UpdateSummaryFinal/menopausal-hormone-therapy-preventive-medication.

This U.S. Preventive Services Task Force report identified 9 randomized controlled trials that produced relevant evidence, but it depends heavily on the two Women's Health Initiative (WHI) trials because of the number of women studied and the duration of study. The USPSTF gives a (failing) grade of "D" to the recommendation to use these drugs for chronic disease prevention because the preponderance of evidence is that the harms outweigh the benefits. My comments on the show were primarily directed at the prior consensus, from limited and easily biased observational studies, that HRT reduced the risks of chronic disease, when the WHI showed that there was no cardiovascular benefit (in fact there was an increased risk of strokes and a trend towards other bad cardiovascular outcomes) and identified a series of other harms which outweigh any benefits in reduced fractures.

John Saunders
Nov 11 2015 at 2:26pm

I was surprised no mention was made of the work of Gerd Gigerenzer in communicating risks of screening, and other health interventions (see his recent book Risk Savvy, for example), or the Harding Center for Risk Literacy, which he directs. He promotes the use of ‘natural frequencies’ to communicate risks (whole numbers, as in your mammography example, as opposed to relative % changes), and has shown their efficacy in risk communication.
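To illustrate the translation Gigerenzer advocates, here is a minimal sketch. The inputs are assumptions chosen for illustration: a baseline of 72 breast cancer deaths per 10,000 unscreened women and a roughly 14% relative mortality reduction, consistent with the figures Russ quotes in his reply further down the page:

```python
# Translate a relative-risk headline ("screening cuts mortality by 14%")
# into Gigerenzer-style natural frequencies ("10 fewer deaths per 10,000").
# Inputs are illustrative assumptions, not data from this page.

def natural_frequencies(baseline_deaths_per_pop, relative_risk_reduction):
    """Return (deaths without screening, deaths with screening, deaths averted)
    per the same population the baseline is quoted for."""
    without_screening = baseline_deaths_per_pop
    with_screening = baseline_deaths_per_pop * (1 - relative_risk_reduction)
    return without_screening, with_screening, without_screening - with_screening

without, with_, averted = natural_frequencies(72, 0.14)
print(f"Per 10,000 women: {without} deaths without screening, "
      f"{with_:.0f} with screening, {averted:.0f} averted")
# -> Per 10,000 women: 72 deaths without screening, 62 with screening, 10 averted
```

The same effect sounds dramatic as "14% fewer deaths" and modest as "10 fewer per 10,000"; making those two formats interchangeable is the point of natural frequencies.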

David Spiegelhalter has a great example of this approach to risk communication for breast cancer screening here: http://understandinguncertainty.org/visualisation-information-nhs-breast-cancer-screening-leaflet which illustrates visually some of the points made in the podcast.

Thanks for all the great podcasts!!

Tim Etherington
Nov 11 2015 at 3:15pm

Outstanding episode. Really, very thought provoking.

The epilogue left me puzzled; it felt like it had undone most of the episode. We started by saying that the research showed that there was little actual benefit from regular mammograms, but then it turned out that the research was inconclusive after all. Could be more beneficial, could be less, could be as stated. That means that doctors are making recommendations based on poor data and that good data are not actually available. So is it better to make a decision based on bad information or to make no decision?

It seemed to me that at that point we were back to the original premise: risk management is questionable. It doesn't matter if the mammogram research is right or not; the reason to do it is to manage risk.

Am I missing something?

Mark Peterson
Nov 11 2015 at 7:00pm

Russ, as a practicing internist and regular listener, I loved this episode. I think that most practicing physicians do understand the realities and limitations of a lot of these screening tests and interventions. What I don't hear brought up in discussions like this is the psychological state and risk-aversion behavior of the physician. What is a doctor's worst nightmare?
Patient dies. My fault.
When a patient has newly diagnosed metastatic prostate cancer that they will die from, and I never checked a PSA, no amount of discussion about false positives, over-diagnosis and over-treatment, etc. will change the patient and family's perception that I, as the doctor, blew it.
Sure, I can have the lengthy discussion about pros and cons every time I order the test, but most patients are not sophisticated enough to make an "informed decision." So I totally get your doctor's ambivalence: I tell the patient it is an almost worthless test, but I go ahead and order it anyway.

Daniel Barkalow
Nov 11 2015 at 8:10pm

One thing I thought was missing from this discussion was placebo and nocebo effects; people’s expectations about their health have significant and large effects on their outcomes, and tests which change these expectations will therefore have consequences even if they don’t have any direct physiological effects.

Kevin
Nov 12 2015 at 1:01pm

I am a PhD in epidemiology and a physician who currently works in cancer care. I give that so people can appreciate my background and my bias.

Dr. Aronowitz's perspective is helpful as a counter to some of the trends in medicine. His passion in conversation no doubt outruns his writing, and I appreciate that can be the case. But he unfortunately mixes things that are completely unknown and trendy with things that are well established with the highest level of evidence, leaving the non-physician listener with a sense of medical-knowledge nihilism and this medical listener disappointed.

Dr. Aronowitz no doubt realizes but does not discuss two competing problems in public health – the population and the individual. Benefits to the individual and to the population are often in tension – most particularly in preventive measures. This is a well-known concept, and I have hope that further increases in our knowledge will reduce this gap, but it is likely to remain for some time.

Dr. Aronowitz criticizes the way that HRT was introduced and imagines that somehow it was driven by drug companies. He then seems upset that the normal science step was taken to do a randomized controlled trial (imperfect, as attested to above) which disproved the prior evidence. He calls for more evidence throughout, but the history of HRT is something he should celebrate. Preliminary data showed a benefit – we got more data and collectively decided it was wrong. That is science and progress and the model we want throughout medicine. Waiting until we have perfect evidence is not realistic for real human beings concerned about their health.

Risk factors (my PhD topic – oh joy) are indeed easy to produce and difficult to interpret. However, they are also testable hypotheses. So while I don't recommend people listen to health news, I do know that many of the "risk factors" have been tested. Framingham introduced 2 big risks among other things – HTN and blood lipids. We have evidence from robust clinical trials that modifying both reduces heart mortality. Complaining about preliminary evidence while not acknowledging the very strong evidence we have is sloppy.

On screening – it's not obvious when women should begin mammograms. The American Cancer Society still recommends beginning at 40, with MMG every other year. There is very good data for MMG. Also, there is good data showing that breast cancer mortality has dramatically declined where MMG is used. The fuss about whether it should be started at 40 or 50, or done once a year or every two years, is overshadowed by the dramatic improvements in survival that have been realized and that in many studies are attributed to MMGs.

Oh, PSA... should you do PSA? Here most of the arguments now swing toward the claim that to save the population we should sacrifice the individual. We have done 2 giant PSA randomized controlled trials. The US trial failed because crossover completely ruined the ability of the trial to detect results (a huge portion of the men in the no-PSA arm got a PSA). The European study showed that PSA reduced cancer mortality. Further, the great trend in prostate cancer has been to observe early prostate cancer. Most men with early prostate cancer now get no treatment, further reducing the toxicity of PSA. Men with more advanced prostate cancer die of prostate cancer, and prostate cancer still kills a huge number of men every year.

As I said, my current treatment of cancer patients makes me favor screening. My epidemiology training makes me wary of the data and sympathetic to the discussion here.

Finally, I appreciated Dr. Roberts' final comments on the data. For my training and certification I had to intimately know about 300 studies – their relation to one another, their weaknesses, their implications for cancer care. I agree patients should be skeptical and "look at the data," but it is a truly daunting task. It can be done, but don't expect to acquire the same knowledge your doctor did in 7 years in a few days on the internet. I don't know much about repairing cars, accounting, or economics beyond folk-knowledge. I don't expect my patients to know much about their disease, but I am eager to answer and address their questions.

Russ Roberts
Nov 12 2015 at 5:19pm

Jeff and Mauricio,

I encourage you to listen to the postscript (perhaps you missed it) and to look at this chart from the Journal of the American Medical Association that I discuss in the postscript. So ten lives are saved out of 10,000 women getting annual screenings between ages 50-59. In addition:

173 women survive whether they were screened or not
62 women of the 10,000 die even with screening.
57 women are overdiagnosed.

So the odds of death go from 72/10,000 to 62/10,000. In return you get a 61% chance of a false positive, with the anxiety and often an unnecessary biopsy. You get a 57/10,000 chance of overdiagnosis, which I assume leads to some form of surgery. That doesn't strike me as a great case for mammograms for women in their 50s.
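To spell out the arithmetic in those figures, here is a minimal sketch; the inputs are exactly the numbers quoted above, and "number needed to screen" just means the reciprocal of the absolute risk reduction:

```python
# Arithmetic behind the JAMA figures quoted above, per 10,000 women
# screened annually in their 50s.

population = 10_000
deaths_without = 72         # breast cancer deaths per 10,000 without screening
deaths_with = 62            # deaths per 10,000 with screening
overdiagnosed = 57          # treated for cancers that would never have harmed them
false_positive_prob = 0.61  # chance of at least one false positive over the decade

lives_saved = deaths_without - deaths_with          # 10
arr = lives_saved / population                      # absolute risk reduction
nns = 1 / arr                                       # women screened per life saved

print(f"Absolute risk reduction: {arr:.3%}")                 # 0.100%
print(f"Number needed to screen per life saved: {nns:.0f}")  # 1000
print(f"Per life saved: {overdiagnosed / lives_saved:.1f} overdiagnosed, "
      f"{false_positive_prob * population / lives_saved:.0f} false positives")
# -> Per life saved: 5.7 overdiagnosed, 610 false positives
```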

HOWEVER.

As I point out in the postscript, the improvement in the probability of death is based on older studies with older technology. The newer technology is evidently more effective. And I don't know if the overdiagnosis numbers and the other numbers in that summary are reliable either.

Here is another very interesting summary of the evidence from the UK's National Health Service. It looks at the costs and benefits of getting a mammogram every three years for women between the ages of 50 and 70. It is very well done in the sense of presenting the risks on both sides of the decision of whether to get a mammogram. I have no idea whether the numbers are reliable. Read this elegant summary of the data from David Spiegelhalter. It makes it clearer than I can in words that there is some small reduction in the probability of dying from breast cancer in return for an increase in treatment, which is presumably unpleasant in various ways.

Again, it is not clear how reliable the probabilities are.

What is interesting about the National Health Service's approach is that they present the evidence and encourage women to read it and make their own decision. There is no recommendation either way. This talk by Spiegelhalter (go to the 24-minute mark for the discussion of mammography) has a nice discussion of that approach. I thank listener Ki Lee for sending me the video.

jw
Nov 14 2015 at 9:28am

A tangential effect of medical testing:

In my state of NC, ten years ago a (Democratic) Speaker of the House forced through a law that every child must have an eye exam before being allowed to begin school. He was later convicted of receiving blank checks from the optometrists' lobby to pass the law (this is not a figure of speech; they were literally blank checks).

Yet the law remains and costs the citizens of NC millions per year (as does the law the optometrist lobby passed mandating a two-year limit on eye prescriptions, forcing needless eye exams).

jw
Nov 14 2015 at 9:38am

Russ,

What that chart also doesn’t (or can’t) tell you is how many of the cancers found would have been found anyway (and the associated positive or negative outcomes) without a mammogram.

jw
Nov 14 2015 at 10:05am

Kevin,

WRT lipids in the Framingham study, there is a lot of evidence that statins are useless in prolonging life in anyone except men under 65 with a prior cardiac event (they do reduce deaths by heart attacks, but NOT overall mortality).

Dr. Lyon,

I am sure that there are problems with the unnamed studies that you cite that show benefits for early HRT treatment. I know this because there are problems with ALL medical studies, even RCT’s (especially with respect to statistical power). I hope that other doctors do not accuse you of malpractice for treating patients based on those imperfect studies.

This week's guest briefly mentions the increasing understanding that the ADA guidelines may have been wrong for over 40 years. It's even worse: they may have been the CAUSE of our current obesity epidemic.

For some excellent examples of critiques of popular health/diet “studies”, please see The China Study and anything on Zoë Harcombe.

Science marches on, but studying humans will never be the same as studying physics.

Ayman Chit
Nov 14 2015 at 4:35pm

It's a shame that after an episode on evidence-based decision making, the author concludes by criticizing the pharma industry based on an "anecdotal" example.

Robert Swan
Nov 17 2015 at 4:51pm

Another enjoyable interview, and the comments have been interesting too. I suppose the postscript reflects the hazard of interviewing someone you agree with (though the postscript seemed to me to replace “LOOK AT THE EVIDENCE” with the much more measured “look at the evidence”).

While I too agree with most of what he said, Dr Aronowitz showed his feet of clay (namely confirmation bias) in talking approvingly of the Women’s Health Initiative’s conclusions on HRT. I looked quite closely at it when it came out. It was just on the 5% level of significance. There were also “benefits” that could be attributed to HRT at that same significance level. All the hallmarks of a chance finding by data dredge. And on such a flimsy basis they stopped that arm of the study!

I agree with Kevin’s comment contrasting outcomes for populations and individuals. The doctor who says something like “there is a 24% chance you have disease X” shows himself to be innumerate. You, the individual, either have disease X or you don’t, there’s no chance about it.

I certainly won’t take our host to task for describing epidemiology as an “intellectual cesspool”. It wasn’t always so; it had good results with smallpox and polio. Its triumphs have been in sanitation and vaccination. In earlier times the benefits were self-evident; today it has been necessary to introduce strange notions like “number needed to treat” to weigh up one marginal intervention against another.
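For readers who have not met it, "number needed to treat" is simply the reciprocal of the absolute risk reduction. A quick sketch with invented event rates:

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
# The event rates below are invented for illustration only.

control_event_rate = 0.04  # 4% of untreated patients have the event
treated_event_rate = 0.03  # 3% of treated patients do

arr = control_event_rate - treated_event_rate  # ARR = 0.01
nnt = 1 / arr                                  # treat 100 patients to avert one event

print(f"ARR = {arr:.1%}, NNT = {nnt:.0f}")  # -> ARR = 1.0%, NNT = 100
```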

Vaccinations had the population-wide benefit of “herd immunity”. There are no such population-wide benefits in (e.g) statins. So why are they being doled out to millions of people who will never benefit from them? Statins have certainly been good for drug companies.

The thing of it is that there are surely potential Sabins and Salks, or Flemings and Floreys out there today. They’re probably tied up, for now, coaxing borderline significance out of their computers.

Mauricio
Nov 19 2015 at 8:29am

Russ,

Thank you for the clarification and for taking the time to respond. I listened to the postscript and it helped a lot. It is also refreshing to see how one can change the tone or degree of conviction depending on further evidence. A great example to follow.




AUDIO TRANSCRIPT

 

0:33 Intro. Russ: Before introducing today's guest, I want to alert listeners that there's a special postscript at the end of this week's episode, where I reflect on some of the empirical issues that came up during the conversation. So after I thank my guest, please stay tuned for some thoughts on risk, health, and data that are really important.
0:57 Russ: [Recording date: October 22, 2015.] ... Robert Aronowitz's latest book, Risky Medicine: Our Quest to Cure Fear and Uncertainty, is the subject of today's episode. Robert, welcome to EconTalk. Guest: Great to be here. Russ: What is risky medicine? Guest: Well, I mean the term to cover a few different things that I think characterize a lot of modern medicine. For one, it's risk-centered medicine. That is, medicine that's focused on reducing the probability of some bad outcome, as opposed to medicine that is any kind of medical intervention that's there to treat symptoms or change the path of a physiological process that's doing harm in the body. And there are a number of aspects of risky medicine that go along with that risk-centeredness. In particular, the way in which we think about the efficacy--you know, how we know something works--in medicine, has shifted in many cases from seeing the disease disappear or symptoms get resolved or people living longer lives, to looking for intermediate endpoints of reduced probabilities of some bad outcome happening: you are better because your cholesterol is 150 when it used to be 200, or your blood pressure is 110 when it used to be 140. And a third element of risky medicine--you know, I'm obviously a little bit playful with the title, since some of these things are also dangerous; but it's not my primary intention in calling the book Risky Medicine--but a third element is the fact that we live in a world where there is much more profit to be made when pharmaceutical companies and device makers and medical specialists develop interventions that reduce risk. And, what I mean by that is, in the old days--there's an anecdote that I have in the book from actually somebody else, who talks about a 1950s pharmaceutical convention of different drug companies, and someone got up and gave a speech and said, 'We've done really well with our new antibiotics but we have a very, very bad business model. We have products that immediately consume their demand. You know, people are better and they stop buying our products. We've got to figure out a better way.' And if we fast forward to the present, a better way has been figured out. If you have drugs or interventions that promise to treat risk, people--possibly the whole population--could have even some small probability of a bad outcome. And be the market for a disease. And they might need to take this drug or intervention their entire life. So, that's sort of a third element to what I mean by risky medicine. Russ: It's certainly the case that in a world where often we are paying--we are consuming products using other people's money, third party payments in the medical area, that combined with the profit motive leads to a pretty unhealthy dynamic of pushing products that people think, 'Well, it couldn't hurt. Better safe than sorry.' And your book, to a large extent--you could have called it 'The Dangers of Better Safe than Sorry.' Not a good title. But that's really, I think, part of what you are exploring here: the natural impulse that human beings have to avoid danger; and then the opportunity to have somebody else pay for a chance to reduce the risk. Not necessarily the actual effect of it. Which is what I think is your deep insight. Guest: Right. Russ: We are often not getting healthier. We are often "reducing the risk." Which, they are not the same thing, are they? Guest: No, they are not the same thing.
It's something of a complicated argument, in that whether or not an intervention works according to what we might think of as the highest standard of scientific efficacy--you know, proven in a randomized clinical trial to improve lifespan or reduce morbidity. Whether that kind of evidence exists behind a practice, it's not necessarily a reason why people take or doctors prescribe or for use in a dimension. There's often an element of what I call psychological or social efficacy at work. You know--'I'm better safe than sorry' captures some of that. That is, often things are done because, you know, they may have some stated objective benefit on health. But the underlying logic of why a product is used may have another reason. Now this is most clearly seen, however, obviously, in practices where there's lots of evidence there's not much benefit. So, things like routine fetal heart monitoring in "normal labor." You know: there have been a lot of studies that have shown there aren't really significant health benefits to, you know--you know what I'm talking about: this is when a woman is in labor and a microphone goes on the belly and the heart rate is measured and you have a continuous sort of feedback loop, hearing. And it's very reassuring to people. Of course it's reassuring only until the obstruction[?] is decelerating and it gets scared. But I'll bracket that point for a second. There's a lot of evidence that it really doesn't improve outcomes. It leads to a lot more C-sections [Caesarian sections]--many expert panels get up and pronounce this evidence and recommend, 'We must do something about reducing the amount of C-sections, and we should probably find a way of getting rid of routine fetal heart monitoring.' But it never happens.
6:54 Russ: Well, because you feel like--I found that to be a very fascinating example. It's typical--the number of examples in the book--there are other different kinds as well, but this particular kind: on the surface, there's no side effect from it. It's just--it's harmless. I mean, you're not hurting the child; you're not hurting the fetus. And similarly, when you do a blood test of a particular kind, there's no--you are already taking the blood, so if you check for the healthier prostate, there's no harm done there. But the real side effect--and I think this is one of the most powerful lessons of your work--is that that leads to, often, a chain of events that's particularly unattractive although seemingly inevitable. And it's funny--you mention the fetal heart monitor. Our first child, the heart rate dropped precipitously during my wife's labor, and our main doctor had not arrived yet, and the intern who was there said we have to prepare you for a C-section. And we weren't very happy about that. But it's not a place of calm decision-making, you know? You don't think, 'Well, I don't know.' Guest: Not at all. Russ: There was panic in the air, on everybody's part--including the doctor who was on call, on duty then. And fortunately our doctor arrived in time, before anything was done, and said, 'Oh, well, she had a contraction and the heart rate dropped; and now it's back to normal. It's fine.' And nothing happened. But we were very close to having a C-section that was probably--not probably--unnecessary and would have done additional harm to my wife. Guest: Yeah. And to sort of solve[?] that through a little bit: if you had a good outcome with the C-section--I hope you had a good outcome with a regular vaginal delivery-- Russ: We did. Guest: It's a little self-reinforcing, too. Because, of course, things worked out well; it probably was the right decision. We don't tolerate a lot of cognitive dissonance. And there's something I call in the book the elephant in green pajamas problem. Russ: Yeah. Tell that story. Guest: It's a stupid joke--I grew up in Brooklyn; we would tell these [?] each other's sense of irony, these jokes that didn't have powerful punchlines. But the story went: one kid says to the other, 'Why do elephants wear green pajamas?' And the other kid says, 'I don't know.' And the person telling the story says, 'So elephants can camouflage themselves on pool tables.' And the other kid has this puzzled look and says, 'What?' And the storyteller goes on: 'Have you ever seen an elephant on a pool table? You see: It works.' And you know, in many cases, the absence of something bad happening is putative evidence that something works. And somebody who has had a screening test and early, on the basis of that early pre-cancer diagnosis, and then surgery and chemotherapy for that problem and lives to tell the story 20 years later, feels they dodged a bullet. And there was efficacy to it. Russ: Which may be true. Guest: Well, it could be true, but that's the--I'm a big proponent of having really good evidence. Sometimes people misunderstand my cynicism about the way actual decisions get made to be an argument for letting a thousand flowers bloom, when I think, [?] we should try to find--you know, you can't model every medical decision, and there's a lot of [?] idiosyncrasy in people's bodies and circumstances that make the application of aggregate data to any one individual difficult. But those things aside, it's really important to get evidence.
I guess, you know, just to start--the beginning of your question a few minutes ago when you said, 'a blood test is only information,' or 'the heart monitor is only monitoring the baby' kind of thing. I guess I've been sensitized and my hair stands on end when I hear that kind of thing, because it's just as you said--it's not the information that's dangerous, of course; in some things like mammography you could have some radiation risk. But for the most part it's what that information does and how it triggers, like your intern in the middle of the night almost had his way, triggers some unnecessary intervention. And I guess the other thing about just information that is worrisome is that a lot of our--inasmuch as I'm making an argument that a lot of our screening tests--the fetal heart monitoring we just talked about--serve a psychological function to control our fears and reduce uncertainty, you need to ask the question: Where did the fears and uncertainty come from in the first place? You know: we all fear death and [?] some parts of the human condition, that fear of disease. But many of the things we do exaggerate or complicate those fears. And it's often those very things that also have a role in controlling fear. So, for example, screening mammography is historically behind benefit--it's the cause in many ways of the rapidly rising incidence of breast cancer diagnoses through the 1970s, 1980s, and 1990s. It detected a lot more cancers. And many more people were treated. And the prevalence of cancer increased. And at that same moment, people feared the disease more, because there's just more of it. It seemed like everywhere you look, cancer is there. But one of the antidotes to that fear is to go get yourself screened. Which produces more people with a diagnosis. So there's a kind of--and this is for a lot of screening tests, and it actually is true of the development of a lot of public health programs around that use fear as a motivator in even the early part of this century as a kind of self-fulfilling prophecy or a catalytic reaction that feeds on itself [?] things--that's something I identify in the book and most of my previous work on breast cancer that's very troubling.
13:00 Russ: So the challenge is--and I face the same issues as you do in economics, because I'm a big skeptic about some of the empirical claims that economists make and the precision of it, and the science claims about it. And then people say, 'Well, you're not interested in evidence.' 'Yes I am.' 'You are anti-science.' 'No, actually I'm pro-science. I'm in favor of good evidence. I'm just not in favor of bad evidence.' So, the alternative, when you say things like, 'Oh, this encouragement of mammography and screening, and it's crossed many other diseases, that's led to this epidemic in some ways of cancer, that was there anyway'; a lot of people say, 'Well, the alternative is just to say: Okay, so I'm not going to get tested. I don't want to know that I have it. It's better to be ignorant. Ignorance is bliss.' And of course, that's not what you are encouraging, either; and it's certainly not a scientific attitude. But I think that's, for human beings--as opposed to "scientists"--I think the real challenge we face as decision-makers is between those two poles: 'Oh, I'd rather not know, because it's going to lead to a bunch of awful stuff I don't want to have done that might not work,' versus, 'Well, I might be at risk of death; it's better to find out and solve the problem.' Guest: So, you know, there are a couple of things. First I think we need the best possible evidence there is. Which means investing in knowledge production in the form of clinical trials, especially of new preventative measures; and some kind of discipline as a society, through insurance companies or government regulation or the morality of individual investigators, to not just do something that seems like just information, as we are talking about, or that's self-evidently effective without much harm. You know, which has been the way many preventative practices, risk-reducing practices, have actually been introduced, in a kind of evidence-free way. So I think it's a kind of clear argument for a high bar of scientific evidence. At the same time, I wouldn't dismiss the, you know, bury-your-head-in-the-sand psychology of wanting peace of mind as some, you know, pull of human psychology that shouldn't be listened to. I think, when we're talking about screening tests and proactive things where medicine[?] decides we're going to find something in people's bodies, I think the actual ethics of the doctor/patient relationship and the relationship of medical authority to lay people is different than when somebody is abjectly ill with cancer and is desperate for a cure. We are basically pushing--we are recommending that people who are otherwise healthy come in for tests. They are in a good state of mind potentially around it. And there should be a pretty high bar on ethical grounds as well, I think, before we disturb people's peace of mind, that we know what we're doing. And we're not just creating--unlike, you know, a medicine that might respond in my body a little different from your body, you know, things for a cold or something--I could have some individual sense that even though in general this medicine, there's no good evidence for it, I think it works for my body. There's a certain plausibility to that, however unscientific you think that. When it comes to risk reduction, there is no, like, you know--we're talking about nothing that has symptoms. We're talking about just probabilities. And it's just this case where I think we want a really high bar of scientific evidence.
And, you know, I guess, just as a personal anecdote: My wife and I are both physicians. We had our son, born in 1992. And we had decided that screening for Down's Syndrome was something we didn't want to do, because we made decisions unlikely[?] and if it happened we would go through with the pregnancy. And we made the calculation based on looking at the data--my wife had been an Ob/Gyn [Obstetrics and Gynecology] for years--that the dangers of--you know, there were minimal benefits of just routine ultrasonography in pregnancy. They can find what we doctors call 'incidentalomas'--things that have no import, or you can't do anything about--as much as they would find anything that could be done during pregnancy. Russ: Plus there are false positives. Which are important [?]. We always forget about that. Guest: Yeah. Yeah. Anyway, we made the decision we were just going to go without routine ultrasounds. And I can't tell you how many times during my wife's pregnancy we had to, like, fight off the wand[?] that was coming: 'We need to check your dates.' You know. You know. They understood it as almost a kind of service, like teeth whitening at the end of a dental visit or something that made people go home with a picture and feel good about themselves. Or information about gender or something. I'm not defending this--I mean, this is an idiosyncratic decision my wife and I made. But I think there's something real to deciding that you don't want to medicalize some part of your life. If there isn't good data pro that medicalization, I think it's important to find loud people's voice to do it. I don't think it could have been done--I don't think we could have avoided those ultrasounds if my wife and I hadn't been physicians. And, you know, had the authority to push people away. Russ: We did--in some funny--our first child was born in 1992 also. We also made that same decision. We did push them away. It wasn't easy. And part of it's because people acted like we were crazy. It's like--again, like 'What are you, some kind of--' Guest: 'It's only information.' Russ: It's like, again, like 'What are you, some kind of medieval, anti-technology Luddite? You're primitive. Don't you want to know?' And my answer was, 'No, I don't want to know.' And they couldn't--it was puzzling to them. Partly because it was just rare. Guest: And you know, you and I talked before this time. There are certain screening programs that I think there's pretty good data for. In some situations, colonoscopy. I've had two. So, the disturbance-to-peace-of-mind thing, I want to elaborate on a little bit, because I don't think it's just a matter of the situation you, and your wife, and my wife were in, with the normal part of life--having a child--and wanting it to be as free of medical problems as possible. This issue is also a very, very powerful driver of our overdiagnosis and overtreatment of some diseases. Let me illustrate this with the problem of prostate cancer. And I'll do it in the form of a prototypical situation: which is, a man goes into his internist or family doctor's office. And often without any actual discussion of costs and benefits, a PSA (Prostate-specific antigen) test is added to "routine blood work". And the patient gets a call from the nurse that the PSA level was high; he immediately thinks that he has cancer; he comes back for a visit; gets referred to a urologist. The urologist discusses the pros and cons of the whole thing; may ask for another test.
But often it ends up going to biopsy. Biopsy today often involves an ultrasound-guided survey of the whole prostate gland, sometimes up to 20 or 25 biopsy needles, specimens sort of taken. And it's not atypical as men get older for one or more of those 25 biopsies to have a low-grade cancer. Cancer is graded by pathologists on something called the Gleason score. And there's these low-grade numbers that come up; and I know this happens a lot; I'm often called by friends asking what to do. And the urologists have come around to the fact that there's not very good data supporting radical prostatectomy, or pushing people to definitely have radical prostatectomy or radiation in this situation. That is, many people seem to limp along just fine, even though there is something called cancer in their body, as picked up by a screening test. So, the alternative to going for surgery--I know I am being a little bit long-winded here but I wanted to get to this point--is not just walking away from the urologist; rather, you are now committed in most cases to sort of a lifetime of getting your PSA tested again every six months. Often there is, you know, there is a lot of innovation in the surveillance routines, where people get repeated biopsies or repeated ultrasounds. Or they look at the free PSA or the Rapport[?]--there are complicated normograms[?] tracking the changing rates of PSA levels. And often this triggers some threshold which leads to surgery. But the thing that I have noticed is that this creates a kind of state of risk, and a state of anxiety--a feeling like the Sword of Damocles is over your head. And many people, many men, decide to get a prostatectomy not, you know, to live a longer life, but to rid themselves of the state of uncertainty that they found themselves in. Except, you know, [?] it's not a trivial small thing--not that, you know, a decision about having an ultrasound is a small one. But it becomes very consequential when the medical routines themselves create a kind of experienced state of bodily risk that involves unpleasant routines, you know, fateful visits with doctors and tests, and a very reasonable response for a lot of people is 'Let's be done with it, get the thing out.'
22:22 Russ: And not just that, but the family members are often more eager, often, than the person with the problem. Because they are afraid, too. And the Sword of Damocles is hanging over their head as well. And the way you describe it is exactly the way I've heard it described by many, many people in my family and friends when they have these issues come up--which is: Get it out of there. And unfortunately getting it out of there has consequences. It's not easy surgery. Guest: Oh, [?] Russ: But more than that, a lot of times--there's no point to it. You've got a slow-growing cancer. But it has that 'c' and it's scary. Because it's cancer. You actually mention, at the end of your book--I found it fascinating--you said that, you know, like the ultrasound, you've gone to your doctor to make sure that when you get your physical you don't get the PSA. Guest: Yeah. Russ: And my last physical was a few months ago, my most recent physical. And my PSA came back low--fortunately. But I asked my doctor--who is smart, I respect him a lot, he's a very good doctor--I said, 'Why did you do the PSA?' He said, 'I don't know.' You know, it's because it's kind of like, there's a box and that's what gets checked and it's been checked. And I'm going to be more aggressive next time not to check it. To make sure it doesn't get tested. I don't want to know that number. Because it's not a meaningful number. So, I'm not putting my head in the sand. I'm not being unscientific. There's a lot of evidence that it's a good thing not to know that number. It seems. Listeners make their own choices, consult with their own physicians. We don't give medical advice here. But I personally will not be getting a PSA test any time soon as a routine matter. Guest: Yeah. And I think--you know, with a couple of caveats: that you and I should pay attention to new scientific developments. Russ: Of course. Guest: And, understand that the situation is very different when a PSA test is used as part of a diagnostic routine because your doctor, for example, has felt a nodule on a rectal exam. There's a lot of subtleties to any test, in a way. Russ: Totally [?] Guest: Yeah. The good doc--you are not expected to go to medical school as a patient, but you should have a doctor that can explain these subtleties to you. You are an economist. And one of the things that I find interesting in understanding the psychological dimensions of our fear is to think about the behavioral economics literature and how it might apply to things I've looked at historically. And one of the really interesting things is that--this is not from Risky Medicine but from my breast cancer book--I started the book with the case of a woman who had developed breast cancer, or what they called, and probably was, breast cancer, in 1812. Here in Philadelphia. And it turned out her brother [?] was the leading surgeon in America at the time, whose name was Fysic[?]. And Fysic[?] didn't--there was a lot of cynicism[?] about cancer surgery in 1812. It was a brutal operation done without anesthesia. But not so much the danger as people also didn't believe that you could actually cure cancer by surgery alone. So it was rarely done. But this was his sister-in-law. It was really small. And they went into days of consultation about it. And she left a whole slew of letters that allowed me to get an inkling into her decision-making. But the thing that kept[?]
her--which she explained to her father, who was listening in England months later, why she did this incredibly painful, half-hour amputation of her breast without anesthesia on her kitchen table in Burlington, New Jersey--she said, 'In the end, she would rather go to her death'--she talked in Philadelphia Quakerese and so I'm not going to get this right now without the book in front of me--but she said she'd rather go to her death with no stone unturned. That she had done everything, so she would have no regret before she died. And you know, the Daniel Kahnemans of the world, you know, refer, powerful heuristic [?] decision-making, they talk about it as anticipated regret. This was and is active--I am reminded that the family members, also, are often the most important interest group in these decisions to go for surgery. You know, because I think, you know, at least in the Jewish world I grew up in, whatever, the guilt is kind of a familial issue, not just an individual issue. And anticipating this kind of guilt or regret from not doing everything possible. Now. Russ: And-- Guest: [?] something bad. Russ: Right. Guest: And it's a very powerful force. And it was there long before people knew what behavioral economics was. Russ: Yeah; I've been thinking about it a lot lately, about the science of regret. We really would like to prevent it. We'd like to prevent regret. And yet there is no way to prevent it. Because there's Type I and Type II errors: there's false positives and false negatives. And there's times we act and think bad things happen. We act and good things happen. We don't act and good things happen. We don't act and bad things happen. And it's very hard. We don't feel the same about active and passive actions. It's very hard to accept that. Guest: And errors of omission and commission. Russ: Yeah. It's very difficult. Guest: It's very tricky and it's very hard to be normative here. One of the startling things, not from my own historical work but from colleagues of mine who study the overdiagnosis and overtreatment problem using health service research econometric techniques and surveys, whatever, is that one of the surprising findings is that people who have experienced a false positive diagnosis of cancer and lived for weeks or months with just feeling that they had cancer, but were later found out by some confirmatory test or maybe even at the time of operation that there was not cancer in the body--these people do not generally end up being, like, the iatrogenically wounded who are the advocates for doing less. Many people who find themselves in that situation are actually more pro-screening and more pro-intervention than people who weren't, in my mind, harmed this way. And you have to imagine--I mean, this is empirical survey data--you have to imagine that there is some psychological condition where people feel like they've had a life-enhancing spirit, they've actually dodged a bullet. They got some exposure to death and they didn't--they had some mastery over it even if it was just in some sense a false thing that was then removed. And it has a kind of positive meaning to people. And these aren't like trivial things. These are things that are at the core of a lot of our conundrums of how to actually decide where to look for risk or not and what to do about it. Russ: Yeah; I don't mean to trivialize it. But it's a little like a roller coaster ride. Right? You go on the ride. You have great fun because at the end you survived it; and it's surreal--that horror and thrill of fear.
But it's over. So, were you glad you went on the roller coaster? 'Oh, yeah; it was great.' So, there is a similar emotional roller coaster there for that false positive, or false negative in this case, I guess. It's a false positive. Guest: That's great; by the way, I've never gone on a roller coaster-- Russ: Yeah, you've been afraid to, right? I don't like them, either. Guest: Maybe that explains my sensitivity to this issue in some way. Russ: Yeah. Guest: I grew up within a mile, about a mile away from the roller coasters of Coney Island. Russ: Do you like horror movies? Guest: No. Russ: I don't either. See. We've got that in common, too. Guest: So maybe it's a kind of raw carrot[?] test we should give people to help guide them to make risk decisions.
30:17 Russ: So, I want to back up a little bit, and I want to talk about this whole general concept of riskiness. And it comes through in the book in a number of places; obviously it runs through the whole book in many ways. But I want to get at it through the chapter where you talk about the Framingham Heart Study. And I argue--and I take some flak for it from my listeners and readers--that epidemiology is--actually, I've described it as an intellectual cesspool. Which probably is not the most flattering way to describe it. But there is a terrible problem, in epidemiology as well as in economics, in that we're talking about complex systems where we can't control for everything and we're trying to isolate the impact of one variable or two variables. And I'd like your thoughts on that. Frame it in--pardon the pun--in the Framingham Heart Study, as that is where sort of this phenomenon of risk analysis for the general population was born. Guest: So, that chapter is not meant to damn the Framingham investigators; in fact, I had the great pleasure of--almost all the leading investigators--this study started in 1949--are long dead, but I did some research on this study while some of the principal investigators were alive, who were all--they weren't card-carrying epidemiologists. They were clinicians in practice who, for one reason or another, ended up in the public health service--Marine [?] hospital system--and had this practical problem of a newly discovered heart disease epidemic that caused Eisenhower to have a heart attack while in office and seemingly was the white man's burden: stressed-out middle-class executives were falling left and right around them. And there was very little knowledge about the causes of it. And they ended up, not in a very pre-planned way, tacking[?] this way and that, and ending up with a very interesting longitudinal study of people who initially didn't have heart disease and following them for a very long time. In fact the study continues in their children and grandchildren today. To see who developed heart disease. But these were clinicians, and what they were looking for--their audience, essentially, was a physician in private practice, and what kinds of factors they could find in the course of an oral physical exam and laboratory analysis that could help them predict who might drop dead of a heart attack or not. And on those terms, I think the study actually was remarkably successful. And obviously very consequential in the way we think about heart disease. But they understood, the investigators, that it's clear that if you think about this for a while, that the causality was all based on these individual factors--how many cigarettes an individual smokes and what their blood pressure was. But there could be no way in this kind of study of individuals, without comparing to another community, to understand the things that happened above the individual, or super-individually: in some sense, one of the contributions to the mid-century heart attack epidemic was the sale of tobacco, and how did the cigarette get into everybody's body? This is a complex story of marketing, of the profit made in tobacco, the role of the southern states' Democrats and political economy--a very complex story of factors. But the Framingham Study itself only studied these individual factors, and also put the resolution as the knowledge they had at the time.
To me, the Framingham story is, it's the kind of story where the risk factor gets used for the first time in a non-actuarial setting; the first time the term 'risk factor' is used is in the[?] 1952 Framingham Study. You've birthed[?] a certain kind of mindset that has probably contributed certain health benefits, for sure, but it's very narrow, and very individual; and in some sense, the roads not traveled have been ignored. There were [?] before the Framingham study, but we've been so-- Russ: We're farther away from the fork in the road. Guest: Yeah, further away from it. And the other aspect is that these risk factors became--prior to the Framingham Study, doctors gave like homely advice to their patients or something, but they didn't see themselves like diagnosing specific preventable factors and giving people medicines for them. That was kind of hocus pocus--prevention was the job of public health authorities or something, not theirs. But we saw this rapid change, where so much of your visit with your primary care doctor was really about risk interventions, you know. And this kind of epidemiological knowledge base became the platform on which everything is built. And of course today--I wouldn't call it an intellectual cesspool or anything, but many epidemiologists themselves, especially around small relative risks, where there is a burgeoning industry of people who find that some tiny factor in lifestyle or diet or behavior increases your chances by some statistically significant value but whose impact, you have no idea what it means. And then the next day somebody else produces another study, another observational study, that goes the other way. There's a kind of crisis within chronic disease epidemiology--so much so that some people say that unless you find something like a risk factor of 7, a relative risk of 7 or something, let's not even bother publishing it, rather than just waiting for somebody else to reverse it. Russ: Yeah. But you get it on the front page of the New York Times. Very hard often to resist that temptation. Guest: If you have the positive finding. Not often the negative finding. Russ: Correct. Guest: So, it's an interesting historical moment that I did want to include in the book.
36:26 Russ: By the way--you speak about it very neutrally in the book. I was waiting for a little more venom. But it was more of a descriptive historical episode. I think it's its effect on the zeitgeist that comes through: this general idea that we should be reducing our risk through exercise, diet, lifestyle, etc. That has become, you know, a pervasive aspect of our lives. So let me ask you a few questions about that. Should I get-- Guest: Before--I don't want to forget this. Russ: Yeah. Guest: I teach undergraduates here at Penn, and I often ask kids whether they are healthy, or what healthy is. And the response I often get is: 'I'm healthy. I don't eat carbs, and I avoid gluten, and, you know, go to the gym,' and whatever. This is the point I'm trying to make in the book; maybe it's too subtle for my own good. But these are not, as I think of them, means to health. This is in fact what people think health is. It's the end. Russ: We do. Guest: A Martian coming to the United States in 2015 might find this odd. Russ: Yeah, no, it's a great point. So, what should my attitude be, if I want to prosper and live long? Is it good to go to the gym? Is it foolish to be worried about these things? Guest: Well, you know-- Russ: Should I get a physical every year? Guest: The major thing is: don't smoke-- Russ: That's a big one-- Guest: And avoid extreme obesity. And don't get hit by a truck. Those are the major things you need to do. Like I said, a few screening tests, and vaccination for children--the required vaccinations are good things to have. In terms of secondary prevention there are some good things, like beta blockers after a heart attack, for which there is good evidence. But the majority of lifestyle claims that people make are not terribly well substantiated; and if you look at the historical arc, now people are complaining that the obesity epidemic is a response to the earlier dietary consensus that fat should be avoided. You know--it's easy to be cynical here. Russ: It's hard to know. Guest: It's hard to know. Russ: So, as we go forward, we're at this apparent cusp of an incredible explosion of knowledge about our own bodies, through--I don't know--the confluence of the smartphone, the digital revolution, big data, genetic mapping and its costs coming down. We are standing, it seems, on the edge of a huge increase in knowledge about how our bodies work. Do you think we are going to make progress in those areas, where we could actually make some reliable claims about lifestyle, diet, etc.? Because my view is we know remarkably little right now. Guest: Yeah. I share the view that we know remarkably little right now. The book has a kind of cautionary-tale aspect about the new personalized medicine and knowledge of genetic risk, which has already been flowing but will become much more prevalent as something like whole-genome screening happens. The nightmare vision I have is when you get back 23andMe results that say you have 3-times the risk of diabetes and 2-times the risk of heart disease and a 50% lower-than-average chance of dying of testicular cancer, or something--that this will create a market, especially when the risks are higher than normal, for promised interventions based on genetic manipulation or some temporary pathophysiological understanding that's going to be uprooted a week from now. It's very hard to do nothing once one believes one is at risk of something.
And one of the really tough implications, I think--and maybe we can talk about this a little--of the dangers of such knowledge, without good evidence that the interventions are effective, is whether we should have some threshold for not communicating this knowledge to people until we know we can do something with it. And that does sound very Luddite-like. I don't claim to have a crystal ball to know which insights about risk will be fruitful and which not. So I would not really want to be the one turning the spigots of who gets research funding and who doesn't. But at the clinical end--the end of what information gets communicated to patients--I think we are potentially facing such an avalanche of potentially actionable information, for which there isn't evidence about those actions, that we might need some kind of Rawlsian bargain with ourselves to keep our heads collectively in the sand until we have good knowledge about particular interventions' efficacy. And I don't know how that would work practically, so don't press me on it at that level. But I do worry about the profit to be made, and the fear of uncertainty that will be unleashed, by having this information communicated to people without any good sense of what to do about it. Let alone the epistemological problems with the data about risk itself, which often change with new resolution technologies and unselected populations and things like that. Even if the knowledge about the probabilities were solid, without an intervention to do something about them I'm not sure we are really doing people--ourselves--a favor by communicating it. Russ: Well, I think your book--maybe this conversation--is going to help people think about how to think about this, and about what they want to know and what they don't want to know.
42:26 Russ: It's a particularly appropriate week to be having the conversation: there's been a recent change--I think it was this week, or last week--in the recommended frequency of mammograms. Guest: Yeah. Russ: And--I'm not a woman, but I love my wife, so I'm very aware of the risks, and of the question of whether regular mammograms are a good idea. But having said that, I had never looked at the numbers. And because this week happened, with these changes in recommendations, and because I was reading your book getting ready for the interview, I happened to look at some of the data people were putting out about mammography. In fact, I tweeted a Mother Jones article--it doesn't happen that often that I get to tweet an article from Mother Jones, but there was an article about the numbers. And then I found another piece on it--I think it was a JAMA (Journal of the American Medical Association) article--and I was stunned. The JAMA article said that out of 10,000 women screened, there were over 6,000 false positives, and 10 deaths averted. It's a stunningly imprecise, tragically imprecise thing. And I was just shocked. Again, my point here is that you have to look at the numbers. And one of the themes of your book is the culture, what's in the air, what's expected--whether it's the ultrasound during the pregnancy or the PSA test--everyone just says, 'Well, of course you're going to get a mammogram.' Guest: Well, I don't have the numbers at my fingertips. And the number needed to screen--the statistic behind what you quoted--gets better for women as they go through menopause and get older. But you're right: it takes many thousands of woman-years of screening--somewhat fewer thousands at older ages--to avert one death from breast cancer, at the cost of many false positives, which often lead to overtreatment, which itself carries death risks. Russ: Yeah. It's not-- Guest: It's a very-- Russ: I'm not saying it's not worth saving the lives. That's not the point. It's all the other costs that come with it, that are just pushed to the side. Guest: Yeah. And many, many people have made these observations. One of the reasons I wrote the book was to say that I don't think another study or another piece of data is going to change the game very much. I think we have to ask what work screening mammography does for the different interested actors. And by exploring that work, and the historical and structural conditions that make that work possible, maybe there is another way out of the situation. And then, you know, the cynical part of me says: really, the issue is not screening mammography; it's the next screening and genetic tests--things that have not yet been put in the water. Because once something reaches a kind of equilibrium in people's lives--it's part of being an American woman today to go for your annual mammogram; it's a part of life, in a way--once a norm like that gets established and does the social and psychological work of reducing fear and controlling uncertainty, it's very, very hard to dislodge. And so my real hope, in some ways, is to prevent prevention--to prevent more things being done outside of experimental trials. But, you know, it's very tricky. And maybe--we're two men; maybe we should talk about prostate cancer, because it's a very analogous situation in a way. Russ: Yeah.
Guest: In 2009--some 20 years after PSA testing diffused widely in American society, and the rest of the world as well--came the first results of randomized clinical trials of screening. Russ: Yeah; I was shocked to read that. I was just shocked. Guest: So, the real fact is not what the studies showed. It's that this thing became a mass phenomenon, with its own inertia and social and psychological legacy, long before there was any scientific evidence. The evidence is almost a bit player in the story. But let's set that aside for a second and look at the data itself. One study at the time showed actually no benefit--no mortality benefit--from screening. If you believe those data--the follow-up was long enough, though later follow-up complicated this result--you should just not do it. The other study was a multi-centered European study, where if you did subgroup analyses including some countries and not others, there did seem to be a mortality benefit in prostate cancer deaths. But at a very high cost. The rule-of-thumb, number-needed-to-treat statistic quoted by the editorialists and by the authors of the study themselves was roughly this: forget about screening for a moment--thousands of men would need to be screened to get to the 50 men who were going to be treated; and 50 men needed to be treated for prostate cancer picked up by screening in order to avert a single death. Those numbers have since changed; they've moved. But you know, it's very hard psychologically for an individual sitting in the decision-making seat to wrap their head around what that might mean. I'm in medicine. I've seen lots of people end up with incontinence and impotence after surgery, or develop a blood clot in their leg, or get hospitalized and die of something incidental. To my own common sense, it doesn't add up: 50 men would have to have the surgery in order to save one life. That just clearly doesn't make much sense--I guess plausibly it might make sense for someone else, though I don't really think so, if they really understood what the risks involved were. And recently another thing: when we get to these really difficult dilemmas, the mantra of many of my well-meaning clinician and ethicist colleagues is to say, 'Well, let the patient and the doctor decide,' in some kind of idealized model of shared decision-making. It's not as if I have a great solution myself, but I'm fairly cynical about that being the last resort--when we say we don't have good enough evidence to actually resolve a policy or clinical problem, to say, 'Let's just send the information out and let the doctor and patient decide.' I just think cognitively it's too complex. And maybe, just as I was saying about shutting off, or at least modulating, the knowledge-production spigot: maybe there are situations where the data are so confusing, and we really don't have a clear idea, that we should use some kind of principle of first-do-no-harm and not bring it up in the first place at all--not make it an element of 'take the test and then have shared decision-making around it.'
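[For readers who want the number-needed-to-treat arithmetic spelled out, here is a minimal sketch in Python. The event rates below are assumptions chosen to land near the 50-treated-per-death-averted figure quoted above; they are not data from the European trial itself--Econlib Ed.]

```python
# Minimal sketch of number-needed-to-treat (NNT) and number-needed-to-screen (NNS).
# The event rates below are illustrative assumptions, not trial data.

def number_needed(control_rate: float, intervention_rate: float) -> float:
    """NNT (or NNS) is the reciprocal of the absolute risk reduction."""
    return 1.0 / (control_rate - intervention_rate)

# Assume treatment cuts prostate-cancer mortality among screen-detected,
# treated men from 4% to 2% over the follow-up period (hypothetical rates
# chosen so the result matches the roughly 50-to-1 figure quoted above).
nnt = number_needed(0.04, 0.02)
print(f"Men treated per death averted:  {nnt:.0f}")   # 50

# Screening is a step further removed. If, say, only 3.5% of screened men
# ever become treated cases (again an assumption), the screening
# denominator balloons into the thousands:
treated_fraction = 0.035
print(f"Men screened per death averted: {nnt / treated_fraction:.0f}")  # ~1429
```

The design point is that NNT depends entirely on the absolute risk reduction, not the relative one--so a modest-looking gap between 4% and 2% already implies 50 surgeries, with all their complications, per death averted.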
And part of this, I have to tell you, is informed by my fairly negative experiences as a practicing primary care physician in the 1990s, when all the major--maybe I'm exaggerating, but many major advisory panels from different physician groups suggested either that women should get hormone-replacement therapy when they reached menopause, or at least that you should initiate a discussion of the risks and benefits of the therapy. And at the time, my analysis was that the many observational trials that seemed to show some efficacy were all flawed in the same direction, by a kind of healthy-woman effect. There had not been a randomized controlled trial. There was so much profit to be made, and crass manipulation of the market by the hormone-replacement manufacturers. And the very term 'hormone-replacement therapy' was basically one of those terms that creates its own demand: you know, if you have a deficit-- Russ: You fill it-- Guest: --fulfill it. Russ: You are missing something. Guest: You know--you could have called it by the names of the drugs or something. So, anyway, I had this cynicism, this reluctance, and I did not initiate discussions with my patients. If they brought it up, I talked to them about it. But I didn't think there was a good reason to do it, and that was part of what I considered my medical responsibility. There are thousands of other things, not being pushed by special interests, that I could have brought up--things thrown out into the ether that are not being discussed. Why this one? And it's one of those few cases in my clinical life, and my life as an observer of medical developments, where an unbelievable right answer came out, in the form of a clinical trial called the Women's Health Initiative. It showed that these hormone-replacement therapies--which, by the way, were given to prevent osteoporosis and heart disease; they weren't given for the menopausal symptoms--when given as preventives, did more harm than good. It was incontrovertible. It reminds me of that Woody Allen movie where two people are arguing about the meaning of Marshall McLuhan [Annie Hall--Econlib Ed.] in a line to get into a movie theater, and Marshall McLuhan comes up and says, 'You're right, and you're wrong.' It's just rare that there's stuff like that. Russ: I think Woody Allen turns to the camera at that point and says, 'Why can't life be more like this?' So, yeah. Guest: Yeah. So that Women's Health Initiative result was a little moment like that. Having lived through that, and having some historical research under my belt too: the way we deal with uncertainty in the present can often be looked back on--by our children, or by this Martian I evoke here and there--as laughable, or with some degree of alarm. And a lot of people do that.
52:17 Russ: When you mention the idea that people talk it over with their doctor and come to a decision--to be honest, when there's no evidence, what we are really doing there is almost like saying, 'Flip a coin.' It would be very hard for most people to say, 'Well, should I get the surgery or not? I'll flip a coin.' Even if it's not a fair coin, it's an unattractive way to make a decision. The only virtue I can see in that mutual decision, or whatever you want to call it, is that there's some aspect of talk therapy in it. At least, having discussed it, a person would feel better than if they had just flipped a coin. I'm interested in what kind of response you've gotten from your fellow physicians to these kinds of arguments you make. Do they see you as dangerous? And I just want to say, by the way--I want to get this in; it's really important for those people listening--it's a very interesting book, Risky Medicine. I encourage you to read it. And more than anything else, when you hear this conversation, what I'd like listeners to take away from it is: Educate yourself. Look at the numbers yourself. And if you can't look at them yourself--if you are not skilled enough or you don't know enough--get somebody who is thoughtful to look at them. A friend of mine asked me--she's facing a hysterectomy and she wants to know if she should get her ovaries removed at the same time; and she sent me a study she found, because she wanted to educate herself. It said that a recent study had found that removing the ovaries was dangerous because it led to an increased risk of heart disease. And I looked at the study. And it didn't look like a very good study to me, because it had this problem you are talking about: I don't know enough about the nature of the women who made those decisions and whether they are like my friend--it's not a clinical trial. And so I said, 'It's not a good study.' And she said, 'But it had a big population.' And I said, 'That's not enough.' Guest: Yeah. Russ: So: educate yourself. Think about it. Talk to people who have thought about data, and try to make a decision with evidence rather than with what everyone tells you, because it's complicated. Guest: So, your question was: how do my physician colleagues-- Russ: Sorry, back to that. Enough of my soapbox. Guest: It's okay. It's okay. I haven't lost any physician friends. Maybe it's that here at Penn, among high-powered researchers, there's a lot of fealty paid to evidence. And most people realize that I'm essentially providing a backstory--the social, economic, and structural context--for things that are puzzling and troubling in the epidemiological trends, even when those trends are often looked at just in terms of the data itself. Moreover, I have given grand rounds to audiences of people who largely comply with screening recommendations that I'm not in favor of, or whatever. I get the occasional person who tells an anecdotal story about how someone, you know, went for a screening test, was found to have an early cancer, it got taken out, and they are alive--but if they hadn't had the screening test they would be dead: 'So there, what do you say now?' But that's very rare. Especially younger physicians--they are trained in a more sophisticated way to think about scientific evidence.
I went through medical school at Yale; I wasn't trained in that way, really. But there's their lived experience--you know, on the street, doctors talk about 'incidentalomas.' They have some intuitive knowledge of how one thing--one damn thing--leads to another, and some reluctance to go a certain way because of that. And unlike some of my, you know, critically oriented sociology friends, who go in for a kind of direct frontal attack on the false consciousness of medicine and people in authority, I try to tap into the lived experience of patients and friends, but also doctors, and try to give them a kind of scaffold to hang this uncertainty and discomfort on. Your doctor who said to you, 'I don't know why I did a PSA test. I'm just kind of doing it'--that's not an easy thing to live with, you know, when you are on the medical end of it; your whole premise is that you are actually doing good for people, not complicating their lives. So the message I give to people outside of medicine who want to reach medical audiences without turning them off is to realize that these contradictions are lived out every day in the offices of people in practice, and that practitioners are subject to all kinds of undue influences that they are trying to navigate, like shoals their ship is about to crash into. So, on my good days I feel I provide some social and historical context for this thing. And I hope the book doesn't appear shrill. There are some things we just don't know. And you know, I'm not by any means against--in fact, I'm very much for--even the biotechnological emphasis in medicine today, as long as we're subjecting what we find to clinical trials and good evidence. I have a good friend being kept alive right now by a targeted therapy that was just developed, in like a Phase I trial; and I'm thankful. It's not as if these therapies have a population-level effect, but for the individuals affected, really wonderful things have come out of the United States' and the Western world's commitment to biomedical research. It's just that you don't want to throw the baby out with the bathwater. There are a lot of problematic things. The pharmaceutical companies--one of the reasons I got interested in this, one of the prongs of the story about the profit motive to treat risk rather than disease, was that I had a senior executive from a very, very big--maybe one of the biggest--pharmaceutical companies, who had just left his job the week before, come talk with some students. And he told this very cynical story. He said the Street--he meant Wall Street--requires a 10% increase in sales every year. He said, 'We have to have a new effective drug in 10 years. Where is this going to come from?' And he said that ultimately even this big pharmaceutical company didn't have the kind of start-up culture to develop things; they had to basically buy the patents on other people's drugs. But what they were really good at was detailing physicians and getting people to use medicines. And so the big focus of a lot of their R&D (Research and Development)--and the R&D was impossible to separate from marketing, frankly--was to develop drugs against risk, and against common complaints that everyone has, like Viagra. Drugs that would be for everybody. And for life.
59:50 Russ: Yeah. So, given that--that's why I'd like to see a world of medicine with less third-party payment and more out-of-pocket. We seem to be moving in the other direction. But that's another topic. Guest: Yeah. Russ: Let's close with your thoughts on medical education. As you said, you were trained at an exceptionally good medical school, and yet you weren't exposed to a lot of these kinds of ideas. I find it remarkable how few doctors understand statistical issues; and taking a statistics class or a biostatistics class is not sufficient. What you learn in those classes is typically the definitions of the different techniques and how the tests are run. But we don't give many people, let alone doctors, much training in what we would call risk analysis--the kind of thoughtful tradeoffs we're talking about. And when you mentioned survivorship--your friend being alive--the whole issue that runs through your book is that there's a quality-of-life issue here that typically gets totally lost. So: how do we train--should we train--physicians more effectively in these issues? Do you think it's a good idea? Guest: Well, I'll give you two answers that maybe circle around the question. One is: so many of the things we are talking about are mass interventions that everybody has to make a decision about, in a way. Doctors are only one piece of input--they are not even the gatekeeper for many of these things. So it's an issue of educating the public as well as health care workers, physicians or otherwise. The second thing is to tell you how I vote with my feet: one of the things I do in my day job--I'm in history and sociology of science--is a very large major, one of the largest majors at Penn, called Health and Societies. Maybe a third of this very large undergraduate group are going to end up being physicians, but many of the others go into the health care industry, or into public health--well, who knows what they'll end up doing. And in that intellectually mind-opening part of people's lives, as undergraduate life should be, if you can have a serious engagement not just with the sciences but with the humanistic and other social sciences--comparative health systems, the history of therapeutics, the history of public health; numbers, quantitative things, but also their historical development--then, to use an odd image, we're kind of vaccinating these future doctors and health care workers and people in the health system, so that they become consumers of what's later going to happen to them during medical socialization with a jaundiced eye, or at least a skeptical eye, and with at least some vocabulary to make sense of the experience that happens to them. That's certainly where I put my own energies. Part of it has to do with how incredibly demanding medical school is; but the real formation of people's medical personas happens when they do residency training in the United States--the years of internship and residency, where people are pushed to the very extreme of their physical capacities and overwhelmed by things. Maybe things are better now than in the bad old days when I trained. But it's not a time of great reflection, overall.
So I guess I put my own effort into those things, rather than being one more person saying the medical curriculum should include my little thing. Debates about the medical curriculum are, sort of, pretty boring. And it's like rearranging the deck chairs on the Titanic--too much work is being pushed onto it, in some ways. Having said that, some of my colleagues teach little clinical-epidemiology sections, and I'm sure there are other things that get taught. The most powerful thing about medical school training is the fact that you are dealing with real people and real conundrums. So, helping people process those experiences--people have been harmed by medical over-diagnosis and medical treatment, in a way, and getting some framework to make sense of that would probably be of help. But I don't have a great program of reform of my own at the moment.
1:04:44 Russ: [Postscript] Now for a brief postscript on the EconTalk conversation with Robert Aronowitz about his provocative book, Risky Medicine.

In the middle of this week's conversation I made a reference to the evidence on the efficacy of mammograms. I got a little fired up, and I think I pleaded with people to check out the data and the evidence when considering any sort of diagnostic test. Some of my reaction was to the philosophical issues that Robert Aronowitz raised in his book--our human desire to reduce risk. But part of my reaction was also to some reading I had done in advance of the interview--the Mother Jones article I mentioned and a JAMA article that estimated ten deaths averted for every 10,000 women getting an annual mammogram from age 50 to 59. Ten deaths averted struck me as a small number compared to the other human costs of regular screening--6,100 false positives and 900 biopsies that show nothing but lead to anxiety. In addition, there are non-trivial numbers of over-diagnoses that lead to unnecessary mastectomies. Then there's the risk from the radiation of the mammography itself. The evidence in that chart seems important to consider in a culture where, until the recent change in recommendations, an annual mammogram was treated as a no-brainer. So looking at the evidence seems like a very good idea.
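To make the tradeoff concrete, here is a minimal sketch in Python that simply re-expresses those counts per woman screened. It takes the published point estimates at face value--which, as the next paragraphs explain, is itself a leap--and the variable names are mine, not the article's.

```python
# Re-expressing the quoted decade-of-screening estimates per woman screened.
# Inputs are the point estimates cited above (10,000 women screened annually
# from age 50 to 59); they are estimates, not a census.

women = 10_000
deaths_averted = 10
false_positives = 6_100   # read here as women with at least one false positive
fruitless_biopsies = 900

print(f"Chance screening averts your death:    {deaths_averted / women:.2%}")      # 0.10%
print(f"Chance of a false positive:            {false_positives / women:.0%}")     # 61%
print(f"Chance of a biopsy that finds nothing: {fruitless_biopsies / women:.0%}")  # 9%
print(f"False positives per death averted:     {false_positives / deaths_averted:,.0f}")  # 610
print(f"Number needed to screen for a decade:  {women / deaths_averted:,.0f}")     # 1,000
```

Framed this way, the asymmetry that struck me jumps out: a one-in-a-thousand chance of an averted death against a better-than-even chance of a false alarm.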

I shared these thoughts with a friend of mine who is an OB/GYN [Obstetrician and Gynecologist], and she was not nearly as impressed as I was with the JAMA summary. She pointed out that the measure of deaths averted probably included older studies, when mammogram technology was less effective. She also wondered how they measured over-diagnosis and survival rates with and without mammograms. And she wondered whether the study distinguished between the average woman and women who have breast cancer in their family.

That reminded me to go back and look at the chart I'd been reading, and to see what the sources for the numbers were. I had stupidly treated it as something of a census--an exercise in counting--rather than a set of estimates. I found the supporting article.

Discussing mortality, the authors say that they referenced eight large randomized controlled trials conducted between 1960 and 1990. They then concede:
some argue that the RCTs are unlikely to be applicable to women undergoing screening today, because they preceded treatment advances that have powerfully influenced breast cancer mortality and used older mammography techniques. However, the RCTs nevertheless provide the best data available.

Hmm. My friend was right. That gave me pause. You'll find links to the chart I mentioned and the source article in the links to this episode.

The authors' estimates of over-diagnosis and other costs also had some issues. Some of their estimates are perhaps reliable, but it is hard to know without looking at the sources they used to generate them.

I mention all this for two reasons. First, the evidence is almost never straightforward; it's almost always complicated. Second, it's hard to stay bias-free. I like to think of myself as a skeptic, but I can struggle to be skeptical about my skepticism. I think I was a little too eager to embrace Aronowitz's skepticism about regular testing. A lot of that was going on in my head--and I worried that some of that skepticism may have come out in the conversation, and that I may have been too strong. I worried that I may have encouraged some of you out there to think that the evidence was more black and white than it really is. So I want to make it clear here: looking at the evidence IS always a good idea. But the evidence is almost always murkier than advocates on either side of an issue will concede when we're looking at complex issues such as health--or economics, for that matter. The numbers rarely speak for themselves. There are always questions of interpretation, leaps of faith in trying to measure some variables, along with the issue of confounding effects from additional variables that often go unmeasured. Inevitably, assessing risk is complicated. Thanks for listening.