Intro. [Recording date: May 13, 2019.]
Russ Roberts: My guest is cardiologist and author Eric Topol,... This is his third appearance on EconTalk, having appeared in April 2013 and May 2015 to discuss two of his books. His latest book, the topic of today's episode, is Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.... What do you mean by 'deep medicine?'
Eric Topol: So, 'deep medicine' is really three separate layers. The first of which is deep phenotyping, which refers to getting all the relevant information, the medical essence, about a person, about that individual. Then there's deep learning, to take all that data, to process it, to learn from it, so that we can do a far better job--more accurate, more ideal--of diagnosing a person and managing their conditions and their outcomes. And that would get us to a state of deep empathy, which is enhancing the human bond--the patient/doctor relationship--by the use of technology. This counter-intuitive sense that technology can enhance humanity.
Russ Roberts: And you argue, and I certainly agree, that right now there are some problems being caused by technology--we're going to come to that later. But I want to contrast your definition of deep medicine with what you call shallow medicine. You argue that's what we're practicing now. What do you mean by 'shallow medicine'? What's wrong with it--in under an hour?
Eric Topol: Well, yeah.
Russ Roberts: What's wrong with our current medical practice?
Eric Topol: Heh, heh, heh. Right. Well, it's sad. And we haven't really 'fessed up to it, Russ, that we have so many errors. Over 12 million serious diagnostic errors a year. We've got the problems of insufficient time, insufficient context, insufficient presence. So, the average appointment is 7 minutes for a return visit; 12 minutes for a new patient. Totally inadequate. And we know that mistakes are being made, not to mention the lack of real human connection. So, altogether, we have this burnout of doctors. Because, during those limited minutes they are pecking away at a keyboard, typically. They are data clerks. So, they have burnout, and depression, at peak levels. We know that burnout doubles the rate of medical errors. So we've got a recipe here, when you put it all together, of a horrendous lack of care. Lack of humanity, empathy, compassion. All the qualities, like presence, trust--all the things we used to have 40 years ago, before medicine became a big business and clinicians were squeezed to the hilt.
Russ Roberts: Now, your book is about artificial intelligence. And I can't help but notice in that summary of what's wrong with today's situation that most of the things you mention--empathy, and others--are not quantifiable. But the revolution that is promised for artificial intelligence is a revolution based on data--quantification of aspects of human health, of various interventions into that health, and of the results of those interventions. So, we're going to come back at the end to the potential for empathy to be important again in medicine. But, in the meanwhile, there's a lot of excitement, maybe too much, about what machine learning and big data can bring to medicine. And much of your book--your book is a phenomenal survey of where we stand in that potential revolution. But I have to say that one of the things I got from your book is that much of this has been over-hyped. Would you say that's accurate, so far?
Eric Topol: That's absolutely true, Russ. There are very few prospective studies--that is, in a clinical environment. What we have mostly, what we are relying upon--long on promise, short on proof--are these retrospective data sets. Sometimes very large; sometimes even millions of people in them. But that's a very different matter, this buttoned-up demonstration of impact, as compared to going forward in a real-world environment in medicine, where things aren't so pristine--more challenging. And so, we only have a very limited handful of studies in this prospective category. So that's why there's lots of excitement, but we are missing lots of validation. And, hopefully, over time we'll start to see that gap close. Because, otherwise, we've got just an inordinate amount of hype associated with AI (artificial intelligence) in medicine so far.
Russ Roberts: Yeah; I was giving a talk yesterday and somebody said, 'Boy, it's amazing what Watson has done for cancer diagnosis.' And I thought, 'Hmmm.' I just talked to someone--it was David Epstein, whose book, Range, I've read--and he said that the Watson diagnostic attempts were a debacle, and that they had to tone down a lot of the original enthusiasm. But I think a lot of people didn't get that second message. They got the first one.
Eric Topol: Yeah, well, you know, there's probably no company among the tech titans that, as I wrote in the book, has been out there promoting--hyping--things they have not accomplished like IBM [International Business Machines] has. And the Watson oncology product, the cancer product, has really never delivered what it promised. The only thing it has done--which doesn't even need AI--is match up patients with clinical trials of experimental drugs. But that is far from what we had hoped for. Eventually we'll see, not just, of course, in cancer but across all aspects of medicine and health care.
Russ Roberts: Let's start with cancer diagnosis. What was the hope, at least at the time, with Watson--Watson being a large super-computer with, ideally, some machine learning and other types of techniques to improve diagnostics? What was the hope? And why did it fall short, as far as you can tell?
Eric Topol: Well, the hope would be to have the data of a person all pulled together. And crystalized. For the clinician. And, of course, ultimately, for the patient. So, that would mean taking their electronic health record data, their pathology data from a biopsy, their scan data, their genomic data--sequencing their tumor--putting all of that together, and helping with the analysis to come up with the best therapy. So, that fell apart. Because, initially, IBM did a big project with M.D. Anderson, one of the country's leading cancer hospitals. And they couldn't even get past the electronic health record side. Because, as it turns out, the records we have today--these "hallowed"--I use that in quotes--electronic health records: they are farcical. Because they are cut-and-pasted notes. 80% are cut and pasted. And they are laden--just chock full of errors that go from one note to the next. And there is lots of free text that is typed in. So, it is not structured; so, it can't be digitized. So, basically they were trying to do something that was not possible. And they didn't ever execute; and eventually the whole relationship with M.D. Anderson, which had paid tens of millions of dollars to IBM, fell apart.
Russ Roberts: The errors that you are talking about, the cut-and-paste part--this, I assume, is a mistake that a doctor makes in working on a keyboard in the presence of a patient; and then when the patient comes back, cutting and pasting the same text from the last visit to save time, thereby repeating the error, making it look like it's more true even when it's not true at all. Is that what we're talking about, when we're talking about errors?
Eric Topol: Well, yes, but it's even worse than that. Somehow or other the patient acquires diagnoses that they don't have, or medications that they've never taken; and these just go from one note to the next. So, what's fascinating is, if you give patients the right to edit their notes--and they should have that right--then you start getting the truth. Which is: 'I never had that diagnosis. I never took that medication.' So, what happens is, whenever something erroneous is entered about a patient, it is just perpetuated.
Russ Roberts: It persists.
Eric Topol: Yeah. Yeah.
Russ Roberts: And you talked about the importance of patients taking control of and owning and having ownership of their own data in your previous book. And we talked about that in an EconTalk episode and I encourage listeners to go back to that. It's an extraordinary example of a bizarre culture. Right? When I go to get my oil changed and they put in a certain kind of oil at a certain level of mileage of my car, and they put a little sticker on my car to tell me what that mileage was at the time or when I'm due for my next time in--of course, I can choose not to come back. I can tear the sticker off if it's grossly wrong. I can write on it; I can decide it's too early and I can come back later. But somehow, just the very idea that I could edit my own record is a bizarre one in our current culture of doctor-patient relationship. It seems somehow presumptuous for me to edit my own data. And remember, data here is just a narrative. It could be data, literally: it could be what my blood pressure was or what my, you know, my heart rate was at a certain time. But this is also usually, 'The patient said, complained of'; and if you misheard me or spelled it wrong and it's prone to a dangerous misinterpretation, I could just, like a Google Doc, in theory I should just go in and fix it. But I can't. That's bizarre.
Eric Topol: Yeah; it's so frustrating, and it's just an outgrowth of the medical paternalism. You know: doctors in control. Control freaks. And so we have seen evidence now--you know, this project, OpenNotes, which several health systems in the country are working on, where they have started letting patients edit their notes. And it's been a huge success. The patients are happier that they don't have all these errors. They get copies of their notes. And the doctors are happy that these mistakes have been cleared up. So, that should be the norm. I hope we'll get there some day. It's also really important in the era of AI. Because if you have wrong inputs, you'll have bad outputs. So we've got to get this problem fixed.
Russ Roberts: Well, you know, I'm talking to Dr. Topol right now, but I call you Eric; but it's a bit presumptuous of me. But I mention that because I once had a friend who said he always called, or she always called--I can't remember what the gender of the friend was--but they always called the doctor by their first name to sort of even the playing field. Because the doctor comes in and says, 'Russ, how are you doing?' 'Well, thank you for asking, Dr. Topol.' And the fact that I have a Ph.D. in economics of course is worth nothing when I'm in a white gown in the examining room. But it's not just the doctors who have the paternalism. I think we, as patients, see the doctor as a shaman, godlike figure who is going to save us. And we don't really want to be on a first-name basis with you; and probably some of us like the idea that we don't have access to our records. Because that would be, somehow, I don't know--rebellious.
Eric Topol: Well, yeah. It's a bit of the suppression mode that has continued. The problem is, when medicine became data-centric, and eminently portable, then at least those people who want to have their data should have rights to it; and it should be corrected data. It should be real. So, you know, I think we are in this transition: it's really going to be important to get this right, because it is about ownership, control; it's about accuracy of data. All these things are going to be important if we are ever going to get all that can be obtained and achieved with machine support. Otherwise it's a real compromise. And I know exactly what you are alluding to, because when I was publishing Deep Medicine, I had to have a fight with the publisher to take 'M.D.' off the cover. And the same thing--
Russ Roberts: After your name. After your name--
Eric Topol: Yeah. They said, 'Oh, it would be good for sales.' I said, 'I don't care if it's good for sales. I don't want "M.D." I'm just another person, and my patients and I are on equal footing. Maybe I have more of a knowledge base and experience.' But the key is: we've got to get this thing democratized. Because if patients take more charge--they have their data; they have algorithms helping them--that's going to make life better for everybody. But that requires less of this control-freak attitude on the doctor's side, and a willingness to let patients take on this responsibility. Which so many--not all--are eager to have on their side. And we're going to empower them if they so choose.
Russ Roberts: 14:57 I've mentioned my 86-year-old mom--and, this is the day after Mother's Day: Happy Mother's Day, Mom! She feels guilty getting a second opinion. I say, 'Why?' She said, 'Well, it will hurt the doctor's feelings.' I said, 'Well, but your life is at stake. Does that count at all?' And I suspect there's a generational aspect to this. I assume younger patients are more willing and eager to take charge.
Eric Topol: That is true. There isn't any question there is an age gradient. But, it is intimidating. Because a lot of patients feel like if they question or get second opinions, they are going to somehow get a lesser form of care, and just not get that bond that they seek. So, there is still a problem irrespective of age. But you are absolutely right: older people are generally used to paternalism; they are not going to question it.
Russ Roberts: In a normal world, a doctor would get a market edge and find it useful marketing to tell his or her patients that the doctor goes and gets a second opinion for you. Right? And I'm sure there's many--there are many, many difficult diagnoses where you ask a colleague for insight. But it would be an interesting model to have a clinic where you routinely ask for a second opinion--that the doctor did that, and the patient didn't have to.
Eric Topol: Right. That's the way it should be. And a lot of the mistakes that are being made are because those second opinions are not obtained. And that's where, again, this machine algorithm support--in lieu of a second doctor, and perhaps even better than a second or multiple opinions--can help bring us to a higher plane of accuracy.
Russ Roberts: Let's talk about radiology and imaging. You quote someone in your book--I can't remember--but someone says, 'We shouldn't be training radiologists any more because they are soon going to be obsolete.' This appears to be one of the areas where machine learning and AI have made some real inroads and are quite successful. Talk about what's going on in radiology.
Eric Topol: Right. Well, the quote you mention is from Geoffrey Hinton, the father of Deep Learning, who recently received the Turing Award--which, as you know, is like the Nobel Prize of computer science. And that quote, of course, is erroneous, as much as I have the highest regard for Geoffrey Hinton and his colleagues who ushered in this Deep Learning. The problem is that radiologists are going to be needed. AI is going to help them; it's not going to replace them. So, let's just look at the data for medical scans, and note that over 30% of scans read today by human radiologists have a false negative. They miss something. That's a pretty big rate of false negatives. And that rate can be brought down, maybe not to zero but to a very low, single-digit number, with Deep Learning of the scans. We've already seen that across the board, largely in these retrospective data sets. But, whether it's chest x-rays or CT [Computerized Tomography] scans, or MRI [Magnetic Resonance Imaging] scans, ultrasound scans--whatever, you name it--the accuracy rate has been markedly enhanced by having deep learning, algorithmic interpretation first, and then the eyes of the radiologist. So, that's where we're headed. That will help us get to this machine-and-human symbiosis, which is what we're after. But the other thing it introduces, Russ, is: What will the radiologist do with all this extra time, since the pre-screening--the primary review--is done by the machine? And, you know, I think that opens up some really exciting opportunities.
Russ Roberts: And talk about what those are, because it reminded me. I don't think you mentioned it in the book, but it's my understanding--I don't know that it's true--but it's my understanding that a chess program--the best chess program in the world can beat the best human chess player. But the best human working with the chess program can beat the best program working by itself. And it struck me that this could be an area that might be analogous: That the radiologist, using some human art, could supplement the narrow analytic ability of the reading of the x-ray or the scan. You think that's a good analogy? And if so, what's going on? What would be the nature of that added synergy or complementarity?
Eric Topol: Yeah. The perfect analogy. And I know Garry Kasparov would agree with that. So, the idea is proven now, I think unequivocally, that machines can see things that humans can't--that humans can never see. I think the best example, again, is a medical scan of the retina. If I show pictures of the retina to an international retina expert and I ask, 'Is this from a male or a female?' the chance of them getting it right is 50%. But if I put that through a deep learning algorithm I can get to 97% accuracy. So, there are certain features that a machine can see that humans can't. Now, on the other hand, if machines are trained to find pulmonary nodules on a chest x-ray, or wrist fractures, or whatever, they are trained for that specific purpose. But the radiologist is looking at it in a larger context, whereas machines don't really have a contextual basis. So, there is going to be a complementarity of strengths--which is just getting at your point. And that's what's exciting. Beyond the fact that, you know, a lot of radiologists don't really like living in the dark basement. And wouldn't it be nice if they could, you know, come out of the darkness and talk to patients? And wouldn't patients want that? Because, you know, they are getting worked up for possible surgery, and they are not going to get straight talk from the surgeon. But the radiologist has no vested interest in doing the procedure or the operation, and they have the expertise of having reviewed the scan. So, they can be honest brokers, helping patients and being patient advocates. And also, as you know, Russ--so many scans in the United States are unnecessary. So radiologists--they don't provide this today--could provide a gate-keeper function and make the system more efficient: that is, making sure the scans are done for good reason. Right now we are unnecessarily exposing our population to inordinate ionizing radiation, because we just have this willy-nilly approach.
Again, going back to the fast appointment, the encounter with the patient: it is easier to just get lab tests and scans done, because you don't have enough time with the patient. And that's been proven. So, if we have more time with patients, we'll have fewer scans. And if we have radiologists who are not having to spend as much time reading these scans, they can help out in making the whole system better.
Russ Roberts: It reminds me of what I think is one of the misunderstandings of the modern era, which is that a lot of people believe that more information is always better than less. This is a case where that is clearly not true. And I don't want to miss the chance for you to talk about incidentalomas. Now, incidentaloma--the '-oma' at the end of the word makes it sound like a cancer. Or a growth. What's an 'incidentaloma'?
Eric Topol: Well, it's like that. It's a growth. Some people would even characterize it as a cancer, a spreading. But what it is, is that when you do a test that is unnecessary, you find something you weren't looking for--it appears incidentally. This is a serious problem throughout medicine today, because we have so much overdoing, over-testing. Unnecessary scans are a perfect example. We find all these things. So you see something--in the liver or the gall bladder--when you were looking at some other thing, the reason you ordered the scan in the first place. And then: 'We have to work that up.' And so--
Russ Roberts: Isn't that great? I mean, better to find it than not find it.
Eric Topol: Well, no. As it turns out, most of these rabbit-hole adventures wind up--they are very costly. They are very traumatic. I mean, you have to go through biopsies and sometimes other operations and all sorts of lab tests, so they are really expensive. And they create tremendous anxiety. And almost invariably there is nothing there. So, basically, it's a wild goose chase. And it's good for the revenue of certain entities--of people. But it's bad for patients. And we want to get rid of all of these. And, actually--this is getting to some fundamental economics of health care in the United States, because we have the big inequity problems. But that's not the only reason why our health care is so expensive, with poor outcomes. Part of it is because we overdo so much, and we have these rabbit-hole incidentaloma stories that are so frequent. And that is why we spend 18% of GDP [Gross Domestic Product] on health care, with decreasing life expectancy and the worst maternal and infant mortality of the whole OECD--36 countries. So, we have a really bad business model. And part of it is rooted in overdoing things. Unnecessary incidentaloma chasing.
Russ Roberts: And I just can't, you know, emphasize enough that 'more information is better' is just not true all the time. But our human impulse is: Give me the data. Which is a pretty good impulse to start with. But, in other contexts, it's not so good. It takes away your peace of mind, for starters--I'm sure you mention that--besides the biopsy, and the unnecessary surgery that sometimes kills you, and the radiation that kills you later, which you don't realize is part of the cost of this. It's not free, even if it's free out of pocket. Which is part of our problem. And the people who make that decision--the doctors--don't bear any cost if it turns out to be unnecessary. It still has these other costs, which are harder to notice.
Russ Roberts: Let's talk about pharma--pharmaceutical discovery. About 5 years ago I met someone at Stanford who was working in this area. And I thought, 'This is going to be such a big revolution. The ability to search for compounds and molecules many, many times faster, exploring many, many times the variation that's possible in a primitive lab setting.' And yet, much of that has not produced the discoveries that he was certainly promising at the time. What's the challenge there?
Eric Topol: Well, as you know, there's such a bad track record for these drug candidates that ultimately fail, whether it's because of unanticipated toxicity or lack of efficacy. So, we have a really big problem where so many research and development dollars go into drug efforts and development, and so little comes out of the funnel at the end that really works. And that leads to these multi-billion-dollar-per-drug development programs. And then we see it in the ridiculous pricing of new drugs. So, there is the idea--which is not yet proven, but it is certainly being pushed intensely, with a crowded field of at least 25 companies that are in one aspect or another of using AI to accelerate drug-discovery efforts, to make them more efficient. And it's every aspect, from mining the literature all the way through simulations and modeling, to do far better in predicting efficacy and safety. So, it's encouraging. We've seen some really big deals out there, like insitro and Gilead, where some of the traditional pharma companies, bio-pharma, are starting to realize they ought to team up with the AI experts. But we have yet to see a new drug that's a true AI product. So, again, this is one of those waiting-for-proof points. But, at least there's some intense effort.
Russ Roberts: A friend of mine who is a chemist said, when I asked him about this, prepping for this conversation, 'We just don't know enough about the body.' It's not so much that we don't have data. We just don't know how the body works. And of course maybe eventually the data will help us get a better grip on that. But there is just so much to be discovered. And I think as lay people we tend to assume we've, quote, "pretty much figured everything out." Maybe not the brain. But, you know, the body--we've kind of got the basics. But it's still really hard.
Eric Topol: Well, and also because we're so heterogeneous. So, you think you've understood things, and then you start to apply this, and people who are of different ancestry or different co-morbidities or whatever the differences are--they are all over the place, and it's challenging. And so it's hard to predict. And that's a lot of what AI tries to do: use data--at depth--process it, and predict what's going to happen. And, you know, we'll see. I think drug development is kind of like overall health care: it's so inefficient that it probably is unidirectional--it probably has to get better. But it remains to be proven.
Russ Roberts: Let's talk about mental health. You talked about the potential of machines--machine learning and AI--to accurately detect depression 70% of the time. And I thought that was a really nice example of one of the challenges of dealing with false positives and false negatives. Which is: How would you know what the size of the false-positive and false-negative populations is? And how accurate the forecast is? I don't think we can clinically define depression. So, what would it mean for a machine diagnostic to be accurate 70% of the time?
Eric Topol: Well, I think it's going to get, at some point, much higher, and that is because up until now we've relied on these subjective--I wouldn't even want to call them metrics. You know, 'How are you feeling? Are you down?' You know, this is really soft. But now we have hard metrics. And multiples. So, it turns out our speech, our voice, is really rich: you can tell a lot about a person's state of mind, their mood, from their voice. And beyond their voice, their breathing. So, if they're breathing with lots of sighs, that would denote depression. And then there's the keyboard of your smart phone, where you are texting--the strokes on the keyboard are very telling. No less your physical activity: how much you are communicating. There are so many features that can be collected passively now, with minimal effort--truly passive--that would give a person's state of mind, which we didn't have before. So, one of the interesting things, Russ, is: how much of this data do we really need to get a truly insightful score, if you will, or grade, of a person's state of mind? Because it's for that person that we're trying to assess whether or not an antidepressant medicine is working, or a dose of the medicine, or other means of therapy--whether it's just increasing physical activity or sleep or whatever it is. So, what you want to do is have these metrics, and then use long data. Long data, you know--serially you are assessing this. So, it just has to be good enough that it really is a genuine measurement for that person. So, it's still early days, but we didn't have objective, multiple metrics before that could be seamlessly captured; and now we do. So, there's excitement for that. There's one other unanticipated area in mental health that's especially exciting, and that's the idea that people are more comfortable sharing their innermost, deep secrets with an avatar instead of a human being. So that will help the cause as well, potentially.
Russ Roberts: Yeah; I found that extremely interesting. My great-grandmother--I think I wrote about this in my Adam Smith book--my great-grandmother, who was probably born in either Poland or Russia or Hungary, depending on what war it was after--her joke was she used to live in Russia, and then after I think it was WWI the territory got renamed Poland, part of Poland, which is great because then she didn't have any more Russian winters. Sorry. Rim shot, there. But, she used to say to my father, or her parents, 'If you're depressed, go outside and talk to a rock.' And there was a lot of folk wisdom there, because there are a lot of things you'll say to a rock that you won't say to another person. It's part of the religious impulses; it's the idea that you are talking to something larger than yourself. Now, a rock is a little smaller than yourself, perhaps; but it's an avatar that we might endow, irrationally, with the ability to listen. And we can share more; and sharing might be the key, not somebody listening.
Eric Topol: I think you've nailed it; and so has your great-grandmother with that. The point being that a lot of us did not expect that there would be this great comfort--enhanced comfort--in dealing with a machine versus a human in such a private matter. But it turns out this is the case: it's been replicated by many groups now. And so when you combine the ability to passively have deep learning about a person's state of mind with that feature, you start to see a path toward relying less on the mental health professionals we lack today. So, as you well know, depression is one of the greatest burdens of chronic illness we have. And it has an immense impact on disability. It's something that is not supported enough, because we don't have the counselors and psychologists and psychiatrists. There's a mismatch that's profound. And it's not just depression: there are all the other mental health conditions. So, we could take advantage of these new aspects where we can take all this data--so you have machines, in kind of two dimensions, making a potential solution. It's still very early, though; and again, this is like so many things we're discussing: great potential, but it has to be proven.
Russ Roberts: Yeah. I think most people think, incorrectly, that depression is a definable disease with symptoms that can be established in the way that cancer cells can be. And that's not true. As you say, it's often defined as the conjunction of a set of scores on answering a set of questions. It's not something you can do a scan of and determine. One of the things I worry about, after a conversation with Amy Webb here about artificial intelligence and some of the privacy issues she's worried about: I can imagine, if my keyboard touches were sufficiently unenthusiastic and my tone of voice when talking to my virtual assistant was sluggish or low-energy, that I might be put on leave from my job--for my own protection, say. Or required by some surveilling entity to report in--for my own good, of course, would be the claim. But there are some privacy issues there that are very scary to me.
Eric Topol: I agree with you. That is something that hasn't been adequately addressed. And it needs to be. We have to have assurance that your voice, your state of mind, and these other metrics are not going to be used by employers or health insurers, or for any discriminatory use. So, this is something that has not been sorted out, and it is one of the many aspects--it's ultimately soluble. I mean, you could have laws; there could be technological ways to help preserve privacy and prevent misuse of data. But we haven't really got there--the technology is way ahead of the legal and ethical aspects, which need to catch up.
Russ Roberts: Yeah. The thing I worry about is the company that promises it won't sell my data, it won't use my data. How do I know? What would be the--I like the Reagan rule: Trust but verify. I like trusting. I'm all for it. But, how do I verify that?
Eric Topol: And that's why I think the only model that works is ownership: You own your data. And if you want to sell, okay. If you want to share it for a research study, great. And if you want to participate in or just have co-production of your health care with a doctor or health system, fine. But, if you are in control of it, you own it--otherwise, I don't know how we are going to get to the trust-plus-verify mode. And we've already seen in the non-medical sphere how our data are being brokered and sold left and right. And that's just unacceptable. You know, you might get away with that with things that are not material. But, there's nothing more precious about your data than your health, your conditions, the medications you are taking. You really don't want that out on the Internet.
Russ Roberts: A lot of people, of course, are worried about--they call it monopoly; it's not literally monopoly--but the market power that large firms have that collect a lot of data from us. It seems to me--a lot of other people have noticed; I'm not the only one--that the traditional antitrust solutions to these problems don't seem relevant; they seem literally orthogonal to what the issues are. I think we ought to be looking at changing the regulatory and legal environment for property rights and ownership of that data. And then if people want to sell that data and profit from it, that would force these companies to share their gains with their so-called customers--who in fact are not their customers a lot of the time; we're just their inputs.
Eric Topol: Or their prey.
Russ Roberts: Yeah, I would say that. I'm trying to be neutral. I'm drifting into a bad place. I'm worried.
Russ Roberts: You mentioned a potential for hospitals' becoming obsolete. You mentioned that hospitals, I think, account for about a third of our medical costs right now. Those costs don't go literally just to the hospital: they go to the doctors in the hospital. But the hospital structure is a huge, formidable barrier to innovation in my mind. Why do you think there is a chance they could become obsolete, how would that happen, and why would it potentially be a good thing?
Eric Topol: Well, I don't think there's a question that it will happen. It's just a matter of when. So, we have the ability now to provide exquisite monitoring, with sensors, as good or better than in an intensive care unit. And all we need is to get remote monitoring of the patient's bedroom, not the hospital, to replace this, and prove that people are safely looked after in their own home. That would save immense costs, because the average hospital stay in the United States is a charge of $5000, and a true cost of at least half that much. And then you have the risks that are involved in a hospital room. We're not talking about an intensive care unit: in a regular hospital room, these people have a 1 in 4 chance of being harmed. Particularly through acquiring an infection--a serious, so-called nosocomial infection--in the hospital setting. So, to be able to keep people in the comfort of their own home, in a safer environment, with continuous monitoring during the time they are ill, replacing the time they'd be sitting, lying in a hospital room--this is something that is eminently achievable. The problem is that it challenges this critical vested interest--as you said, about a third of our health care budget each year in the United States. So, taking it on, with such vested interests--so little is being done, amazingly, to replace hospitals. You can only imagine why. And I also would add that we are not talking about getting rid of operating rooms and intensive care units and emergency rooms, or imaging suites. We are talking about the vast bulk of a hospital, which is regular rooms. And they are really unnecessary. And we should be making the movement to eradicate their need. But, interestingly, Russ, at the same time we are talking, there are new hospitals being built with an enormous number of rooms.
Russ Roberts: But some of that cost--I would think a significant part of that cost--is the technology inside that room. And you'd have to replicate a good chunk of it in the bedroom of that patient that you want to keep at home. How is that going to work and still be able to save costs?
Eric Topol: No, I mean, I think basically you get a wrist sensor--there have been a couple that have been approved now for home use--and you continuously monitor vital signs. And then, if you want, you have machine vision with a web cam [camera]. Basically, you have the deep phenotyping of that person--understanding everything about that person, all their conditions, their medications--and you are now getting all their sensor data. And you are getting an alert as to whether there's a problem predicted to arise, before it happens. And that alert goes to a center--a virtual center--like we have already seen operational in St. Louis at Mercy Hospital, where they have a virtual medical center with no patients in it. And so, that gets rid of the need for all these rooms. At any given site you could have doctors and nurses monitoring patients at scale, at very low cost. Because it's basically using AI for each patient, to have their data continuously updated, and alerts that have to be reacted to, in case there's any sense that the patient is starting to have a decompensation or a risk.
Russ Roberts: It seems to me that, for a lot of us--incorrectly--but for a lot of us, a hospital is a haven. It's where we get to--and once we get there, we're safe. You know, in the movie, the ambulance is racing to the emergency room--whether it's the pregnant wife or the accident victim or the terrorist victim. And once you are in the hospital, you are okay. Because in the TV show, House is there--Mr. House [Dr. House]--and he's going to diagnose you; and his doctors and colleagues will stitch you up and give you the magic pills and the fabulous operation that's going to remove the thing or whatever it is. And you are going to be okay. A lot of times that's true. The flip side of that is that one out of four times you are going to get a potentially life-ending infection there. So, there are people who view the hospital as the last place they'd want to go. But it just seems to me, psychologically, that human touch is really important. So, let's shift to the last part of the book, where you talk about the role of empathy in medicine. And, although I'm intrigued by the idea of home monitoring, the human touch seems essential--physical touch, even, sometimes, as you emphasize in your chapter on this. So, talk about why empathy is so important, and how you see that interacting with a world of so much more data--cold, inhuman machinery, data, monitoring, sensors, etc.
Eric Topol: Right. Well, just to round out our hospital discussion: I hasten to add that George Orwell called the hospital the 'antechamber to the tomb.' And also that there isn't that much human contact with doctors, because they don't have time: on rounds, they'll just step in to see a patient, maybe say a few words--very little in the way of exam and meaningful interaction. So that, of course, could be achieved through other means. And we'll get to that. But, getting to the deep empathy potential: with all this efficiency, productivity, keyboard liberation, the patient taking more charge--every way you look at it, you've got the ability to decompress the current lives of doctors so they have more time to think, to interact, to listen. I mean, you know, one thing--a patient's story, the life story--that will never get digitized. I mean, that's something you really want to listen to, and pick up all the nonverbal cues. And there just isn't time for that today. So, when you add up all these things--reestablishing trust takes time; presence--that is, that you are actually looking at the patient, talking to them, and doing a real exam rather than a pseudo-exam or something cursory. Patients want to be examined. They know, when the stethoscope is being put on the outside of the clothes, that that's not a real exam; and they're just disappointed that the doctor is not taking the time to do it right. So, no matter how you look at it: if you have time and productivity, potentially this could be turned back to the patient-doctor relationship. And it can reestablish what health care was meant to be, what it used to be. When I graduated medical school 40 years ago, it was there. There was a precious relationship. There was an intense trust and human bond--not always, but characteristically. And that was, of course, the era of Marcus Welby, who was kind of the icon for all that. Well, now, we don't have that. We've lost our way. We've got all the burnout.
We've got patients who are disenchanted because they don't feel like they're cared for. But we have a mechanism now to get that back--one which we may not see ever again, or at least for generations, because we've never seen anything else with this much potential. And so, the problem, Russ, as you can quickly see, is that it could be used the wrong way. Doctors could be squeezed down further even with more productivity: 'Well, go see more patients. Go read more scans and slides.' And if that happens, that would be the ultimate disappointment--that this technology was not used to restore care in health care.
Russ Roberts: Yeah. To me, the dark side is there's a kiosk. But it's also the bright side, because it will be cheap. You walk into the kiosk, you close the door, you talk to the robot--the computer, the thing that looks like a human but isn't. You tell your problems to it. It decides you've got an ulcer; and it dispenses the antibiotic right there, and then everything is great. Except, one of the things we've learned--and it shines through in your book, almost incidentally; I mean, you mention it, but it can't be mentioned enough--is this: So many times in your book there are stories--I think it's at least three--where a human being makes contact with another human being and transforms them. Certainly emotionally, spiritually, humanly, but also medically even. And that empathy, that human connection, is a crucial part of the medical experience. And we suspect that's part of why the placebo effect is so important and so significant in medicine.
Eric Topol: Oh, there isn't any question of it. I mean, study after study in placebo science shows that it's the interaction with the doctor. Even if the doctor says, 'This is a placebo; it has no action,' it still works. It's that human touch, the human factor. That is what medicine is all about. That is why we went into medicine: we want to care for people. And we've been made unable, or impaired, in trying to execute that charge--that privilege, if you will. So, that is something so deep and important that we are looking at a potential to get it back. No matter how you look at it: medicine is a human touch, human factor story that's lost its way. And I don't know of any other way we can get it back. But it's going to require a lot of activism. It's not going to happen by accident. Eventually, though, we'll see whether doctors can rise up and say, 'I demand this time be turned back to patients. I need time with my patients so that we can go after the things that are important.' Now, you were getting at a kiosk model, where the unimportant things can be handled--like, today, in the United Kingdom, you can go into a drug store and have your UTI [urinary tract infection] diagnosed with AI, and get a prescription--
Russ Roberts: --Urinary tract infection--
Eric Topol: Yeah. Which is a urinary tract infection. Which is a common medical problem. It's not serious. It's diagnosed--and treated--without a doctor. And so, we are going to see a lot more of those things--like ear infections for kids, and skin lesions and rashes, and the list goes on and on. And that again will decompress the need for the in-person appointment. It's much more efficient. It's a lot less costly. And so, we can get that time back--that essence, that gift of time--to pave the way for bringing back relationships. But if we don't use it to achieve those relationships, then we will just go down the spiral that we are on right now.
Russ Roberts: Well, as an economist, it strikes me that the incentives to move to that more human model aren't there. And I would suggest, drawing on my extensive bias against the way the medical system is structured, that it's the lack of feedback loops, the lack of skin in the game--in what should be the ultimate skin-in-the-game situation--because most of the money is being paid by a third party. That just is such a strange, convoluted system. I think it's really hard to get there from here.
Eric Topol: I think there's a way.
Russ Roberts: Tell me. Tell me.
Eric Topol: Yeah. It's going to take the unionization of doctors. And the reason I say that is, we don't have such a thing now. The AMA [American Medical Association], which is the largest doctors' organization, is not a union of doctors to help patients. It's a self-serving organization representing less than a fourth of doctors in practice. So, all we have is a bunch of trade guilds. And we have no entity that wants to stand up for patients. If we had that, it could take on, you know, this mission. And I believe that ultimately that could be formed. And that would promote activism. And the reason I say that, Russ, is, you remember recently that the National Rifle Association went after doctors? And they said, 'Stay in your lane.' And the doctors came back--like nothing I've seen in my career--with: This is our lane. And it was solidarity I have never seen. Okay?--
Russ Roberts: Over the right--over the right to ask a patient whether they have a gun in the house--
Eric Topol: Yeah--
Russ Roberts: Which I find intrusive, offensive. And outside their lane. But okay, at least they rose up together as one.
Eric Topol: Yeah. Well, that's the point: for the first time ever, really, we saw the passion and the, you know, posting of pictures of doctors splattered with blood, of floors and emergency rooms flooded with blood. And, you know, all the passion came out. And you saw, now, that doctors can rise. They can rally to work together for common cause. And social media helped that. I think we can do that. If we do that--whether it's with a formal union organization, which is solely for this purpose, not for, you know, having better reimbursement or other matters that are typically picked up by trade guilds--then we might get there.
Russ Roberts: The irony, to me, as an outsider in this field looking in, is that a lot of this is driven by what are called non-profits--the hospital system--by doctors who are human beings, who like money like everybody else but like to think that they are in it for higher causes. And of course they are. But the money is in it, too. Most doctors don't want to work for nothing. I get it. They make huge sacrifices to acquire the knowledge that we rely on, and draw on. So I don't have any problem with doctors getting really rich. But it seems to me--the way this would happen--I'm not a big fan of unionization. Let me suggest another model, and you can tell me what's wrong with it.
Eric Topol: Good.
Russ Roberts: So, some time in the past, a young man--I don't know how old he was--started the Cleveland Clinic. And, that was you. You are not as young as you were then. But let's say you started another--let's put it in San Diego or La Jolla. You're in La Jolla right now, so we'll call it the La Jolla Clinic. And the La Jolla Clinic would have a motto very different from the actual mottos of most hospitals, which amount to: maximize how many people you see, and drive the revenue, the bottom line, etc., etc. Now, of course, it's going to have trouble competing, in a certain dimension, with the current set of options. So it would have to draw on, probably, some philanthropic support. But I suspect there are an enormous number of very large foundations that would be excited to see the kind of model you are talking about. And you take that clinic, and you would be a champion, and inspire these kinds of cultural differences in how patients are treated. And you change the world. Now, of course, that's happening right now on a very small level, for very rich people who can afford a boutique experience and still get more than those 12-minute and 7-minute appointments. We just need a [?] bigger data set, more people.
Eric Topol: Well, but the other problem is that if you look outside the United States, you see countries that are far more efficient, and eager to adopt AI: like the United Kingdom, where I led their NHS [National Health Service] review and planning for the next 20 years; and China. But the point, I guess, is: we're at a desperate moment, economically, in health care. Because we have the worst outcomes, in the United States, at, at least, 2- to 3-fold the expenditures. I mean, if there was ever a broken business model, this is it. So, we need a new solution. And whether you start clinics, as you are projecting--that's not going to, of course, deal with this at the national scale. But watch the other countries that are embracing, planning, incorporating, implementing the things that we've been talking about today--you know, they are going to show that they are revving up their efficiency, and their productivity is going to be enhanced. Because that's what these tools provide. Whereas we have no plan in the United States. We have an executive order--the American AI Initiative that was announced in February--without one dollar of new resources. Without any committee or expert planning, or anything. So, we are behind almost all of the major countries in the world right now that are using this to come up with a better business model for how they can deliver health care--potentially restoring the care, the human element, at a fraction of the cost per individual of today.
Russ Roberts: Well, I have a different perspective. We're near the end of our conversation so I'm not going to open this Pandora's Box. But I do want to mention that, it's true we spend a lot more than many, many, almost every country. It's true we have on some dimensions worse outcomes. Some of those outcomes are complicated by the fact that our population is not like their population. And, I think--I'm going to guess, but maybe you'll disagree--that if you had a complicated medical situation you'd rather be treated in La Jolla than in London as part of the NHS [National Health Service, British health system]. Now, that might not be true if you are a poor person. And that's unacceptable; we can debate how best to fix that. But I think some of the international comparisons are misleading. Do you worry about that?
Eric Topol: I thought that, just like you articulated, Russ, before I spent a lot of time in the United Kingdom. And then I started realizing that the care they are delivering--without all the rabbit-hole incidentalomas that can hurt people--the quality of care was excellent. And that explained to me why their outcomes are better--at a third of the individual cost to the country. So, I understand--you're reciting the kind of party line that we have such great health care outcomes, that the United States is superior. I used to think that. And now I have a different view. I also think that we have the mechanism, the ability, to be the world leader in delivery of health care. But we have so many mal-incentives that get in the way, that are formidable. And, can we override them? You know, it's possible. I mean, coming from me, talking to a noted person in economics when I have no background in that area--of course, I'm definitely not going to come out with the leading ideas. But it just is challenging the status quo: we've watched erosion of care, worsening of outcomes, at higher costs relative to any other model in the world. Something needs to be done to really rethink what we're doing.
Russ Roberts: Yeah; I don't disagree with that. I think we waste an enormous amount of money. We are extremely innovative, because we pay for all these innovations with other people's money--innovations that the rest of the world often enjoys. I would just mention in passing that my friends in London, who are American, don't feel the same way you do about the NHS. But maybe they come with their own biases--you know, they want to get all the tests. And you don't get them in England.
Eric Topol: No, they don't.
Russ Roberts: You've got to be bleeding or have a bone sticking out before they'll take care of you. And there are some advantages to that. I get it. And our culture just doesn't easily take to that. So, they don't have very many MRIs [Magnetic Resonance Imaging machines]. The advantage is they don't find a lot of incidentalomas. The disadvantage is they miss some stuff--because there's a longer waiting list, and so on. But I don't deny the reality that our current system has atrocious incentives for spending too much money with very little return. And I do think we need to do something different. The question is, you know, how do we get there from here? And--tell me if I'm wrong--I don't think there's a lot of face-to-face, long periods of doctor interaction in the rest of the world, either--
Eric Topol: Oh, no--
Russ Roberts: So, some of that is just a universal problem.
Eric Topol: Right. And that's where the AI tools that we've discussed could certainly lead to improvements. I mean, that's essential. That's medicine at a global level that is deficient in time. I mean, in many parts of Asia, it's 2 minutes an appointment instead of 7. So, you know, it has to get better. But, just getting back to the U.K. model--because, in the United States, as you know, life expectancy has decreased three years in a row. In the well over a century that this has been charted, that's never been the case--a country having declining life expectancy three years in a row. And at the same time, the United Kingdom, like every other country, is increasing life expectancy. So, just to your point about things being missed: well, they must not be too important if they are not having any effect on life expectancy. So, you know, we've got to address this, and have, I think, an open mind that our model is hurting people somehow.
Russ Roberts: Well, again, I would just mention, respectfully, that many of the things going on in the United States are not going on elsewhere. We have too easy access to opioids--
Eric Topol: yes--
Russ Roberts: We have a big country with lots of cars and inexpensive gasoline, so we kill a lot of people in our cars. We hope to make some headway against that. We have women delivering babies at much later ages, because we subsidize fertility treatments that they don't. And most of those things are tricky for us to "fix." There are tradeoffs there that I think a lot of Americans would not be in favor of. But, you're right: we can do a lot better. I think that's undeniable.
Eric Topol: You know, I take your point about the need for risk adjustment, for different populations. But it's sobering to look at that data. And I think--you've referred to so many of our embedded problems, you know, with insurance companies, and all the different perverse incentives. If we can somehow work our way through this, where we are truly patient-centric, and get back to the basics here--What are we doing all this for? It's for promoting the health of our people--you know, that would be a fundamental axiom. And I hope, again, that the tools AI lays before us really can make a difference, if we develop this properly.
Russ Roberts: I want to close with a personal question. If you don't want to answer, I'll take this out of the recording. You strike me as an extremely empathetic person. Could be a show. Could be a facade. But it screams out of your interactions with me--in email, in the conversation we are having, in your book. And one of the things I learned from your book is this: I'm a big fan of conversation. We are having one now. I think we learn as humans from conversation, and reading your book made me think that medical students would benefit from a course on how to have a conversation. And not just like an ethics class--which I think is maybe not so valuable, but at least may be better than nothing. I don't know. But, I'm curious what your thoughts are on how to train people to be empathetic. And if you think you have any understanding about why you are the way you are. Is it genetic? Did somebody influence you as a younger man to be more empathetic, as a physician? You seem remarkably humble; and I haven't even listed your professional accomplishments at any length in my introduction earlier. So--do you have any thoughts on that?
Eric Topol: Well, it's a great question. And I do feel that I can, um, connect with other people and patients. That's a real big part of me. And the question that I grapple with in the Deep Empathy chapter is: Can that be nurtured? Can that be trained? I do think you are getting at a fundamental point about this, which is: people with a high level of emotional intelligence really can read other people--you know, through nonverbal cues--and feel what's going on in other people as they try to express themselves, and what worries them, and what excites them. So, uh, I think that in the future, if we want to get medicine to that level of human contact, human bonds, we want to really foster, select the people who have this high empathy quotient, empathy factor. And that isn't how we pick people today to become doctors. We are largely, you know, finding the brainiacs--finding the people with the best test scores and the best grade point averages in college, and that sort of thing. Whereas, I think, because we can outsource so much of what used to require the mind of the doctor--a lot of that's going to be done through machine support--I hope that we can find the people who have it. Whether it's genetic, whether it's the way they were brought up by their family, whatever it is--it's probably some complex admixture--it's there. I'm not sure that you can train it. You can train people to listen. But to some extent, they have to have that in them before they go down the road of being a really important listener, clinician. So, we'll see. But, I think, of course, there are some embedded qualities that we can select for.