Intro. [Recording date: September 22, 2016.]
Russ Roberts: Now, I'm a little disappointed when I read the book (Messy: The Power of Disorder to Transform Our Lives) that only a small portion of it is about the virtues of a messy desk--as a particular example. Which is a lifelong habit of mine. But it turns out, you are interested in 'messy'--writ very large. So, let's start by talking about what you mean by messy.
Tim Harford: So, I do partly mean literal mess--the mess of a cluttered in-box or a cluttered office or an untidy desk. All of that. But, yes, I also mean a more metaphorical mess. So, the sense of annoyance when you get interrupted, or when you have to work in a complex team with people you don't really agree with, who come from a different background. Or the mess of ambiguity versus a very tidy system of targets and quantification. All the mess of improvising a response, whether you are on a sales desk or whether you are running to be President, versus the tidiness of prescripting everything. So, it's really an argument that we instinctively like to keep things tidy. We like things to be systematic. We feel bad if we don't keep things tidy. We like to quantify things. We like targets. We like preparation. We like scripts. All of these things make us feel better. But actually, the things that we really value--for example, responsiveness, creativity, speed, resilience--often have big elements of mess inextricably bound up with them. So the book explores lots of different examples of that, and how we can learn to suppress our tidy-minded tendencies.
Russ Roberts: One thing you didn't talk much about--or maybe I missed it--was why we have this desire for order. And it's certainly true that I think people like myself, who have a messy desk, and who are not particularly organized elsewhere in life--such as, you know, a to-do list. I don't plan very well. In fact I may have mentioned on the program before, I used to teach a course in how to be organized, and how to use your time effectively, when I was at the Olin School of Business at Washington University. And it was a brilliant class--fabulously popular! It was a session, a lecture; and I stopped teaching it when I realized I couldn't implement any of my advice. So, I'm kidding about it being brilliant. But we have this idea that we should be organized. We should be clean and tidy. Where does that come from? Why do we think of that as attractive at all?
Tim Harford: I'm not sure. It seems to be quite deep-seated. I mean, it's partly just that order is a thing that you can very quickly perceive. It's a--at least--superficial order is a thing that you can superficially perceive. You can see whether someone dresses well. You can see whether someone keeps their inbox empty, keeps their desk tidy. You can see this stuff. And of course we often make judgments about people based on superficial things. And those judgments are often wrong. And so it's partly the way we look at other people. But I am sure that's not the whole answer. And I don't know really what the answer is to your question. But it's a thing that we often seem to feel. So, you have something in common with Benjamin Franklin. Maybe you have many things in common with Benjamin Franklin, Russ, I don't know.
Russ Roberts: Oh, I like [?]
Tim Harford: So, Franklin, of course, one of the most fascinating men in history--charted the Gulf Stream, signed the Declaration of Independence, President of Pennsylvania, inventor of a clean-burning stove and bifocals--I mean, all these amazing things that the man did. But some people know the story about Franklin's virtue journal, where he wanted to improve himself. And so he made a list of virtues. And they were not really moral virtues; they were all to do with productivity and good conduct. They were things like: Don't drink too much. Be humble. Don't talk too much. He had 13 virtues. And he systematically worked through his journal, every 13 weeks, going back to focus on each virtue a week at a time, and recording his efforts to improve. And remarkably he succeeded with every single one, except one. And the one he couldn't nail was Order. He said, 'Let all your things have their place. Let each part of your business have its time.' In other words: Keep a tidy desk. Keep a tidy filing cabinet. Keep a tidy and well-organized diary. He couldn't do it. And he felt guilty about this his whole life. He sort of felt, 'If I had finally managed to make a more assiduous use of manila folders, I would actually have achieved something in my life instead of being this loser.' So, it's extraordinary. Even Benjamin Franklin felt that if he had been able to tidy up, he would have done better. Now, of course, I discuss in the book various bits of research on why messy desks are highly functional. There's a theoretical reason for this, which actually I've learned more about since I wrote the book--we can discuss it if you like--and there's also just a practical observation: people with messy desks seem to do very well; people with very tidy desks often keep them tidy by filing stuff away. And the filing is often premature. There is such a thing as premature filing, Russ.
And premature filing is exactly what it sounds like. If you let stuff sit on your desk for a bit, you get a bit more context: you understand whether this thing is going to be important and where it might fit, or whether it just needs to go in the trash can, and you can throw it away. Whereas if you feel you need to keep your desk tidy, you are filing stuff immediately--and actually that stuff should just have been thrown away. So people who have very tidy desks often have huge archives stuffed full of irrelevant material, and they can't actually find anything. So, the mess is surprisingly well organized.
Russ Roberts: My daughter once was using sidewalk chalk to embellish our walkway, when she was, probably, 4 years old or so--5, 3, I don't know, very young. And she made a beautiful drawing. And then she abandoned her chalks--they were scattered all over the sidewalk--and went into the house. And I thought, 'You know, I've got this bad habit of being messy and sloppy and disorganized; but I'm not going to let my daughter go through life with this handicap.' So I brought her out. And I said, 'You know, everything has its place.' Which is a phrase you use in the book, which is a standard argument, people who are in favor of tidiness use: Everything has its place. And before I could continue, she said, 'Oh, I know. And these are the places where the chalk belongs now.' And she went back in the house. This is its place. And I thought, 'Hmm. I wonder if it's genetic'--
Tim Harford: She kind of has a point there, doesn't she?
Russ Roberts: Yes, she did.
Tim Harford: Actually, it's a pretty good place to have the chalk, which is next to the sidewalk, where you want to do the sidewalk drawings.
Russ Roberts: Absolutely. And--
Tim Harford: The tricky thing about the motto, 'Everything has its place,' is that that's fine if you are talking about a pair of shoes or a corkscrew or your keys. I mean, that makes perfect sense: 'There is a place for my keys, and I should make sure I always put my keys in that place when I enter the house; and then I won't lose them.' It's totally logical. Makes perfect sense. But when you've got mail arriving every day--documents, email, stuff colleagues are sharing with you, stuff coming in over social media--where is the place for all of that? How do you decide what place it goes? And actually it turns out a big pile of stuff is probably a pretty functional way to deal with that. Because you won't be able to categorize that stuff quickly enough. And it's an illusion to think you can. And it's counterproductive to try.
Russ Roberts: When I taught this course, this lecture in time management, I used to talk about the messy desk. And say, 'What you should do is take all your papers--they are spread out over the desk in piles--and organize them. Make a pile for things you have to do immediately: urgent things. Make a pile for stuff that you need to get done soon but not urgently. Another pile for stuff you are going to file. And then, finally, a pile to throw away, or at least to consider throwing away.' And by the time you've made the piles, if you are like me, you are so tired, and so worn out by making those decisions, that you realize you aren't going to do the filing right now, or the execution of the urgent tasks. You'll get to that later. And then, what you end up doing is taking the different piles and piling them on top of each other crosswise--so you keep the organization in place, so you can come back to it later. And I found that almost never, ever could I execute those simple piles in the strategy that I planned. And so finally I just gave up.
Tim Harford: Well, let's ask ourselves how a computer would do this. So, I've been studying computer science since I put the finishing touches to the book. And there's an analogous problem in computer science, which is this: computers, as you may know, have memory caches. So you've got a big store of memory, but it's pretty slow. And then you have smaller, faster chunks of memory that the computer can access really quickly. And in modern computers there are quite a few different levels of this. And cache management is a really important problem. If you want a computer to run quickly, the decision of what data gets put in the fast caches is really important. And it's pretty tricky, because it involves predicting what data the computer will have to use, and you don't know what data it is going to have to use. It turns out that a very common solution to this problem, one that has been proved to be highly effective, is called the 'Least Recently Used' rule. So, what you do is put everything you've used recently--all the data the computer has touched recently--in the cache. And then, when the cache is full and you have to eject stuff, you just eject whatever has been used least recently. So, everything the computer has been analyzing recently, you keep in the cache; and anything it hasn't looked at recently, you throw out. It turns out to be very effective. Okay. Now, how do we implement this strategy on our own desks? Well, I'll tell you what you do. You put everything in a big pile. And then, whenever you touch anything, you put it back on the top of the pile. And the stuff that is becoming obsolete, that you are not looking at, slowly drifts to the bottom of the pile. And the stuff that you keep accessing keeps being put on the top of the pile. So, your pile is now self-organizing. And you have the stuff you really need on the top. It's convenient.
And every now and then, you pick up the bottom of the pile, have a quick flick through it, and most of it you can probably throw away. So, I mean, the point is, that's not necessarily the very best way to organize your desk. But this pile--we think it's random; it's not random. It's actually quite carefully structured, by a natural process of picking up papers and putting them back again. And I think we under-rate the natural organization that the pile of papers on the desk has. We assume it's random. We assume it's just mess. It's actually carefully structured.
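Harford's self-organizing pile is, in computing terms, exactly a least-recently-used cache. Here is a minimal Python sketch of the idea (the class name `LRUPile` and the example "papers" are my own illustration, not from the conversation); it uses `collections.OrderedDict` so the dictionary's insertion order tracks the pile's order:

```python
from collections import OrderedDict

class LRUPile:
    """A least-recently-used cache, framed as the desk pile Harford
    describes: touching a paper puts it on top; when the pile gets
    too tall, the least recently touched paper is thrown away."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.papers = OrderedDict()  # last key = top of the pile

    def touch(self, key, value):
        """Use a paper: write/refresh it and move it to the top."""
        if key in self.papers:
            self.papers.move_to_end(key)
        self.papers[key] = value
        if len(self.papers) > self.capacity:
            # Evict from the bottom of the pile: the least recently used.
            self.papers.popitem(last=False)

    def get(self, key):
        """Read a paper; reading also counts as touching it."""
        if key not in self.papers:
            return None
        self.papers.move_to_end(key)
        return self.papers[key]
```

Real CPU caches, and Python's own `functools.lru_cache`, use this same eviction policy, which is why the pile-of-papers analogy works so well.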
Russ Roberts: Yeah. Who says EconTalk isn't practical, those of you listening out there? I do like that suggestion. And of course many of us do that implicitly without thinking about it in some way or another. Thereby violating my--a friend of mine's dictum, which is: Touch each paper once and only once, which is another one of these rules for a neat desk--which I could never implement. I would pick up a piece of paper, and I'll look at it later, maybe I'll decide to file it later; maybe I'll execute it; maybe I'll throw it away. But it's better, safe now, just to keep it on the desk. Later on, I pick it up again. So, the chronological filing system which you're suggesting is more effective than that. I have to say, there is something appealing, even for a messy person like me, to a neat desk. And there is an argument--and it's interesting how difficult it is to implement this argument--there is an argument for, as you say: If you haven't touched something in a very, very long time, it's very unlikely you'll need it; you should just throw it away. So, I have a surface--my wife complains that we should buy nothing that has a horizontal surface because all we do is pile stuff on it. She's right, of course. But that bothers me, that that is the reality. So I'll always say things like, 'Well, we'll just--eventually, we won't pile anything on this one, this desk' in my bedroom, say. 'We'll just use this for working at.' And it never works out that way. It accumulates clothing, it accumulates folded clothes, unfolded clothes. Books, papers, knickknacks, pens, all kinds of strange stuff end up across the desk. Including a bunch of papers. Those papers, I have not touched in about a year, the last time I tried to neaten things up. Why don't I just throw them away? And I can't. It's very difficult to do. Even though I do like the idea of a clean desk. And I don't know why.
Tim Harford: It is tricky. And sometimes I wonder whether it's compulsive--the hoarding behavior--whether it's not exactly about getting organized at all: there's something else going on. What I would say is, I'm not against throwing out a bunch of stuff. I'm actually a bit of a minimalist myself. I don't like to keep too many papers; I don't like to keep too many clothes. I'm not a big fan of huge hoards of stuff. But the point is, a lot of people, when they are faced with hoards of stuff, their impulse is to try to get it organized.
Russ Roberts: Yep.
Tim Harford: And I think the organization efforts often fail. And you are better--if the stuff is overwhelming, you should probably get rid of the stuff. There is a famous book by Marie Kondo, the Japanese decluttering expert--which I'm actually a big fan of. People have already started talking about Messy as the opposite of Marie Kondo's book, The Life-Changing Magic of Tidying Up. Actually, if you read that book--I mean, we are getting a long, long way away from EconTalk here--but if you read Marie Kondo's book, one of the things she says is that systems for tidying up are a trap. Systems for getting organized--you buy a new filing cabinet--don't actually help you. At some stage you just have to throw the stuff away. And I sympathize with that, I have to say.
Russ Roberts: Yeah. Those systems--file cabinets, and storage bins--they are all, I think, designed to employ the people who like to create them. But they don't seem, for me, to be very helpful. At least not so helpful in my personal life. The last thing I want to say about tidying up--then we'll move on to meatier topics--is that when people talk about the productivity of a clean desk--and they make the case; I don't think it's true, but people make the claim that 'your head's clearer, like the desk,' etc., etc.--they don't take into account the time you spend cleaning it up. Which is time you can't spend working. So, for me, my constant justification--which is probably a crutch--but my constant justification for messiness on my desk, and in my personal life where I'm late taking care of this task or another, is just that I'm more productive right now; I should be doing the productive things and not this act of organization. There's nothing productive about it.
Tim Harford: I think people also--I think you are absolutely right. People also get cause and effect mixed up. So, my desk gets really messy when I'm really busy. And so I feel a bit stressed. And I feel there's lots of stuff buzzing around in my head. And it's natural to blame the desk. But it's not the desk's fault. The desk got messy because I got busy. It's not that I started feeling busy because my desk was messy. But we naturally, I think, flip that causation in our heads.
Russ Roberts: So the first part of the book--strangely enough, not about desks. It's about creativity of various kinds. And you start out with a Keith Jarrett story which is pretty remarkable. Tell that story and what we can learn from it.
Tim Harford: This story begins in early 1975. Keith Jarrett, the great jazz musician, was invited to play a concert in Cologne. And the organizer of this concert was actually a 17-year-old girl--she was the youngest concert promoter in Germany. And she had managed to persuade the Cologne Opera House to host this late-night jazz concert with Jarrett. But--maybe she wasn't quite as competent as she was ambitious, or maybe it was somebody else's fault--anyway, whoever's fault it was, there was a problem with the piano. So, Jarrett used to do these amazing concerts where he would just step out and improvise: sit down at the piano, solo jazz, no rehearsal, nothing. But when this girl--she was called Vera Brandes--introduced Jarrett to the piano, it became apparent that there was a big problem with it. It was a rehearsal model that had just come out of storage. It was out of tune. A lot of the keys were sticking. On the treble notes the felt was worn away, so they sounded very tinny. The piano was too small. There were so many different things wrong with it. And some of them could be fixed: you get the piano tuner in and you fix them. But a lot of them couldn't be fixed.
Russ Roberts: And there wasn't much time.
Tim Harford: Yeah. There wasn't much time--just a few hours. It was raining outside. And, as it happens, you couldn't bring another piano in; and the Cologne concert hall staff--it was Friday afternoon--had all gone home, and this was a late-night concert. So, they just couldn't solve this problem. And Jarrett--I think very naturally--said, 'I'm not going to play.' And of course Vera Brandes then begged him to please play. And then he felt guilty, because she's 17 years old and there are 1500 people coming; and: 'Yeah, okay, I'll play.' So he agrees to play. And he and his producer decide they are going to record this thing, because they want a documentary record of what a musical catastrophe sounds like. And they record the concert; and he plays. And it's magical. It's absolutely magical. And it is beautiful. This concert, this music, was actually played at the birth of two of my children. My wife loves it; I love it; lots of people love it. It's the most successful solo piano album in history--the best-selling solo jazz album in history. And yet it was recorded because he thought it would be terrible and he wanted a record of it. Musicologists can explain quite why it was so beautiful: basically, he was forced to make various decisions about how he played this piano to adapt to all its faults. And those decisions led to a new way of playing that had very soothing middle-tone notes: they weren't too high, they weren't too low, so it sounded very ambient. But he really had to bang away at the piano, because it was a quiet piano. And so there's this drive and energy, and peacefulness as well. And it's beautiful. So, a few things to say about that. One is: obstacles sometimes help us solve problems. Two is: we don't expect that in advance. So, it was totally understandable that Keith Jarrett refused to play. And he's a radical musician; he's very innovative; he's a very brave performer.
But even he didn't want to play this thing, and didn't see it as an opportunity. He didn't think, 'Oh, this is great. This is really going to bring out my creative side.' He said, 'There's no way I'm going to do this.' He had to be guilt-tripped into it. So, we tend to resist. Now, you might reasonably ask: Why would the bad piano make you better? And does that generalize? And I think there is a lot of reason to think it does generalize. Various pieces of research from cognitive psychology show that when we are knocked sideways, or distracted, or given slightly more tricky, unnecessarily tricky tasks, we often produce a more creative response. We can also see this through the lens of computer science. Consider a computer algorithm that has to find an approximate solution to a very complex problem. There is a whole family of these algorithms, but they all have one thing in common, which is that they all tend to use some randomness, especially early on. And the reason they use randomness is that otherwise they get stuck at a local optimum. They go down some blind alley; they find a spot that's pretty good but not all that good; and they can't get out. The search process the computer is using can't get out of this blind alley. But if you throw randomness in, and the computer suddenly makes a big move and tries something totally different, then it gets unstuck. So, both psychology and computer science suggest that we should not be entirely unhappy when something unexpected happens--when we are forced to work with a difficult person or with a substandard tool. It's not necessarily a problem.
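The "throw in randomness to escape a blind alley" idea Harford describes can be sketched with a toy local search in Python. Everything here is illustrative (the landscape `bumpy`, the step size, and the restart count are made-up parameters, not from the book): a purely greedy climb gets stuck at the nearest peak, while the same climb restarted from random points tends to find the taller one.

```python
import math
import random

def bumpy(x):
    """A toy landscape: x*sin(x) has a minor peak near x = 2
    and a much taller one near x = 7.98."""
    return x * math.sin(x)

def hill_climb(f, x, step=0.1, iters=200):
    """Purely greedy local search: only ever move to a better neighbor.
    Gets stuck at whichever local peak is nearest the start."""
    for _ in range(iters):
        for nx in (x - step, x + step):
            if f(nx) > f(x):
                x = nx
                break
    return x

def randomized_search(f, lo, hi, restarts=20, seed=0):
    """The same greedy search, restarted from random starting points.
    The injected randomness is what lets the search escape the
    blind alley that traps the deterministic version."""
    rng = random.Random(seed)
    best = hill_climb(f, lo)
    for _ in range(restarts):
        candidate = hill_climb(f, rng.uniform(lo, hi))
        if f(candidate) > f(best):
            best = candidate
    return best
```

Starting from 0, the greedy climb settles on the minor peak; with random restarts over [0, 10] the search reliably reaches the major one near 7.98. Simulated annealing and stochastic gradient methods use the same trick in less toy-like forms, typically with the randomness dialed down as the search progresses.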
Russ Roberts: Well, I think there are two things going on. And they are important to distinguish. One is the power of constraints. So, haiku being an obvious example, or even just the classical sonnet. The constraints on the poet often lead to wonderful results. And then there's the element of surprise: the unexpected, the discordant, the thing that you weren't anticipating that's thrown into your path. And I would think that people respond very differently to that. I think some people rise to the occasion--that would be Keith Jarrett. Other people don't. I was thinking of an economist at a conference who lost his notes. And you give a wonderful example of how the end of Martin Luther King's 'I have a dream' speech--the actual phrase, 'I have a dream'--was not in the text of the speech. And some people, forced to improvise like that, or forced to work on a bad piano, will suddenly, through adrenaline and inspiration, rise to the occasion. Other people, like the economist I'm talking about--he couldn't stop talking about it for the first couple of minutes of his talk: how unfortunate it was that someone had stolen his briefcase; how awful his talk was now going to be; how mad he was--just ruined. He just couldn't give it up. So it would seem to me to be very person-specific. What do you think of that?
Tim Harford: I think that's true. It is person-specific. Although what I would say is that this economist might well have done really amazing economic research if forced to operate under some constraint, or forced to work with somebody who saw things in a very different way. I think his problem was probably that he wasn't a very confident presenter in the first place. And if he wasn't a very good presenter in the first place, then giving him even more stuff to worry about didn't really help. Though, I think, you never know. Take my own career as a public speaker--I've been a public speaker for a long, long time, and I was very nervous at first. And I had a breakthrough. I was speaking at a school competition, and the rules were that everything had to be on a single 3x5-inch card. And I had printed out the entire text of my speech, double-sided, on this one card. In tiny, tiny print. Because I was so terrified that I couldn't speak from some bullet points. And halfway through the speech I looked down and realized I couldn't find my place on the card, because of this tiny writing. And then I started to realize that I knew the speech anyway. Or at least close enough. And I just started to give the speech without looking at the card. And I got a lot better. So, you know, even if you are not that good, sometimes the shock is what you need to realize that you can cope. But you are right. I am not making the argument that distractions and mess and obstacles and problems are always better and always improving. I don't think they are at all. What I'm saying is that they sometimes are; but we are never going to look for them. We are never going to seek them out. We under-rate them. We are always looking for ways to prepare more and to make things more systematized. And that doesn't always help.
Russ Roberts: Yeah; I was reminded of--I don't know if it's true, but I've read that Jackson Browne has a limited vocal range. And that that gives his songs--it's a little bit like the Keith Jarrett story; it reminded me of it--that he has a limited vocal range, and so he writes songs that he can sing. And that gives his songs a certain homey warmth that they wouldn't have if he could reach a lot higher notes or a lot lower notes. The other thought I had is of this great moment on the Jimmy Fallon show, when Jimmy Fallon has Idina Menzel on and they sing "Let It Go," but they are accompanied by instruments from, say, a kindergarten classroom--so it's not a great orchestration. It's kazoos and weird, strange, little xylophone-y things. And it's glorious, even though it's not particularly symphonic.
Russ Roberts: So, you talk a lot about teamwork and group work--improvisation, collaboration. What do we know about that? One of the examples you give is of people who can solve problems in areas that are not their specialty, because they bring a fresh look to them. Is that common? Or is that just a fluke?
Tim Harford: I don't think it's a fluke. But I think you are right to raise the question in a slightly skeptical way. I have to say, Russ: I've been really looking forward to this conversation in general, because I've been hoping you were going to clarify what it is that I actually think about my own book. Because there's a lot of stuff in there, and I was thinking, 'This is going to be great. Russ is really going to figure out what works and what doesn't about this book.' So, thank you. So, there's an example I give in the book--I give two different examples. I talk about a British rowing team, and I talk about the great Hungarian mathematician Paul Erdős, and the very, very different approaches they have to collaboration. This British rowing team succeeds because they are unbelievably focused. They are all exactly the same as each other and they don't talk to anybody; they don't take any input from anybody--even people you would think they should take input from. They completely cut themselves off, totally commit, and produce this amazing performance that people didn't think they were capable of. So that's a very homogeneous team. An isolated team. Erdős, on the other hand, holds this amazing record for collaboration. He is the most collaborative person in the history of mathematics, probably in the history of academia. He has so many joint papers, across so many different fields. He's just a complete poster boy for the art of making connections in unexpected places and drawing inspiration from that. And I think laying out those two extremes of the collaboration story tells us that, obviously, collaboration means a lot of different things. There's an advantage to being very committed to a team and really understanding that team and working together, all very well oiled and very well prepared. And there's also an advantage to gaining inspiration by working with total strangers in totally different fields.
And it depends what you are doing. So the whole chapter is set against that background. But there are a number of reasons to believe that working with people who have different backgrounds, different perspectives, is generally going to be helpful when we are trying to solve new problems. Scott Page, by the way, the complexity scientist, has a whole book on this, where he approaches this problem from all kinds of different directions and shows when it's true and when it's not true. So, one basic perspective here is: if you've got, say, four economists in a room trying to solve a problem, and into the room comes a physicist, or a sociologist, or an anthropologist--just anybody--they will have a different way of looking at the problem. Or you've got four Americans, and in comes a Mexican or a French person. They are going to see the problem in a different way, have insights that help unstick everybody else. If you just add a fifth American economist, you are not really gaining very much--even if that economist is really good, they probably don't have much to add that isn't already there in the team. That's one of the things that's going on. But there's something else going on that's much more subtle, which is that when we feel uncomfortable because there are different kinds of people in the room, we raise our game. There's a very famous study by Samuel Sommers of jury deliberations, where he simulated jury deliberations with black defendants and either all-white juries, or juries with some white people and some black people. And what he found was that the mixed juries were much, much more careful in really thinking through the evidence that they had heard. They were making a fairer decision; they took longer; they gave people a fair hearing.
But this wasn't for the reason you might expect: 'Oh, but of course, the black people on the jury sympathize with the black defendant and so they raise various objections.' That's not what's going on. What's going on is the white people on the jury see the black people across the table and think, 'Okay, I should be careful here. I need to justify what I say. I can't just resort to lazy stereotyping and wave everything through.' So, they raise their own game. And there's another bit of research I discuss in the book that's not racially charged at all, but it's a similar phenomenon, where you had groups of students trying to solve murder mysteries. The groups were either three students who knew each other plus a stranger, or four students who all knew each other. And in this case what's interesting is that nobody in the group had any new information: they all had exactly the same information. It was very carefully controlled. But still, the groups with the three friends and the stranger did significantly better--really did a lot better--at solving the problem. And a lot of that was that people were less lazy. They felt they had to very clearly explain what they were thinking and why, because they had to justify themselves in front of this stranger, who was making them feel uncomfortable. And the amazing thing about that research is that the people who were much more likely to solve the problem--the groups with the stranger--didn't realize they were more likely to solve it, didn't think they had solved it, and didn't enjoy themselves. So you have this situation where you add a stranger to the group: unambiguously that improves the problem-solving performance of the group; and unambiguously the group completely fails to appreciate this.
They didn't like having the stranger around at all; they didn't enjoy the experience; and they also didn't understand that that problem-solving capacity had been improved. They just completely dismissed that possibility.
Russ Roberts: Well, I liked the example you gave of the investment clubs made up of friends that didn't do that well, because somebody had a pet investment and the friends didn't feel comfortable rejecting it--they didn't want to hurt anyone's feelings. So I think there's definitely some truth to that; and certainly, people who are around others of different races or backgrounds might change their demeanor. My view of this is that diversity is overrated, at least in some of these situations. Particularly the example you gave of a Mexican or a French person coming into the room--I'm not sure they'd add anything, but I know they would add a problem of communication. And so, one of the advantages of non-diversity is, 'I know the language you speak'--I don't mean just English: I mean I can communicate with you better; we might have some shared experience. So to me there's a tension: groupthink is certainly awful, and diversity is a good way to avoid groupthink; but--somewhat akin to the cost of constantly cleaning up the messy desk--there's a certain cost in dealing with people who aren't like you, because you have trouble communicating with them. You don't understand the same social cues. You don't have the same shared language--you know, the physicist and the economist. A lot of physicists, I don't think, understand much about economics; and I certainly know I don't understand much about physics--so I'm not as dangerous. And the other thought I had--
Tim Harford: Before you express that thought, Russ, I agree with what you say. I think that diversity is overrated by social scientists. Well, I'm not sure I agree: I think that's possible. But what I would say is: Diversity is definitely underrated by practical people trying to get on and actually do things. So, you're absolutely right that having somebody else in the room who doesn't speak the language very well, or who--having to communicate with a physicist, you know, means all the shortcuts that as economists you can use, you have to explain to this physicist; and he or she is saying stuff to you that you don't really understand; and you just have to translate everything: that's all completely true. But I think that people are acutely aware of all these issues when they select themselves into groups. But I think they overrate them--they are too worried about them; and they underrate the advantages that they are getting. So, you may be right that diversity is overrated by social scientists. But I don't think, practically, people seek out diverse situations as much as they should. Maybe I'm wrong.
Russ Roberts: Well, the other part I was going to disagree with--again, some of the claims that the research makes in your book is, a lot of these are about absolutely extraordinary people. Paul Erdős is not normal: he's waaay out in the right-hand tail--in all kinds of dimensions. So, he wrote a different paper each week with a different mathematician. He would, you know, work on problems way outside his so-called area of expertise and find ways to contribute. And I'm just thinking, 'That's Paul Erdős.' If most people tried that level--forget that level--anything close to that level of jumping around, they'd just be a scatterbrain. They'd have trouble doing anything effectively. So, there's a tradeoff there, would be the only point I'd want to emphasize.
Tim Harford: I think that's fair. I mean, there's a story in the book about Erdős moving around a hotel room full of mathematicians, a bit like a cocktail waiter might move around serving drinks--but he's serving mathematical insights. He'd sit down with somebody and have a brief conversation and suggest a way forward, and then he'd move to the next person, the next person, the next person. And then at some point he just calls out in Hungarian, and you realize he's also been listening to his mother talking to a friend of hers in the next room in Hungarian, and he's contributing to that conversation, too. I mean--yes--that is unreal. The man was a complete genius. And yeah, we can't learn too much from that. But what I would say is that I tried to tell these amazing and dramatic stories because I just think they are fun to read. I think people enjoy reading them. I hope they enjoy reading them. But then I also try to back things up and say, 'In a more everyday setting, what's the implication of this?' So, after Miles Davis producing Kind of Blue and being very receptive to ambient noise and just doing one take and improvising a lot, I then talk about the experience of people on Help Desks, many of whom are forced to work through scripts because that's the safe option--that's the cheapest and safest way. You just work through a script. But actually when you have someone on a Help Desk who is actually able to improvise, you get amazingly impressed and delighted customers. And we can all do that, because all that involves is the ability to speak the language that the customer is talking, listen to what the customer is saying, and then a little bit of authority to make some decisions. Because after all, you are being hired as a human being, not as a cheap alternative to a voice-synthesizing robot. So, I try to say: yes, okay, Keith Jarrett did this amazing thing; Paul Erdős did this amazing thing; but then: Is there an implication for us?
Russ Roberts: Yeah, I don't find that Help Desk example convincing. I'll tell you why. Yes, there are Help Desk people who can improvise beautifully. The reason I think most businesses have a script is not because it's somewhat risky--I think it's incredibly risky to let people improvise. You've got to have a certain incredibly talented level of person. The beauty of the script is that a lot of people can implement it. The challenge of the improvisation is that very few can implement it well. It's true--it's easy to find people who speak English. To find people who can empathize, harder. To find people who can empathize effectively in language, harder. To find people who can empathize effectively and not cost you a lot of money, because of the things you promise in return, much harder. So I think--again, I think there's a little bit of romance about this idea, that you can find some examples of improvisation. My favorite example is the hotel clerk, hotel bellperson who is out on the curb and realizes that the people have driven off without their suitcase because he forgot to load it into their trunk; and he jumps in a cab and follows them to the airport. And he gets rewarded by his bosses for incredible customer service. But if that happened every day, they'd fire him. Because there's too much cost there. So, I don't think you'd want to argue that we should move away from scripts. I don't know--do you?
Tim Harford: I like that example, by the way. It's not in my book but it is a lovely example. Yeah, the guy is brilliant the first time, and he's an idiot the second time.
Russ Roberts: Yeah: the tenth time he drives you crazy. The tenth time you tell him to use his old car, at least, and not have to compensate him for the cab ride.
Tim Harford: Or maybe just put the suitcase in the trunk. No, you raise a good point. I think it's worth asking the question: How much of this is being driven by what companies can measure? It may be--I've never run a multinational company with a Help Desk, so maybe I just don't really get it. It may be that they've done all the analysis and what they've understood is, yeah, they may alienate a few customers; it might be kind of annoying to feel like you are talking to a robot even though both of you know you are actually talking to a human being. But when you do all the math, you get rid of all the risky outcomes and it's super-cheap and these people don't cost very much. So maybe that's it. But I do think there might be a tendency that--that's just how it looks on the spreadsheets because you are not really able to measure how pleased certain customers really are, or how annoyed certain customers really are. And I suspect there may be some of that going on. I mean, a slightly different point but related to the idea that large organizations can sometimes fool themselves about what's going on in the front line is the recent Wells Fargo case. This is a classic--Hayek, I'm sure, would have had something to say about this, this idea that there are only certain people who are there, on the ground, and who can see what's actually going on. They understand the specific circumstances of time and place, in Hayek's phrase. Wells Fargo, this case where, you know, some boss decided--Wells Fargo makes money when they open a lot of bank accounts; starts pressuring employees to start opening bank accounts. Employees go, 'Okay, fine; we're going to start opening bank accounts. We can do that'--
Russ Roberts: We can do that.
Tim Harford: 'We can do that; no problem. We just don't tell the customer that's what we've done.' So, yeah. But I mean, all this raises issues of, sometimes you want tighter control over your employees: that's the wrong kind of improvisation. So, it's not straightforward. But hopefully these examples make people think about what they value and what they actually don't, and what they're willing to pay for and what they're not.
Russ Roberts: The application I really liked of this idea of improvisation versus scripting--the 'I have a dream' example of Martin Luther King is phenomenal. I think the real gold standard on this is Churchill, who, after losing his place in a speech early in his career and ending up standing in silence, speaking in Parliament, and humiliating himself; making people worried he was going to die young and go crazy like his father. He, from then on, scripted every single speech, word for word, and memorized it; and then delivered it as if it were spontaneous. And I don't know if he ever went off script. But that's how I try to speak. I try to write my speeches word for word and then not use them. Which is kind of the example you gave with your notecard.
Russ Roberts: But the more interesting case for me is the example you talk about as a personal interaction. So, we have certain rules of behavior in our personal interactions. So, I say, 'How are you?' and I'm not interested actually; I'm just telling you that I'm alive and I'm talking to you. And you say, 'Fine. How are you?' Which means, 'Okay, I understand that you are talking to me and I'm ready for your next bit.' And there is a temptation, I think, in human interaction to mimic robots, to mimic the script of the Help Desk. And to fail to ask the deeper, more powerful, inspiring questions of each other, because we act as if we had a script. And I found that part very interesting. I'm not sure how to actualize the better strategy, besides prompting people with crazy questions. But the standard cocktail party sequence of, 'How are you? It was a nice day today. I was surprised it rained. What do you do for a living? How many kids do you have? Are you married?'--all of that. It's a certain dance we have, and it's pretty rigid. And maybe there's something to be said for going outside that dance. And yet when we do, people are often so jarred by it that they step back. What are your thoughts on that?
Tim Harford: It's a difficult thing to do, isn't it? So, I write about the ideas of Brian Christian, who is a very interesting science writer, who wanted to compete in--there's a Turing Test competition called the Loebner Prize, where computers compete to try to convince human judges that they are in fact human. But in order to implement the Turing test, if you recall the original Turing test: You've got a computer and a human and you have to tell which is which. In a conversational environment. So, to compete in this Turing test, you need humans as well. And they give a prize every year to the Most Human Computer. But they also give a prize every year to the Most Human Human. And so Brian Christian decided that he wanted to win the award of Most Human Human. And that got him started on investigating how artificial conversation works, the history of artificial conversation, all the ways that computers have managed to do it. And to reflect on what that meant for human conversation. And what he discovered--as you pointed out--is that computers are pretty good at simulating certain kinds of human conversation. Certain kinds of human conversation are incredibly predictable and robotic. So there's the ritualized stuff: 'How are you?' 'I'm fine, thank you.' There's all that. But there's also stuff like, 'Are you an idiot?' The insults. Insults are really easy. If a computer just produces a string of insults, it seems very convincing as a human being. Because insults don't have any memory. They don't refer back to anything else. And sweet nothings--the phrase 'sweet nothings' is revealing. If you start saying, 'I love you. I can't wait to see you. You make me feel so good.' All that sort of stuff, that also can be scripted very, very easily, and can fool people, because a lot of wishful thinking goes on. It's only when you start getting to conversational gambits like, 'When was the last time you really felt afraid of something?' 
or, 'What's the most difficult conversation you've ever had with your children?' or, 'What's your favorite hobby?' or, 'Are there any hobbies you have that you gave up and you wish you still did?' Or that sort of thing--something that would make a computer immediately crunch and crash and burn--that's an indication it's probably not a bad human conversational gambit. So, Brian Christian used these much more tricky, almost random questions. And he would also--just the pattern of conversation was much more--he would interrupt himself. He would interrupt other people. Because real human conversation is like that. And the more unpredictable his conversation became, the more like a human he seemed. The messier it got, the less like a robot he was.
Russ Roberts: Of course, some people love messy conversations. And some of them, as you say, they scare people, alarm them. The best conversationalists I know, my favorite people to talk to, are people I can establish enough intimacy in a short period of time to ask those kind of deeper, probing questions that we normally don't want to talk to relative strangers about. Or even friends. So it just makes you think it's a very--you and I are having a conversation right now, and conversation I think is understudied as a part of the human experience. At least by economists. It doesn't get written about a lot. And I think it's a huge part of our life, and I think it's useful to think about it a little more systematically, or at least more creatively.
Tim Harford: I think so. And there's a point that has been made by the writer Sherry Turkle that modern conversation seems to be now more scripted and more controlled, because it's intermediated through computers--through text messages and through SnapChat and WhatsApp and so on. And her observation--I don't know how true this is, but it seems plausible. Her observation is that a lot of kids who grew up with text-message-based conversation--they actually have a very tidy way of communicating.
Russ Roberts: Yep.
Tim Harford: So, grammar sticklers will say, 'Oh, well, you know, they are not using proper English in these text messages and they are not spelling right.' Blah, blah, blah. Now, that's completely the wrong criticism. The criticism is that these interactions are overprepared and too safe. They don't really reveal what the person is truly feeling. And that the young people she was talking to were afraid of face-to-face conversation. All of them actually said, 'You know what's wrong with real time, face-to-face conversation? What's wrong with face-to-face conversation is it happens in real time, and you can't control what you are going to say.' But of course that's also what is right about face-to-face conversation. It happens in real time, and you can't control what you are going to say. That's why it's so wonderful. So it's a very interesting point to think about.
Russ Roberts: Yeah. And I think--I've pointed it out before--but I think there's an increasing demand for real human interaction in our digital world. Because it's rare. I also think part of the reason that we've become so formulaic is we're busy. Our time is valuable. Our time is scarce. It feels more valuable than it was a hundred years ago. So, a leisurely conversation where I ask you, 'How are you doing?' and you say, 'Well, actually not so well.' And then you tell me about the last 5 things that went wrong in your life, in detail--that's a harder conversation for us to have today than a hundred years ago. Not just because of the emotional part, but it's hard for me to devote--it's hard for people to devote that much time to an interaction. And we want a short, quick, 'I'm on my way. Got it.'
Tim Harford: Yeah. 'I'm sorry to hear about your troubles, Russ, but you know, I've got this email communicator in my pocket and I could be working through my email right now; it could be [?] talking to me'--
Russ Roberts: Exactly.
Tim Harford: 'Talking to me about life [?].' We need to get better, don't we?
Russ Roberts: 'I have some YouTube videos to watch. Leave me alone.'
Tim Harford: Yeah.
Russ Roberts: Let's shift gears dramatically here. You write the following:
Apparently it's easy to make the mistake of setting the wrong target. Why is it so easy to make this mistake, though? Why not simply set the right target?
And, I want you to talk about the Basel Accords and what banking has to do with Volkswagens, which was a very nice sequence there.
Tim Harford: Yeah. So, there's a whole chapter on targets and the way that targets tend to misfire. And there's actually a list: British economists produced a taxonomy of all the different ways in which targets can backfire. So: They can be too narrow, too short-term. They can let one part of an organization prosper at the expense of another part of the organization: so they encourage backstabbing. They can measure the wrong thing. There's so many different examples. And I talk about them in the book. But the Basel Capital Accords in particular--there's a very interesting history there. So, these were agreements between regulators as to how they were going to regulate banks. Fundamentally they were about how much capital banks should have. Which is important because if a bank takes a particular amount of risk and the bank has borrowed the money that it's using, then the bank can very easily go bankrupt. Because the creditors need the money back at a particular time, and if the bank has made some losses, it can't pay them back, and it's bankrupt. And that's a problem. If on the other hand the bank has raised a lot of money from shareholders instead, it doesn't owe shareholders in the same way. It owes them a share of its profits, as and when, and nothing in particular at any particular time. So you can raise the same amount of money from shareholders or by borrowing. And if you've raised it from shareholders, the bank is safer. All other things equal. Obviously, all other things equal is an important point to note. But--so, how much money banks raise from shareholders is important. And this is--the shorthand, this is capital: how much capital does the bank have? So, the Basel Capital Accords were a series of agreements as to how much capital regulators should insist that banks had. And they got more and more complicated. 
And the reason they got more and more complicated is because regulators started thinking, 'Well, you know, two banks could have the same amount of capital but one of them could be doing crazy casino stuff and another one could be engaged in very safe activities. So we need to sort of reflect that in our rules for capital.' But, of course, the ways in which a bank could take risks--there are a lot of different ways in which a bank could take risks. So the rules got more and more and more complex over time. Now, I don't know if you noticed, Russ, but the rules were not entirely a success.
Russ Roberts: Yeah. Like, I think that's a pretty well-held view. I think a lot of people think that's the case. They did not do so well.
Tim Harford: They did not do so well. And the really interesting story I tell in the book is about Andrew Haldane, who is now the Chief Economist of the Bank of England. And who was a senior central banker at the time. He made the following observation. He said, 'I've gone back, and I've looked at how correlated banks' compliance with the Basel capital requirements was with the outcome: whether they went bankrupt or needed a bailout.' So you would think that banks that are massively overcomplying with the rules, that have very high capital ratios--they should have been at very low risk of going bankrupt. And vice versa. And there's basically no correlation at all. And then Andy Haldane said, 'Why don't we take a look at a much simpler rule, which is just: Did the bank borrow a lot of money? Forget all the risk weighting, forget all the complexity, forget all the bells and whistles. Just ask a very simple question: Did the bank borrow a lot of money?' And it turns out that's a very good predictor of whether the bank ran into trouble. Now, that's not to say, 'And therefore that's the rule we should have in the future.' Because there's lots of other stuff to consider. But it just raises the point that a very complex series of rules doesn't necessarily make anything any safer. So, that's the Basel story. But then I move on to VW (Volkswagen). Remember the VW story. It's almost exactly a year old. VW was discovered cheating emissions tests. And the way that they cheated these tests was to build a car--all cars have computers in now, and the tests are very predictable. If there's a particular pattern of the wheels on the treadmill, the steering wheel turns at a certain time; there's a pre-set series of maneuvers; the acceleration/deceleration--it's very easy for an onboard computer to spot. So they just programmed their onboard computer to recognize, 'I'm in an emissions test,' and it changed the way the engine was running. 
It makes the engine much less efficient, but also much lower-polluting. And then when it's back on the road it will switch to a more efficient mode of running, but also much higher pollution. And it's not totally intuitive why the more efficient engine emits more pollution, but I mean we can go into that if you like. But it doesn't really matter for the point of this story. So, my point is: There's a weird parallel with the latest attempts to keep the banks safe. So one thing the Federal Reserve is doing is applying stress tests to banks, and saying, 'We want to see how you fare in this scenario.' Or 'this scenario.' Let's say, something crazy, like, 'Some major economy leaves the European Union and there's a collapse in the value of the euro, say. Or, Russia defaults on its debt. Or whatever. We'll give you some scenarios: Price of houses in America falls by an average of 30%; interest rates rise to 4%. All these different things. How are you looking now? Is your bank still in one piece under these stressful scenarios?' Well. Curious thing. What regulators spotted a couple of years ago is that banks were buying very focused packages of insurance that would pay off in exactly the scenarios of the stress tests. They had no commercial reason whatsoever, and were in fact probably quite expensive pieces of insurance to purchase. But it meant that the bank could say, with a really straight face, 'Well, you know what? In this stressful scenario we'd be totally fine.' And what's going on under the table is, 'Yeah, because our bet that this scenario would happen would pay off, and we'd suddenly get an extra half a billion dollars.' So there's a huge parallel here. You have a very, very predictable test that in principle is very tough but in practice doesn't mean anything. Because you can build systems designed to cheat the test. The only difference seems to be that what the banks are doing apparently is legal. And what VW was doing apparently was illegal. 
And I'm not quite sure what the legal distinction is.
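The defeat device Harford describes amounts to a simple conditional in the engine-control software. Here is a minimal sketch in Python (hypothetical names, thresholds, and logic chosen for illustration; nothing here is VW's actual code) of how a controller could distinguish a dynamometer test from real driving:

```python
# A sketch (hypothetical logic, not VW's actual software) of a "defeat
# device": the controller looks for input patterns that are only plausible
# on a test rig and switches the engine into a cleaner-running mode.

def looks_like_emissions_test(steering_angle_deg: float,
                              drive_wheels_moving: bool,
                              non_drive_wheels_moving: bool) -> bool:
    """On a dynamometer the drive wheels spin while the other wheels stay
    still and the steering wheel barely moves -- a combination that almost
    never occurs on a real road."""
    return (drive_wheels_moving
            and not non_drive_wheels_moving
            and abs(steering_angle_deg) < 1.0)

def choose_engine_mode(steering_angle_deg: float,
                       drive_wheels_moving: bool,
                       non_drive_wheels_moving: bool) -> str:
    if looks_like_emissions_test(steering_angle_deg,
                                 drive_wheels_moving,
                                 non_drive_wheels_moving):
        return "low_emissions"    # less efficient, but passes the test
    return "high_efficiency"      # better mileage, more pollution on the road

# On the treadmill: drive wheels turning, other wheels still, wheel straight.
print(choose_engine_mode(0.0, True, False))   # low_emissions
# On the road: all wheels turning, driver steering.
print(choose_engine_mode(12.5, True, True))   # high_efficiency
```

The point of the sketch is how little it takes: any test predictable enough for an onboard computer to recognize is predictable enough to game.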
Russ Roberts: Well, the term used in economics is Regulatory Arbitrage. The idea that you can game these kind of systems. And of course employees can game them in private settings as well. The problem is that people who do that can get fired. And it's unfortunately not the case that we tend to fire the regulators who design those bad systems that get gamed.
Russ Roberts: I want to add one more piece to this, which is the Harry Markowitz story. Markowitz, the Nobel Prize winner. Tell the story of how he allocated his portfolio.
Tim Harford: So, Markowitz was a pioneer in capital allocation models, and this idea that you could put together an efficient portfolio that perfectly trades off risk and return. So, for any particular targeted return you could minimize your risk through appropriate diversification. Or, for any particular level of risk, you could maximize your expected return. And it's all about just getting the right mix of stocks and bonds and different assets with different patterns of return and covariances. So this is Markowitz's model. So, he builds this model--I think it's his Ph.D. thesis; he's later going to win the Nobel Prize. And then a few months later he's given a job at RAND. And he has a pension at RAND; and he has to allocate his pension. You know, he's got this savings account that's going to go toward buying his pension. So, he was asked, 'Well, did you use your efficient portfolio frontier model thing--all the bells and whistles that won you the Nobel Prize?' He said, 'Oh, no. I put half into stocks and half into bonds.' So, he just--the simplest possible diversification strategy. And that story gets told a lot by behavioral economists, who I think rightly point out, when you've got this very complex model--and even the guy who produced the model doesn't actually do it, he just goes 50-50. But the twist in the tale is that a few years ago some economists looked at how well Harry Markowitz's allocation actually works. Or, more generally the allocation strategy where you've got--say, you've got 5 buckets of possible assets and you just go, 'Okay, one fifth in each of the five.' Or you've got 10 possible buckets of assets: Okay, one tenth in each. Just split it all up equally and don't worry. Turns out that strategy works really, really well. And the Nobel Prize winning efficient solution doesn't work very well because we don't actually have enough data to supply all the information we need about the covariances of all these assets.
Russ Roberts: Well, it's a little like--
Tim Harford: So it's a little like--if you've got enough data it works perfectly; and the question is: How much is enough data? About 500 years. If you can observe the market for 500 years, then implement the efficient strategy. Otherwise, this quite crude strategy works well.
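The contrast Harford draws--the 1/N rule versus an optimizer starved of data--can be sketched in a few lines. This toy Python example is illustrative only: the `naive_mv_weights` scorer is a crude stand-in for real mean-variance optimization, not Markowitz's actual method, but it shows how estimated weights get whipsawed by noise that the equal-split rule simply ignores:

```python
# Toy illustration of why the naive 1/N rule is hard to beat: "optimized"
# weights depend on estimated means and covariances, and with short samples
# those estimates are mostly noise.
import random
import statistics

def equal_weights(n_assets: int) -> list[float]:
    """The 1/N rule: split the portfolio evenly across all assets."""
    return [1.0 / n_assets] * n_assets

def naive_mv_weights(returns_by_asset: list[list[float]]) -> list[float]:
    """A deliberately crude 'optimizer' (a stand-in, not Markowitz's model):
    weight each asset by its estimated mean/variance ratio. With only a
    couple of years of monthly data, the estimates swing wildly."""
    scores = []
    for r in returns_by_asset:
        mu = statistics.mean(r)
        var = statistics.variance(r)
        scores.append(max(mu / var, 0.0))  # no short positions
    total = sum(scores)
    if total == 0:
        return equal_weights(len(returns_by_asset))
    return [s / total for s in scores]

random.seed(1)
# Five assets drawn from the SAME true distribution -- so the genuinely
# optimal allocation is exactly 1/N. Only 24 monthly observations each.
history = [[random.gauss(0.05, 0.2) for _ in range(24)] for _ in range(5)]

print(equal_weights(5))           # [0.2, 0.2, 0.2, 0.2, 0.2]
print(naive_mv_weights(history))  # lopsided weights driven purely by noise
```

With identical assets the right answer is known to be 1/N, yet the estimator hands out lopsided weights; that is the estimation-error problem Harford's "500 years of data" remark points at.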
Russ Roberts: Yeah. It's a little like indexing as an investment strategy: you know you are giving up something, potentially; but in reality you are not giving up that much. In fact, you might be actually saving. You might actually do better. But you know you are going to save on the transaction costs of paying the manager, in the case of the managed fund. And again, it reminds me of the desk. People often forget the transactions costs of getting to that so-called better solution. So, buying and holding--crude allocation strategies, crude rules of thumb where you leave it alone and don't try to constantly tinker with it--certainly one of the advantages is you spend less time. And it's often--for end[?] money, sometimes; and sometimes you get a better return, too. But the time part is often forgotten. So, on the regulation part, I certainly get the point that surprises--it comes back to our improvisation conversation--surprises are better than predictability in that setting. Might be hard to design a regulatory strategy that's improvisational, given the constraints of the legal system and the way that would actually work. So that's a good idea. And it's also the case that regulators, I think, and those who are regulated like complicated, complex strategies: One, opaqueness serves them well, because people don't have time to look more carefully, and they do because they have a bigger stake in it. But the third thing is, I don't see how it fits in with the rest of the book. Which is: The Markowitz rule of thumb is tidy. It's not messy. The regulation of Basel was messy and it should have been more tidy. How do you think of these kind of results in the framework of the book?
Tim Harford: Well, that--it's a fair question. I can give you a rhetorical answer. So, the rhetorical answer is, effectively, the Basel rules were very highly structured, and got ever more complex in attempts to track the fine details of all of these different risks. So you could describe them as a mess inasmuch as there was a lot of them. That's one way of looking at it. Or you could say that this was an overstructured attempt to deal with a problem, and therefore it was tidy-minded: it was too tidy; everything had its own risk-weight. And actually that wasn't a suitable way of grappling with what's basically a vague and ambiguous and unknowable situation. If you've got a vague and ambiguous and unknowable situation, a very crude approach may be better. So, I mean, you could make the analogy with the messy desk: A big pile of papers on the desk is, in some ways, a very simple rule--like, 'Every time I touch a piece of paper I'll put it back on the top of the pile'--that's the only rule, one rule. Is that messy or is that tidy? I don't know whether it's an interesting question or not, but that's the analogy that I would make.
Russ Roberts: We're almost out of time. A lot of your examples--not your stories, which are absolutely fantastic, but the research that you tie to the stories--come from social psychology and sociology. And there's been a bit of turmoil in psychology over the issue of replicability. And in fact, reading this story you tell of the research of Diederik Stapel, in the book--and as I'm reading it, I'm thinking, 'This is nonsense. There's no way this is true.' And of course it turned out it was a total fraud. It wasn't just that it wasn't replicable: It turned out he had made all the numbers up.
Tim Harford: Yeah. You saw the punchline.
Russ Roberts: Yeah. I did see the punchline. I'm a good reader.
Tim Harford: The punchline is the guy was a fraud. And you were like, 'Hang on. I see this one coming.'
Russ Roberts: Yeah. But many of these kind of clever results--we won't go into this particular study, it's a great example, these kind of studies that get waved around and put on the front page of the New York Times--at first, at least. But the issue is not fraud, in most of the cases. It's just that these results were cherry-picked. They were done over and over again until they found statistics [?] up against the so-called p-hacking problem that we talked about recently with Susan Athey. Are you worried about this in those fields? Do you think this is a real problem?
Tim Harford: Yeah. I am worried. I think it is a real problem. In fact, I just spoke to--I was revisiting this famous result in social psychology that people called Dennis are disproportionately likely to become dentists. And this is not true.
Russ Roberts: I'm not going to laugh. I'm not going to cry. Just keep talking.
Tim Harford: So, it's not true. And we've known it's not true for the last 5 years. But people still think it is true. So, the column I'm working on is partly about: How come, even when these results are disproved, they still get recycled? So I'm concerned about that. And I'm worried about the situation in social psychology. Although, hopefully this is the beginning of a big improvement, because I think they are suddenly getting very serious about better statistical techniques and about replication and so on. And that's great. So, congratulations to them. In terms of the use of these studies in the book: Yes, I was concerned. When you get a really cute piece of psychology based on a fairly small sample--a lot of them are quite small samples--you always worry: Does this really stack up? So, one of the rules of thumb that I use is: Is this kind of an amazingly counterintuitive result that defies all logic, but because p=0.04 I'm reporting it? It's passed the test of statistical significance, so even though it defies credibility I'm going to tell you about it anyway. I've tried to avoid those circumstances. What I've tried to do is make an argument: Start with a story--here's a story about how David Bowie produced the Berlin albums. Then, let's talk about how we perceive that story through the eyes of the people who were there. So, I interviewed Brian Eno. What did Brian Eno think he was doing when he worked with David Bowie? Then we go: Okay, I'm going to talk to some social psychologists and look at the research from social psychology; but I'll also look at research from computer science. So, how does IBM design its computer chips? And I'm also going to take several different pieces of work from cognitive psychology. And maybe even some neuroscience as well. Now, do I think that all of this research is going to stack up? No. I'm sure it will not. 
But when it's all from different angles pointing in the same direction, then I feel reasonably confident that I'm onto something. The piece of work you cited that you particularly liked, about investment clubs--and investment clubs made up of friends doing badly because they were too concerned with keeping each other happy--and investment clubs made up of people who didn't really like each other or strangers doing very well--I mean, the first question you've got to ask when you see that is: Doesn't the efficient market hypothesis suggest that they should all basically do the same? And so there's--I wonder whether that result really stacks up. But what I do know is even if that result doesn't stack up, I've got another one, and another one, and another one, and they are done by people in different fields, using very different techniques, and they are pointing in the same sort of direction. So, that's my defense. But I think you are right to raise the question. The situation is concerning in social psychology.
Russ Roberts: Yeah. I'm not sure the efficient markets hypothesis would suggest they would do the same. But at least I understand the mechanism: if somebody proposes a really bad idea, I might have trouble going against them because they were a buddy. So, one of the problems I have with some of the results is that, you know, if my name's Dennis, wouldn't it be equally likely that I wouldn't want to be a dentist? Dennis the Dentist--it doesn't pass the sniff test for starters with me. So that's one way I try to fight against my compulsion, as we all have as humans, to confirm our biases. But certainly there are issues coming down the line, I think, in these experimental results--well, they are already changing in the field. It's going to be very interesting to see what happens.
Tim Harford: By the way, have you ever interviewed Uri Simonsohn?
Russ Roberts: No.
Tim Harford: He would be a great guest, because he's a psychologist who has been--his work on Behavioral Economics is very interesting in and of itself. He does lots of stuff on marketing and pricing. Fascinating. But the last few years, he's just published paper after paper, really quite aggressively, taking down bad social psychology. He's written some great stuff. And he'd probably be a great guest. So you should look him up.
Russ Roberts: I appreciate the suggestion. And we've had Brian Nosek on a couple of times, who has been at the forefront of trying to systematically check the reliability and replicability of some of these findings. Before we close, tell listeners what else you are doing, besides writing books, and where else they can find your stuff.
Tim Harford: Sure. Well, besides Messy, which is going to be in the shops any minute now, and people should obviously buy hundreds of copies, I do my Financial Times column. You can find that on the FT website. My own website is timharford.com--that's Harford with no 't'. And possibly of particular interest to EconTalk listeners, a couple of radio shows I do that are available as podcasts. One is called 'More or Less'--the BBC, 'More or Less.' And that's all about the way that statistics are used and misused all around us. So, people can subscribe to that. And the new podcast that I think people might be interested in: 'The 50 Things that Shaped the Modern Economy' is a brand new series. It's coming out at the end of October. But look out for it. I'll drop you a note, Russ, when it comes out. It explores everything from barbed wire to the barcode to ready-meals, to concrete--all these different innovations around us and the way they made us richer but also sometimes changed the game and changed who got what and why. So I've been having a lot of fun working on that. And that will be out soon; and both podcasts are BBC shows and they are both free.