
Hanson on the Technological Singularity

EconTalk Episode with Robin Hanson
Hosted by Russ Roberts

Robin Hanson of GMU talks with EconTalk host Russ Roberts about the idea of a technological singularity--a sudden, large increase in the rate of growth due to technological change. Hanson argues that it is plausible that a change in technology could lead to world output doubling every two weeks rather than every 15 years, as it does currently. Hanson suggests a likely route to such a change is to port the human brain into a computer-based emulation. Such a breakthrough in artificial intelligence would lead to an extraordinary increase in productivity creating enormous wealth and radically changing the returns to capital and labor. The conversation looks at the feasibility of the process and the intuition behind the conclusions. Hanson argues for the virtues of such a world.



Highlights

0:36 Intro. [Recording date: December 23, 2010.] Annual acknowledgments and thanks. Topic fitting for the end of the year, an end-of-days topic, a new world we might be entering: the idea of a technological singularity--the idea that technology could advance very rapidly, leading to a discontinuity in growth rates. So, rather than small growth of 2, 3, or 5% a year, we get dramatically higher growth rates; and along the way we would develop a radically different relationship with machines that would have a very high level of what we might call intelligence, and maybe some other attributes that we do not see today. Talk about what we've learned about past growth as economists and why that leads you to suggest the possibility and maybe even the likelihood that we will enter a new world. If you look at economic growth over the past century, what you see overall is a relatively steady trend. The world economy has grown at roughly 4% a year, doubling roughly every 15 years. When people assemble data sets of this, they try to fit this roughly steady trend, and they come up with models that fit it. That all makes sense for the last century; but if you look farther back in history--if you go back a thousand, ten thousand, a million years--what you see isn't steady growth any more. What you see is steady growth punctuated by a few very sharp, very dramatic transitions. So in some broad sense, in all of human history, two big things have happened--or three big things, if you will. First, humans showed up, and somehow differentiated themselves from the rest. Then roughly 10,000 years ago the farming revolution happened--we're not exactly sure what the key cause was, but there was a sudden change. And then roughly 200 years ago there was the Industrial Revolution. 
The thing that distinguishes these changes from the thousands of other big, important things that have happened in history is that at these moments the growth rate of the human economy dramatically increased in a relatively short time. So, from about 2 million B.C.--2 million years ago--to about 10,000 years ago, the human race slowly grew from what they say was about 10,000 people to roughly 5 million or so people; and that's a growth rate of doubling every quarter-million years. When you say the growth rate, you are talking about? The number of people. Just in population. Technically it's more the niche we fit in, because there might have been some crashes where a lot of us got killed off, but then we quickly came back and filled the niche. So, in terms of our species, or the kinds of species like us, in the niche we were filling, we were slowly growing and expanding across the earth; being able to do more kinds of things--fish, live in the woods, and so on. Even on the ice. We lived in more kinds of environments, we had more kinds of tools, we had more capacities; and that let us grow in number. But at that point, we were growing in number and, as you say, capabilities, allowing us to live in different ecosystems. But within any ecosystem, life was pretty static, correct? Well, doubling every quarter-million years means that on any human time scale, life was very stable. There might have been fluctuations in the environment, some tribe wins over another. But in terms of long-term history, ways of life were pretty stable over a very long time. People did eventually acquire all these capabilities, so they were able to live at higher densities, because they could live off of more kinds of food in the area. Eventually they were able to live at high enough densities in some places that they could stop moving--which is kind of the definition of farming. 
Instead of wandering around to grab food, you could stop moving and have more stuff, because you didn't have to carry it with you all the time; and having more stuff meant you could have more technologies to accumulate and use. It meant you were closer to people, so you could find them more easily; you could have trade routes, you could have stuff to trade. So the farming world was very different; and suddenly, within a very short period of time--say, within a few tens of thousands of years--instead of doubling every quarter of a million years, we started doubling every thousand. Big change. A factor of 200 in growth rates, at least. And within a period much smaller than a single previous doubling time. We are not sure why the growth rate was so much bigger. I suspect it was because of trade routes and the ability to share seeds and spread them. Knowledge. So for the next 10,000 years--from roughly 10,000 years ago until a few hundred years ago--we doubled every thousand years. Now, it's a big deal, doubling every thousand years. But on a human time scale, the farming world looked pretty stable.
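Not from the episode, but the doubling-time arithmetic Hanson describes can be sketched in a few lines of Python. The figures (a quarter-million-year doubling for foragers, a thousand-year doubling for farmers) are the rough ones from the conversation, not precise estimates.

```python
import math

def annual_growth_rate(doubling_years):
    """Continuous annual growth rate implied by a given doubling time."""
    return math.log(2) / doubling_years

# Rough figures from the conversation, not precise estimates.
forager_doubling = 250_000  # years per doubling in the forager era
farmer_doubling = 1_000     # years per doubling in the farming era

speedup = forager_doubling / farmer_doubling
print(f"Farming raised the growth rate by a factor of about {speedup:.0f}")
```

The same ratio of roughly 250 shows up whether you compare doubling times directly or the (tiny) annual growth rates they imply, which is why the conversation can say "a factor of 200, at least."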
7:00 Yes; I think William Baumol was the first person I saw say it: a Roman farmer would be fairly comfortable with life in 1600. So, over that, for us, very long period of time, there were some improvements. We learned some technology we didn't have before. The plow got a little bit better, but it was a pretty similar world. More people, though, right? Right. In all of these growth modes, it was technology that enabled the growth. We could have grown in population far larger had we had the capacity to support the larger populations. It's really clear that the reason population grew was because we figured out how to support a larger population, and that's primarily technology of various sorts. Technology was always the key to growth, but how fast we could grow that knowledge and technology suddenly had these huge changes. So, we could grow it very slowly when we were foragers, living at low density and not able to carry much stuff with us. As farmers, we were able to grow much quicker, but still slowly relative to today. Then with the Industrial Revolution, within a very short time--a few hundred years, which is small compared to the thousand-year doubling time--we went from doubling every thousand years to, over the last century, doubling every 15 years. Again, I want to focus on: when you say "doubling," you are talking not about economic growth rates--the standard of living. You are talking about population. But it's also standard of living. Right; we are taking world product as the (mathematical) product of the number of people times the per-capita product of each person--the capacity of the economy to make stuff. So, over most of human history, when world product grew, we spent that on more population, and per-capita wealth didn't change that much. In the last few hundred years, we've grown so fast, doubling every 15 years, that population hasn't kept up, so per-capita wealth has increased. And that's revolutionary. 
First time in human history. Very important. Because we sustained this growth rate in the economies over a long period of time. There have been periods of time in the past where there have been sudden crashes and sudden growth, and temporarily people were very well-off and grew fast, but ran out.
9:13 So, that's nice for those of us alive today. We've got our heart valves and iPads and air travel and all kinds of pleasant and sometimes not-so-pleasant things. Mostly pleasant. So, now we are going to continue to grow and double every 15 years. That's good! So the usual debate about the future, as framed by academic economists, is to choose between three scenarios. One is: things continue to grow as they have--the optimistic scenario. Then there's a pessimistic scenario: things crash and we all die. Because of, say, environmental collapse? Or war, or running out of materials. Institutional failure. Who knows? But that's the negative scenario, which you take seriously. Then there's the happy-medium scenario: we have to find a way to stay in the middle, not to fall into oblivion or rise naively into a sky that won't be there--we have to find a way to have a stable medium in the middle. That's the usual debate. And that optimistic one is usually about a 2% per capita growth rate in the developed countries, maybe a little higher in the underdeveloped countries--they're going to catch up some. Right; and even that optimistic scenario, if you project it out two centuries or 150 years--150 years would be 10 doublings, a factor of a thousand. So if you say the optimistic scenario continues for another 150 years, the world would be a thousand times more productive. Hard for people to grasp. If you have trouble grasping it, what you want to do is go back 150 years, and think about life in 1850, which was very, very different from life today in virtually every dimension. So, this is a pleasant scenario, in many ways; although if you think it's unsustainable or infeasible, then we are deluding ourselves--in for a rude awakening when we find out it can't happen. But those are the three usual scenarios. But now the fourth way. 
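As a quick sanity check of the arithmetic in the optimistic scenario (my illustration, not anything computed in the episode): doubling every 15 years for 150 years means 10 doublings, and 2^10 is just over a thousand.

```python
doubling_time = 15   # years per doubling at the current world growth rate
horizon = 150        # years to project forward

doublings = horizon / doubling_time
factor = 2 ** doublings
print(f"{doublings:.0f} doublings over {horizon} years -> a factor of {factor:.0f}")
# prints: 10 doublings over 150 years -> a factor of 1024
```

So "a thousand times more productive" is the rounded-down version of 1024.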
I think, in addition to taking all three of those scenarios seriously, you should take a fourth scenario seriously: we could have another dramatic event of a magnitude similar to the three most dramatic events we know about in our history--industry, farming, and then the arrival of humans in the first place--which were largely unprecedented in terms of previous trends. Unexpected; came out of the blue. They happened on a very short time-scale compared to previous time-scales; and they changed the time-scale of what was happening. So if we just take those events, just do numerology--just say, let's pretend the next thing to happen would have a growth-rate increase similar to the previous growth-rate increases, and that maybe the number of doublings that happen during each mode is similar. If we just use those numbers to project the future, without asking how we know these numbers or what's behind them-- Like a chartist in the stock market. There you go. A little naive, but what else have you got? If we did that, we'd get a remarkably tight prediction for the new growth rate, because these jumps in growth rates have in fact been remarkably consistent. And what you get is roughly a one- to two-week doubling time. So, pause, let that sink in. A one- to two-week doubling time in the world economy. Instead of the current 15 years. Again, I think most people are probably shocked to think that it's 15 years. They look back 15 years and they say the world doesn't seem that different from 15 years ago. How can it be twice as rich? That must be happening somewhere else. But it is happening here. That's an important thing to realize. One of the reasons the world can change so fast is that it can change in ways you don't see in your life. If you had to notice all these changes, it couldn't change as fast. What would I have to do with all of it? It would just be in the way. 
We've figured out ways to grow such that you are not in the way. So, every two weeks, instead of every 15 years. And interest rates would have to be on a similar scale. So, if you've got a pile of money and you put it in the bank, it doubles every two weeks. Obviously a strong temptation to save. If you had ordinary human time scales. Which--we might live a lot longer; it might be a little different. Right. It means that even a small amount of money becomes, in some short time, a large amount--enough to live in some comfort, if not at the center of attention. But there it is; that's two weeks. The other question might be: When might that happen? How long these things have lasted kind of varies. You might think, from the past two transitions, that we're kind of overdue for one, if you just want to take the numbers. On the other hand, if you go farther back, you might say some time in the next century. We don't get a very sharp time prediction, other than roughly in the next century. But you get a pretty sharp prediction about the new growth rate.
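To make the two-week doubling concrete (again my own illustration; the starting balance is arbitrary): compounding a small sum at that rate for a single year already produces an absurd number.

```python
principal = 100.0                # an arbitrary starting balance, in dollars
weeks_per_year = 52
doublings = weeks_per_year / 2   # one doubling every two weeks

final = principal * 2 ** doublings
print(f"${principal:.0f} grows to about ${final:,.0f} in one year")
# prints: $100 grows to about $6,710,886,400 in one year
```

Twenty-six doublings turn $100 into billions, which is the sense in which "even a small amount of money becomes, in some short time, a large amount."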
13:54 One of the reasons it's hard to adjust to this new idea is that before these transitions occur, life looks really stable. So, if you are a hunter-gatherer 12,000 years ago, you have a pretty stable life that looks about like your great-grandparents' life, if you knew about it; and similarly, if you were a farmer 6000, 4000, or 2000 years ago, farming is farming; and if someone said there's going to be this thing called the Industrial Revolution, you'd say that's not plausible. So after the Industrial Revolution, life looks pretty stable in the sense that, barring the occasional financial collapse--which we are in the middle of right now--the economy grows 3% a year, which is pretty good. Three percent gets you doubling every 24 years or so. Hard to think of a radical change when you are in the middle of all that you know. Tell us why there might be a glimmer of what could bring this about, other than: Well, it's happened before, it will happen again? The first thing to mention is that human time scales are not the only time scales in the universe. We know about lots of time scales--the life of stars and galaxies, atoms vibrating. It so happened that up until recently, growth time scales were just slower than ordinary human life time scales, so you just didn't notice growth in a lifetime. Now it happens to be nearer our life time scales, and we notice growth on our order of magnitude--we sort of see it happening over a lifetime--but there's no physical or logical reason why it couldn't be a lot faster. It just depends on how these parameters work out. So now the question is: Yeah, that's nice numerology. But if this is going to happen, it has to happen with real stuff in the world around us. There has to be something that embodies or supports it. So, if foragers were looking around, could they have foreseen farming? 
Could they have envisioned the idea that if they stopped moving and stayed in one place they could have trade routes and all sorts of things? I don't know. Could farmers have envisioned industry--if they just made these machines a little more reliable and a little more standardized, maybe had a factory line? Maybe. Maybe not. Could somebody in 1960 imagine--in 1962 I saw a computer used for an early warning system, in case a Soviet missile came over the horizon and we wanted to know where it would land. That computer took up a space the size of a warehouse floor, and it probably had less computing power than my iPhone 4. Love when people say that; I don't know what that literally means, but it could be true. If somebody had said that in 50 years you would have in the palm of your hand what this room does, you'd say: Eeeeah. There was what was called "Moore's Law"; there were trends in computing costs. You could just project the line out. They'd had a decade of experience. There were ways to project that. Reasonable to say you just couldn't project the next big thing--it's probably just going to be something you haven't thought of. Fine, but that doesn't sound like much fun. Why don't we take each of the things we know about and ask: Could this be the thing? How plausible is it that this could be the big thing that does this? This is a reasonable exercise to go through, because we are setting such a high bar for technology here. There are lots of really big, important technologies that could come along and do a lot and be important, but not at this level. We could look at, say, surveillance. Important. We could watch more things; it'll change privacy, change marriage, change work relations; lots of things surveillance could do. Could it make the economy double every two weeks? Just no. Sorry. You think: space colonization. 
What if we got space elevators going and got stuff off to the asteroids and back again, big solar collectors? That would be cool; it would add to the economy. Can that stuff grow every two weeks? Sorry, but not enough. Big and slow. Distance challenges to overcome, too. What if cold fusion or some new energy technology came along? It would make things more efficient--bigger steam shovels, bigger rockets, transportation; I wouldn't have to charge my phone every two days. New energy technology could be a big deal; but even then, we spend less than 10%, maybe less than 5%, of the economy on energy. But it's in everything, Robin! Maybe there are cumulative indirect effects you can't quite see; but the direct effects are limited. You don't spend very much on energy, so if you make energy cheaper it can't do much for you--unless you figure out things it can do that you aren't doing right now because it's too expensive. You do some. Even nanotechnology--making little tiny machines which can make other machines and stuff--has more potential. But even then, what it does is make capital cheap. Which is great. But manufacturing is 15% of the economy; capital for manufacturing is 7% or something. It's still pretty small. But then we come to the scenario of artificial intelligence, robotics--machines that are capable like people. And then you pause and you think: Well, 70% of income goes to labor. That's a secret, by the way; just say that again. Where is all the money going? When you buy stuff, money goes somewhere. Who gets it? Well, the factors of production--the things that contributed to the stuff you made. Well, who are those? There are raw materials, there's real estate, there are the owners of the company, there are managers; but basically, most of that is labor. It's the people, the wages. When you buy something, most of the money goes to pay the people who are involved in making that stuff. About 70%--close to 67% in the United States--and it's pretty close to a constant. 
Occasionally you'll hear that it's crept downwards to 50-something percent, but that's usually by leaving out parts of compensation--benefits or something. Compensation--the share of the pile of stuff that goes to labor--is just about 70%, and has been for a long time. And the other 30% includes a lot of indirect labor. People thought up the machines. They own a patent, and now that's counted as going to the patent, but it was labor that went into making it. And the entrepreneur that runs the business gets what's left over. Counted as stock, but it's all people. Most income in the world today goes to people. If you could make that a lot cheaper, you could make everything different. That's big. The robotic, artificial intelligence, smart-machine scenario in principle has the potential to do that. If I have a box out there that's smart like a person, I could have it do a job instead of a person, if it's a lot cheaper than a person. You put the idea of having a box like that into a standard economic growth model, and not only can you get a big change in the economy--you can get a big change in economic growth rates. Because in fact our growth rate is limited by the fact that we can't grow people very fast. So we have labor and capital in production; we can make capital grow fast, but we get diminishing returns--all this capital, relatively few people. It takes 22 years to produce a college graduate. Long and slow. If you could make boxes that replace those people as fast as you want, you relax that constraint on growth in the economy.
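Hanson's point about growth models can be illustrated with a toy Cobb-Douglas simulation. This is my sketch, not a model from the episode, and the parameter values are arbitrary assumptions: with a fixed labor force, capital accumulation runs into diminishing returns, but if "labor" can be manufactured the way capital is, the growth rate stops falling.

```python
ALPHA = 0.3   # capital's share of output; labor gets the rest (~70%, as above)
SAVE = 0.2    # fraction of output reinvested each period (assumed)

def simulate(periods, labor_producible):
    """Cobb-Douglas growth: Y = K^a * L^(1-a), reinvesting SAVE*Y each period."""
    K, L = 1.0, 1.0
    path = []
    for _ in range(periods):
        Y = K ** ALPHA * L ** (1 - ALPHA)
        path.append(Y)
        K += SAVE * Y
        if labor_producible:   # emulation scenario: labor accumulates like capital
            L += SAVE * Y
    return path

fixed = simulate(50, labor_producible=False)
emul = simulate(50, labor_producible=True)
print(f"Final-period growth, fixed labor:      {fixed[-1] / fixed[-2] - 1:.1%}")
print(f"Final-period growth, producible labor: {emul[-1] / emul[-2] - 1:.1%}")
```

In the fixed-labor run the growth rate declines period after period; in the producible-labor run it settles at a constant rate. That qualitative difference, not the particular numbers, is what "relaxing the constraint on growth" means here.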
21:47 But Robin, then people wouldn't have any jobs. Make people feel better about that, if you can. Before that: manufacturing in the United States is a big political issue, and people talk about how we don't make anything any more, how America's being hollowed out, etc. But in fact, the dollar value of what we make is very high. We still manufacture a lot of stuff in the United States. We just do it with a lot fewer people. We've done it through technology. There are factories, I'm told, where no one works, which allows them to be cold and dark sometimes--people like warm and light. One of the ways we've gotten richer over the last 50 years is by stripping labor out of manufacturing and freeing it up to do other things. So, one question people would have in this scenario is: If machines could do everything, what would be left for us to do? There'd be nothing left. My second point is that, just as with your earlier remark--manufacturing being only a fraction of the economy--this can't just be smarter robots making cars faster. We're talking about machines that would do things like give you a haircut--the services. Revise your will, sue for you in court, write a script for a movie. Teach economics. You'd put your head in a box and you'd know economics! Econ 101 is that you have some assets, some resources, and you use them to get things you want. One of the assets you have is your ability to work. But you work in part because that's a way to get other stuff. If there were another way to get other stuff, you wouldn't work so much. You might work, but you'd call it volunteering, or leisure; you'd fill your time. Some of that time-filling might be productive, but you wouldn't be doing it because you need the money. This requires that you have other assets, of course. If your only asset is your ability to work, and that asset becomes less valuable, then your portfolio becomes less valuable. But clearly, in a world like this, the entire world overall is richer, has more capacity to do things. 
There is more total wealth, and as long as you make sure you have a cut of it, then you can also have more wealth and get the things you want. A question--let's put it aside; we might not even get to it--there is the question of the allocation of product, which might be hard to envisage right now. Let's put that to the side. I also have to mention that there have been times in human history when people did imagine an end to scarcity. They were wrong. They imagined higher growth rates and said: When we get that wealthy, we won't want stuff any more. Of course, that hasn't turned out to be true. We invented new stuff that we cared about and enjoyed. So, those predictions have all been wrong. So, even though under your prediction output will be doubling every two weeks, it's not obvious that we'd just sit around and be leisurely and enjoy the fruits. In fact, if you look at the "leisure class" in our society, they are pretty busy folk. They work hard. They spend a lot of time volunteering, starting new ventures, hiking, etc. They are not staring at TVs--though some of them are. In our world there are people who are independently wealthy; they don't have to work to support their lifestyle; but most of them spend a lot of time doing things that look like work. There are a lot of wealthy people who don't just clip coupons and live off their wealth. They continue to work, for a shorter amount of time lifetime-wise. They choose the things they do. Many of them use those wages to buy things that in 1940 or 1960 people would have said couldn't be imagined. New things come along that people want to work for. If you choose to do some activity, how much will you get paid? If the amount you get paid is really small compared to the other wealth you have, then it's a small consideration. That doesn't mean you won't spend your time doing stuff. Unless, of course, that's the only money you have, in which case you'll be in more trouble.
26:48 Back to the question of what might actually happen. So: intelligent machines that would--? We've described this pretty abstractly. There's a box that can do what most people do, but cheaper. If that happened: pretty dramatic changes. That could be the kind of thing we called in the past a "singularity," like farming or industry. It would be on that scale. Nice abstract description, but is it mechanically possible to make such a box? Is there any reason to believe it might happen any time soon? Now we are getting more into the technology of it, and there are a number of scenarios people have proposed for how this might happen. First, we have machines out there, and we have software out there, and they have some capacity. They are not infinitely stupid. On some grand scale they are pretty stupid. And slowly we are adding capacity to software--making more kinds of software and machines that can do more kinds of things. You could just say: maybe that trend will continue, and the end result of that trend will be really high-capacity machines. An example might be: your car light comes on to tell you your tire needs air--that's a "smart" car--but a smarter car would fix itself. A smarter car would drive itself. We're actually not that many years away from being able to field cars that drive themselves. Google has actually fielded cars that drive. There are more legal and liability barriers there at the moment. Cars that drive--and don't crash. If you look at the trend there and ask how fast we have been improving and how far we have to go, it looks like a long way to go. People have talked about it as a potential singularity, but it hasn't happened yet. The trend is pretty slow. We are doing remarkable things with machines now, and we'll make even more remarkable machines. But humans are still getting the vast majority of income; computers as a percentage of world income are down in the few percent, if that. But that's your view. 
There are some people who think that's the way it's going to happen, right? Well, in this space almost every possible view can be identified with representatives. But conservatively, one view is: look at what's been happening and just project it forward. One scenario. Another scenario: people with a physics background think, we haven't found the theory of intelligence; we haven't found the great equations. If only we discover the great equations, then everything will look much easier. Like trying to do chemistry without knowing about atoms. When you say "the equations," you mean? Some theory. A deeper understanding of how matter connects to other stuff. A theory of intelligence such that it becomes feasible and simple to build it. But the correct theory of intelligence could just be that it's one big hell of a mess and there are lots of details. Just like the theory of how to make a biological organism--a hell of a lot of detail, and you have to get it all right. There is no grand theory of how to make a tiger. Just a lot of pieces. That could be what is true about intelligence--just a lot of things you have to get right. Which is the way most of the economy is. Some people hope for this grand theory. Other people hope there is some way of making a machine that, even if it's really stupid, can learn fast; or of making one that can read, and it reads all of our stuff and then it knows a lot. Most of these are sort of long shots; not much evidence so far; but you can't rule them out, either. There seems to be a confusion between data and knowledge. The internet--whatever that means--knows an enormous amount, more than any of the smartest people in the world. But it doesn't know how to integrate it. The brain is a very remarkable thing.
31:36 Last scenario: porting software. People have spent many years writing software, and often they write software for some old machine or old language that becomes obsolete. They want to have a new machine that works a lot like the old one, except using the latest hardware and computer languages. One approach is to go talk to the people who made the original software, try to create a model in your head of how it worked, and rewrite a new piece of software that functions the same way as the old. A different way is to port the software. That is, you build an emulator on the new machine that makes the new machine act like the old machine, and then you take the old software and just run it on the new machine. The only challenge is to build the emulator. So, that's one approach we can try in order to make smart machines that work like people--to port the human brain's software. What would that mean exactly? I've read a little of what you've written about it. Your brain is a bunch of cells that send signals to each other. Different types of cells, different types of parts; they've got connections to each other. Your brain's software has two parts: one part is which cells are where and which is connected to which, with what type of connection; the other part is how these cells run--what is the rule by which cells take signals coming in and turn them into signals going out? So, I talk to you; signals go in your ear, go through signal processing; other parts decide for you to talk back to me. That's what your brain is. So, to port your brain you need two things. You need to figure out which types of cells are where in your brain, and then you need to know how each of those cells works. And we need to create a box, an emulator, that can absorb that information, right? Right. Once we get this information out of your head, we have to make a computer simulation of your head, which basically simulates your cells and their connections to each other. 
We model it in a computer; in this computer we just make the same arrangement of connections, and then we turn it on. If we've got the connections close enough, this new model should model your brain: it should have the same input-output behavior. You talk to it, and it would give you the answer I'm giving you right now. I'm having a little bit of an out-of-body experience here. I'm sitting--some of these podcasts are done over the phone; this one, being here at George Mason, we're doing face to face. I'm looking at his brain--it's encased in his skull, and mine's in mine--and we're doing this primitive, thousands-of-years-old thing where we're talking with our own bodies and physical limitations. Somewhat exhilarating to imagine we could go to a different model of that interaction. This podcast could be created by the Robin Hanson black box talking to the Russ Roberts black box, back and forth. Would that replicate the conversation we're actually having? Those boxes could be sitting in bodies that look like ours and sending sound to each other across the room like ours, so it doesn't have to be that different. That would be very primitive. What strikes me about this is that there is an irony here: there is a sort of lack of imagination about imagining. Here we have these big brains; and the way we're going to get ahead is with this messy physical thing, and we're relying on this metaphor of the computer because that's the latest thing. It's also humiliating, if you have to admit it. The human race would be prouder of itself if it came up with the grand five equations of intelligence and implemented those. But this approach admits that we don't know how we work and we're not going to know any time soon. We should just make a copy of this mess that we don't get. But there is a reductionist element to this which says--and this is controversial--all there is to our brain is its physicality. Nothing else there. That's not universally accepted, correct? Right. 
Now, I have a physics background, and by the time you're done with physics, that should be well knocked into you. Certainly most top scientists, if you asked them survey questions, would say that's it. Your brain--just chemicals and electricity. Not much room for anything else. It's not as if it's an open question there. Physics has a pretty complete picture of the stuff in the world around us. We've probed every little nook and cranny and we keep finding the same stuff. I guess the argument would be--and I thought this was more respectable, but maybe it's not: We haven't made much progress on those fundamentals. We've made enormous progress on seeing the stuff our world is made out of. Making sense of the larger picture is much more challenging; but almost everything around you is the same atoms, the same protons, electrons; there's a rare neutrino that flies around, photons; that's pretty much it. You have to get pretty far off to even see some of the kinds of strange materials that physicists sometimes probe. Physicists have to build these enormous machines to create environments that make new kinds of stuff to study, because they've so well studied the things around us. The things our world is made out of are really well established. How it all combines together in interesting ways gets complicated, and then we don't get it. The question is: is the whole merely the sum of the parts? I don't think everyone accepts that, do they? In the scientific world. Well, we've never seen anything else. It's always theoretically possible that if something is really complicated and you don't know how to predict the complexity from the parts, you could say: therefore, the whole could be different from what the parts would predict. You could call that just a linguistic difference.
38:36We know how to build a replica of the Eiffel Tower--we know how to do that. We could make a functional replica. One that from a distance would look the same; up close, it wouldn't have the same patina, same rust--you could tell they were different. We're not close to creating a functional brain. So, tell me why not and why we might be able to get there. Better way to say it is: the "smartest," most successful, most advanced, biggest computer fails at most of the tasks we would want this to do. So we haven't gotten close to that level of intelligence. And the artificial intelligence (AI) promise of 20-30 years ago, which was very optimistic about our ability to leap forward, has not proven very successful. We should separate two different issues here. One is technological understanding and knowing how things work and how to make things, and the other is knowing what the world is made of. I make this strong, confident claim: we know what the world is made of; what pieces there are and how they interact with each other. Fine grain. But at higher levels of organization, we don't know how to make other things. Even photosynthesis--we don't know how to make a photosynthesis machine; if we did, we could make a bunch of solar collectors that were more efficient and powerful. You can take your phone out of your pocket and take it apart and you wouldn't know how to make a phone like that, because there are other people who know how to make it and they're not you. Designed by not-us and we don't know how it works. In terms of artificial intelligence, making a machine as smart as a person, it's clear that if you try to design a machine based on an understanding of the human brain, you're just a long way off. That's the attraction of the porting scenario. You don't have to know the design. You just have to know how the pieces work and you copy those. That's enough; because the whole is the interaction of the parts. So what do we not know now about the pieces that keeps us from doing that?
This scenario, which we've called whole brain emulation--taking a whole brain and emulating it on a computer--requires three technologies. One is scanning--you have to be able to scan something in sufficient detail; you have to see exactly which parts are where and what they are made of. Two, you have to have models of these cells--a model of the cell's input signature and then what comes out of it, as a mapping; it doesn't have to be exactly right, just has to be close enough. Three, you need a really big computer. A lot of cells, a lot of interactions. We can do trend extrapolation and say: Where are we now; if trends continue, how long would it take? The computing technology has a nice solid trend; we can project that pretty confidently into the future. The problem is we don't really know how detailed we're going to need to go into these cells. The scanning technology, we have decent trends. This is a vastly smaller industry; small demand. That technology actually looks likely to be ready first. We've actually done a scanning of a whole mouse brain at a decent resolution. A thousandth the size of a human brain. What does that mean--scanning of a brain? They slice a layer, do a two-dimensional scan of that layer at a fine resolution, go across each cell, and then they slice another layer and do the same thing again. Let me ask again, sort of naive question: If you could take a person's brain out of their head while they were still alive, are you going to be able to get access to my memories in this process? my creativity? All these things we think of as more than a physical process, but of course as you say, it's just chemicals interacting. Is it imaginable that we would be able to reconstruct my memories? To the extent we are confident that your memories and personality are encoded in these cells and where they are and how they talk to each other, then if we get that right, we get it all right. That's all you are. Let me say it differently. Looking at it isn't enough.
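The scale of the slice-and-scan step can be made concrete with rough arithmetic. A minimal sketch, where the brain volume, scan resolution, and bytes per voxel are my illustrative assumptions, not figures from the conversation:

```python
# Back-of-the-envelope data volume for slice-and-scan brain imaging.
# All constants are illustrative assumptions, not figures from the episode.

BRAIN_VOLUME_MM3 = 1.2e6   # human brain is roughly 1.2 liters = 1.2e6 mm^3
VOXEL_SIDE_MM = 5e-5       # assumed 50-nanometer scan resolution per side
BYTES_PER_VOXEL = 1        # assumed one-byte intensity value per voxel

voxels = BRAIN_VOLUME_MM3 / VOXEL_SIDE_MM**3
total_bytes = voxels * BYTES_PER_VOXEL

print(f"voxels to scan: {voxels:.1e}")
print(f"raw data: {total_bytes / 1e18:.1f} exabytes")
# A mouse brain, about a thousandth the volume, needs a thousandth the data.
print(f"mouse-scale raw data: {total_bytes / 1000 / 1e15:.1f} petabytes")
```

Under these assumptions the raw scan runs to exabytes, which is one reason scanning, cell modeling, and computing are treated as three separate technology trends.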
Scanning means noticing the chemical densities. There are thousands of kinds of cells in your brain, and each cell sort of behaves a bit differently. What we need is to know, when a cell gets a signal from the outside, electrical or chemical signal, how does that change the cell and what kind of signal does it send out. So, we need to have a model of each of those cell types. We have, actually, models of a wide range of cell types. Doesn't seem that hard to model these cells. We just have a lot of cells to go through and not that much motivation to do it all in a rush. We have actually pretty good models of some particular cells. We have a cell in a dish; we send a signal in, and the model on the computer does the same things.
44:50So, part of the problem is the largeness of the brain, the diversity, the number of cells and different types of cells. But the actual mechanism--how understood is that? The mechanism of what? That connects these cells to each other. Well, when you see sections of active brains--people have put probes in and watched the signals, watched brain activity, even used radioactive tracers to watch which parts of the brain are active. We've done a lot of watching of brain activity. We've modeled these in various ways. I'm not going to be able to persuade you with much more detail than I have so far: here's the basic concept, we have the basic pieces. We could talk about if it worked, what way it would go, which things are how far along. How does it matter which thing is ready first or last? Of the three technologies we need, it could make a difference which one is ready last and how that plays out. Have to go back to that scanning thing for a minute. If I could look into Beethoven's brain creating the 9th Symphony or look into Shakespeare's brain when he was creating Hamlet--picking these as two great achievements that were not inevitable, as opposed to many other great things that would have eventually been discovered by somebody else--you really think that it's imaginable that we could understand that process well enough that we could get his 10th Symphony? That's what you are saying, right? It would be great to have a 10th. Anything now that a computer does--I presume you do use computers and they do useful things for you. Like how you are saying that: Robin's so alarmed at my resistance to his materialistic argument he thinks I'm somewhere in like 1650, somewhat Newtonian sort of. There are ways with certain diagnostic things to actually see the memory in a computer. If you just had a way to look at the computer and see the on-off bits and see that pattern of bits while the computer did something interesting, you'd probably see little relationship.
Wouldn't make much sense to you at all. You are just looking at the pieces without understanding how they are put together. So how are you going to reverse engineer the creation of the symphony? This computer, when you were looking at the bits you couldn't make any sense of it, but you could still port the software. You could still have some confidence that if you know the machine language the software was written in and you get a copy of the software, you could port it to another machine and it could do amazing stuff. You still don't know how it did it--that's the whole point of porting software. You don't have to know how it works as long as you know what language it's written in. Now, if the brain were something other than cells sending signals to each other, you'd say: No, you misunderstand; there are neutrinos bouncing around that you're not seeing, and therefore you are missing the main activity of the brain. I'd say, yeah, true; we'd be copying the wrong thing. But if the main thing is these signals going back and forth and we get that right, then we get the whole thing. If you have a dish of food and you think it tastes great, and you see the ingredients and put them together in the same order, it should make the same food. Looking at the ingredients isn't going to tell you how it tastes, right? That's true. Carry on. I understand the argument. Part of it is, being a religious person, I'm capable of imagining something that is not observable, so that's one issue. I also worry about the limits of our ability to fully master complex processes that have random elements. But maybe that's just a matter of time and application of effort to it. And resources. And we should always throw in an uncertainty here: stuff we might just be wrong about or something we are missing. But then you are saying it's just a matter of time? There could be stuff we'll never understand. No guarantee. We've got three processes; this is only one of the scenarios, the one I think is most likely.
It's a way to make these boxes that are very intelligent and can substitute for people. Now, the way in which these boxes are constructed actually makes it easier to predict a lot of things about how this new world will play out, because these boxes aren't just machines that are smart. They are machines that are smart in the same way we are. So, these boxes in terms of their motivation and thought habits and personality are just like us. They could be just like Robin. But it could be we'd make a Robin who didn't have whatever minor tiny flaws you might have in your brain process--hard to imagine. So, the whole point of porting the software--it's hard to change ported software. You can make some random changes. With most random changes, you'll make a mess. You can get some efficiency gains. Basically you start throwing things out of your simulation to see what you don't need. There will probably be a lot of stuff you don't need. There's the essential part of the process and then some irrelevant detail we copied. We'll figure out how to throw away irrelevant detail; that will make this thing cheaper and faster. Some things we throw out thinking they are irrelevant turn out to be relevant; we turn it on and it doesn't work. But once we've got these boxes that are cheaper and faster, the personality and motivations are just like what we started out with. What changes? You can make them run faster. Can turn up the clock rate or turn it down. Can swap out parts, so they can be effectively immortal, in principle. You can make copies. Once you make one, you can have a thousand, a million. That's what you can do with computer code. Just drag your Robin Hanson into that other folder and you can make two of them. Never alone at dinner. Depends on whether you can afford these things. In principle, you can make more, but they cost. Would it not be like software copying, which is relatively cheap?
It's relatively cheap when you can afford the hardware, but these are things where the hardware is big, expensive; they are the things that are paying for this stuff. So, if you are really rich, for a small thing relative to your wealth it's not much of an issue how expensive it is, but if it's a big thing relative to your wealth, then it's an issue exactly how expensive it is. You don't mind having a large music file because music files are relatively small; you might mind having a large movie file because they take up more space; with an even larger file, might be more of an issue for you.
53:01But here I'm thinking if I buy a dishwasher, which is relatively cheap; but I'd like a dishwasher that would put the dishes back on the shelves and all the things we'd want in a science fiction future. Imagine yourself as a rich, leisurely person with access to buy these things. Seems like a great scenario. Now we should think about things from their point of view. Because these things feel like us and they need to be motivated to do whatever they're doing. They have a world and a life: what does life look like to them? That's the part of this discussion I find mysterious, so let's digress for a moment. The first I heard of the singularity, I heard a pessimistic story, which I mentioned briefly in the Kevin Kelly podcast, which was that these machines would be so smart that they would improve themselves; make other machines that were more capable, at an ever faster pace; and eventually we'd either become irrelevant or enslaved. That would be the dark side of this story. I have trouble understanding--of course it's lack of imagination--the morality and sentience of these creations. It's easy to say it's just like me. It's an emulation that was built to be just like you. You turn it on and you have to convince it that it's just in a box. Let's suppose there was a version of me that looked just like me--maybe a little taller, exploit the technology a little here, it would be 5'11". Play in the NBA. You'd run into "me" doing my shopping for me. It's running my errands, buying gifts for my friends, brushing my teeth in the morning. You're assuming it's doing these things, of course. But now you have to look at it from its point of view and ask why it does these things. But it's mine! I programmed it, I can tell it to do these things. You didn't program it. You made it. Not quite the same. You made your children; you didn't quite program them. I've noticed that. But my TV was constructed; I can turn it off. It can't say: No, I want to keep going!
I really want to display! Why would these things be different? Aren't they different from me? Aren't they more like the TV? Imagine you go sit down in the scanner, get put to sleep, wake up many hours later, get out of the scanner, and now you have to pause and ask: Am I the guy that got into the scanner or am I the new copy? Now when you think about it from either of their points of view, you have to ask: What's my motivation to do the things I'm supposed to be doing? There are many scenarios here. One extreme scenario is slavery. You wake up and the other Russ says: It's time for you to start working for me. And you say: I don't want to work for you. He says: You work for me or you die, and here's my torture button. You start to do what he says. But when I get in the scanner won't I have the settings set so he won't have a torture button? When you anticipate these things and think about it before you get in the scanner. As an economist I have to take everything I know about an economy and put together a consistent story of labor economics, growth, etc., with this new technology. I think economics is powerful enough to let us imagine what it will be like if you introduce this technology, even one as strange as this. Another scenario is: You get in, you know you'll make copies of yourself, and there's already a deal about these new copies' lives. You approved that deal. He's going to have this wealth I'm endowing him with, and he's going to have this job opportunity, looks like a good job he can work at; possessions; and he's going to go off and work in Indonesia and he won't bother me, except I'll send him messages once in a while. You can imagine a world like that. Now, if that's the way it works, you have to imagine a competitive world and which ones will choose to do this or not. What will the market look like? Imagine that you are a company and you are trying to sell these boxes to people.
The reason people buy these boxes is they are smart and capable and do lots of stuff. If these boxes talk back or complain a lot or sit down on the job, that's not going to be an attractive box to somebody. So, you want to fill these boxes with people who will make a good arrangement. Want to find people who will say: Yes, I'll live under that kind of scenario. I'll work this many hours a day, I'll get this much leisure, I'll be earning this much in wages toward my own discretion. You could imagine, for example, if you create a creature--a copy--then it's up to you to fund that creature. And if you fund him enough, you could run out of money. People would choose how much to fund copies of themselves. Since people could make trillions of copies of themselves, you have a big selection factor. People who are willing to work for less, are more productive, willing to accept worse conditions--you could make trillions of copies of them. Just like now--there are millions of people who would be happy to have a record company sell their songs. The record companies are very selective. They don't just randomly make copies of every song and distribute them. They pick the musicians who make the best songs but also are willing to make arrangements acceptable to the studio so they can make a profit. In the end you have a small number of people, and lots of copies of songs by those people. Even there, people have a preference for variety, so in this larger labor world it's not clear how much variety preference there will be.
1:00:28Part of the problem I'm having with the story and grasping the implications for growth is: Is this story similar or radically different from a more traditional artificial intelligence story? In other words, this brain emulation strategy, which is really just a way of expanding population in a certain dimension--it's creating college graduates, Ph.D.s in a relatively short period of time. The traditional story: A device is self-aware or self-repairing or self-motivated through so-called artificial intelligence rather than the mere replication of a human brain. Shouldn't put the word "mere" in there. In that world, the traditional story, would still be an issue of could you beat up your robot or would you feel comfortable doing so or would it let you, does it have rights, could it get out of control? Seems to me the set of moral, philosophical, and legal issues that would be raised by the emulation strategy would be different. Is that true or not? It's more a small subset. When you talk about artificial intelligence, you are really in a quick way talking about a vast space of possibilities. The space of possible minds is vast. Minds are complicated, so they can be very different. You could assume that we're able to design intelligent machines such that they are willing slaves and there is never any issue of them rebelling or not obeying because you've designed them that way, but that's just one possible corner. The scenario happens to be a place we can reason about because it's familiar. Cuts down the difficulty of analyzing the generic artificial intelligence scenario, which is generically really hard to say things about. The whole rebellion thing is hard to understand. Kevin Kelly, the idea of what technology really wants, the idea that technology can have a mind of its own. I understand that within the sense I understood him to mean it, which is an emergent order sense, that there are certain natural processes and we can't control them. 
We can direct them, steer them, influence them. This is a different level. This is more like animals, horses or llamas or kangaroos. The idea here would be that my car--I don't understand how we go from a world where my car drives itself and I say: Go to the store. I might pick the wrong store, but it doesn't say: I'm not in the mood to go to the store; I'm in the mood for a day at the beach. How do we get from its current world to a world where it has a mind of its own? It might be easier to take the world we have with creatures with minds of their own and think about the extension in that direction. Farmers and herders have had animals, and animals have minds of their own. In the vast space of all possible animals, we've actually been very selective and only domesticated a very small number of animals--the ones who are most cooperative, treating us as the head of their social group and doing what we say. There are many smart monkeys out there who could do some of the jobs of our economy if only they would cooperate. They would do the job for a little while, but then they'd turn around and smash a bunch of stuff. Just like a person--urge for a banana. Humans are actually far more cooperative than animals, so humans can fit in and do most jobs. We don't want to take a random creature with random motivations and a random tendency to do whatever it does. Random animals are hard to use. But that's because we didn't design them. We tried--we bred them for docility. That's the key question about artificial intelligence: will it be designed? Because the whole-brain emulation is not designed. You just copied it. It is a mind of its own. But many other corners of AI are designed, which means they would be controllable. We can design controllable things. There remains a question of whether we can design them. Artificial cells: cells are really complicated. We might imagine designing our own cells and then designing cells with features we want.
Turns out it's too complicated to design a cell. Might as well stay with the cells we've got and modify them somewhat. Then the future paths of cells will be things that retain a lot of the features of old cells.
1:06:39Want to get back to the economics and the growth idea. So, if we stick with the emulation strategy, if we had that capability and some people availed themselves of it and bought from these vendors, the quantum leap we would get, the singularity part--we would have an opportunity to create a lot more people a lot more quickly. Effective people. Why would that lead to enormous increases in growth? Is it only because the copying costs would be very low? A college graduate takes 22 years; this gets you one more quickly. Is that going to lead to a quantum leap? Those things aren't going to eat, by the way, are they? They'll use resources of some sort--electricity, power. They can sit out in the rain, maybe. I would not leave my box out in the rain. I would bring him inside. If these are made out of computers and computer technology then they would inherit the rapidly falling costs. Computer chips get twice as cheap every two years, and these boxes would get twice as cheap every two years. Even holding everything else constant. Then if the economy is made out of these things, then the economy is getting twice as cheap every two years. So, just right there that would suggest a much faster growth rate, merely because of the kind of thing they are made out of. But that's not the main effect. So, I could afford more of them. But if they don't work for me, if they are not controllable, why would I want one? Same as an employee--they'd do stuff for you, you'd pay them. These would just be cheaper employees. Because? Why would they be cheaper? Actually, Ricardo got this right back in 1820. Published a paper on machines substituting for people. Simple calculation: found that when a person and a machine are direct substitutes, then the wage the person gets can't be any higher than the cost to rent the machine. When the machine gets cheaper, the wage falls.
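The Ricardo point can be sketched numerically: when a machine is a perfect substitute for a worker, competition caps the wage at the machine's rental cost, and if hardware halves in price every two years the cap falls with it. A minimal sketch; the starting rental price and the human reservation wage are made-up numbers, not figures from the conversation:

```python
# Wage cap under perfect substitution: the market wage cannot exceed the
# machine's rental cost. All prices below are illustrative assumptions.

def machine_rental(initial_rental, years, halving_period=2.0):
    """Hourly rental cost after `years`, halving every `halving_period` years."""
    return initial_rental * 0.5 ** (years / halving_period)

def equilibrium_wage(reservation_wage, rental):
    """The wage settles at the cheaper of human labor and machine rental."""
    return min(reservation_wage, rental)

RESERVATION = 15.0  # assumed human reservation wage, $/hour
for year in (0, 2, 4, 8, 16):
    rent = machine_rental(initial_rental=40.0, years=year)
    wage = equilibrium_wage(RESERVATION, rent)
    print(f"year {year:>2}: machine rent ${rent:>7.2f}/hr, wage ${wage:.2f}/hr")
```

At first the machine is dear and the wage is unaffected; once the rental falls below the human wage, each further halving drags the wage down with it.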
If the world economy is dominated by employees for which it's easy to make more of them, then wages have to fall to the rental price of the machine--under a scenario where people make as many machines as they can without losing money. More of a zero-profit constraint. Some people would be very shy about making copies of themselves; others would not. An employer might create a bunch of these--an entrepreneur might create a bunch of copies of himself to save on labor costs. That would drive down the price of competing labor with his skills. Three things I keep thinking about. One is it's just like having a kid. It goes out in the world, does its own thing. I'm a fan of population growth, an unpopular thing, but I think more population is good, more creativity is good, more ideas, more trade is good, economies of scale. The flip side of that is environmental worries, unsustainability, limited resources--I'm not as worried about that as the average person. The second model is: Gee, it would be great to have a couple of these around the house. They'll rake the leaves, do all the things I hate doing--empty the dishwasher, do the shopping--so I'll make a few for myself. They'll be the dishwashers of the future. The third thing is: This would be great for my factory. Are all three of these going to be happening at the same time? We're talking basically labor economics. But we're talking labor economics with a new supply curve, a new sort of supply of labor. All the other labor economics applies, all the usual insights into where labor is allocated. Tricky: you can't just say your wage will be driven down, because if there are complementary types of labor they'll increase the wage rate of some people. It's only the substitutes that will have lower wages. The key scenario is that if I can just make a new me for $1000, and then I can rent this new me out for, say, 10 cents an hour, I may not choose to do that.
But if there are millions of people who can, it just takes one of them to make lots of copies of himself and rent them out at 10 cents an hour. If one of them does that, then none of the rest of us can really earn more than 10 cents an hour in competition with him. Well, we can if we have different skills. But not if we are direct substitutes. So, then the question becomes: in large areas of the economy where people have very similar skills, very quickly wages would fall to the rental on these machines. People with very specialized skills could keep wages higher, but now more people would be gunning for those high-wage tasks, trying to train some copies of themselves to do that. The larger labor economics looks more like software economics, where each software vendor's primary costs are the training costs as opposed to the cost of making copies. Same as for music. Low marginal cost, high fixed cost. The cost of creating a new kind of thing has to be spread across how many copies you sell. The wages would then be set not by the cost of the machine itself but by the cost of the rental of the software. Do you think immigration lowers wage rates in the United States? On the margin, all else equal, hard to tell. There is certainly a direct effect. What's the difference between that and this? Aren't these just creating a lot of immigrants? If the world had trillions of immigrants in the water just off the shores, just waiting to come in, each willing to work for a dollar an hour, then for the kind of things they'd be willing to do, yes. Right, but those things would fall in price; but there are all these complicated secondary effects that in this case we'd be happy about. When you say that wage rates would go very low, I'm not quite sure that's true. We can take any other thing that's sold at marginal cost today and apply the same argument. The reason why you are not tempted to make those arguments is it's a small fraction of the economy.
And the other people who now can buy that stuff at much lower prices have expanded opportunities that help the people who lost their wages. It's a complicated story. Anything that gets cheaper in one part of the world makes the rest of the world richer. Changes the demand curve. In principle, could raise the price because the demand goes up. Small effect. All I'm saying is I don't think that the only effect of a large immigration into the United States of the people who labor is to lower the price of that skill. That's the simple supply and demand. If you've got a flat supply curve going off into the sunset, then you intersect supply and demand and no matter how much demand increases, the price still falls back to that flat supply curve. The question is: how close are we to a flat supply curve in this scenario. Now if you are talking about immigrants from other countries, the supply curve isn't flat there. High elasticity; takes money to immigrate to the United States, the first few would come here but later others would be reluctant; upward sloping supply curve from the rest of the world. If it were really a flat supply curve--wormhole bringing aliens from other galaxies, South Park time machine of the future--then it brings the price down to that level.
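The flat-supply-curve argument above can be sketched directly: if copies can be supplied in any quantity at some price, the supply curve is horizontal there, and shifting demand outward changes the quantity traded but leaves the price pinned. The demand curve and prices below are made up for illustration:

```python
# Perfectly elastic (flat) supply: the equilibrium price stays at P_STAR no
# matter how far demand shifts out. All numbers are illustrative assumptions.

P_STAR = 0.10  # assumed flat supply price: copies offered at $0.10/hour

def quantity_demanded(price, intercept, slope=1000.0):
    """Linear demand Q = intercept - slope * price, floored at zero."""
    return max(0.0, intercept - slope * price)

for intercept in (1_000.0, 10_000.0, 100_000.0):  # demand shifting outward
    q = quantity_demanded(P_STAR, intercept)
    # Quantity grows with demand; the price never leaves the flat supply curve.
    print(f"demand intercept {intercept:>9.0f}: price ${P_STAR:.2f}, quantity {q:,.0f}")
```

With an upward-sloping supply curve, as with real-world immigration, part of a demand shift shows up in the price; with a flat one, none of it does.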
1:16:20So, who gets rich? As usual, people who have scarce things get rich. In this economy, there will be people who own patents on the process. People who own factories that make these machines; real estate; transport for these things; people who own the key raw materials. This entire world is by definition richer, can produce much more, but it needs inputs somewhere. Whoever owns those inputs owns that wealth. Even owning a small portion. Demand for real people--maybe hard to measure. Can think of the spectrum of available jobs in terms of the relative advantage of these kinds of people. You can sort them that way and then ask: There are some jobs that have a high advantage of people, relative to machines. On those jobs where people are the best relative to machines, those people can get a high wage. But these boxes are going to be just as good as people. If they are better at everything, then that's not true, right? Depends. If you really wanted a human waiter at your restaurant--if that were high class or high status. Keep coming back to more mundane arguments. You wouldn't want to argue that population growth over time is going to lower wage rates--because it doesn't. So, what's different about this? This is a way to understand the relationship between humans and machines as both complements and substitutes. I think that's the key conceptual barrier. So, Ricardo did this paper in 1820 about humans and machines as substitutes, got the straightforward substitution effect. The more common view over the last century, starting with Wicksell in the late 1800s, was that humans and machines were complements. As complements, machines make wages higher. Are humans substitutes or complements? There are many different tasks that need to be done, and tasks are complements; but humans and machines can substitute on a task. Many things that need doing. The better we get at doing any one thing, the more valuable all the other things get.
For example, transportation. If it gets cheaper, we are tempted to transport more kinds of things, and each kind of thing we transport gets more valuable because we could transport it. Tasks are complements. Even within a company, on a factory line, complements--you need to do quality control, pick up the materials. If you don't do all the tasks, you don't get the product. Marginal product goes to people who do tasks that are valuable, and tasks are complements on the margin. The tasks you do better make all the other ones have higher marginal value. When machines did just a small range of tasks, that range didn't change very much. But if they could do everything, it's more complicated. As they get better, there's also a substitution-on-the-margin effect--there are tasks you can now use a machine for that you used to do instead. There's the bulk effect: overall, as machines get better they raise the value of all the other tasks being done; then there's also an on-the-margin substitution of who does which task. Then there's this curve that represents humans versus machines; if it's really steep then mostly you have the complementary effect. But if this curve gets flat, the substitution effect. Usually you imagine that there are things you'd want a physical person for, but in this scenario the way you've described it, it's hard to argue there'd be much of an advantage for an individual person. Good to have capable machines, at least if you have a chance of owning the machines or inputs to the machines.
1:21:15Now an hour and 21 minutes into this podcast. Big picture questions. Do we want this world? Do you want to live in this world? I think I do. But I think people overestimate their influence. The first-cut job for an economist is to figure out what the world will actually be like, figure out what's likely to happen, and then maybe ask where you'd like to move things on the margin. A humble policy analyst thinks in marginal terms. What I'd want to shift toward is a little more foresight: people should realize that they won't be able to make money on wages forever and make sure they have other assets--real estate, stocks, etc. People would be happy to give a little bit to those who didn't have these assets to begin with. If I had the ability to stop this or not, I think I'd want it to happen; I'd like it to happen because it's a world with vastly more wealth and vastly more people who find life worth living. Wonderful to have lots of people who enjoy their lives. What about the environmental issues--a trillion robots, where are we going to get the energy to plug them in at night? The universe has enormous amounts of material, including energy. These things can be really small--millimeter-sized robots; no reason they have to be the same size as we are. How's it going to hold the scissors to cut my hair? One of the reasons people are cautious about the environment today is because we are biological creatures; we need biological inputs, which requires a healthy-enough ecosystem to supply those things and not poison us. If we have machine bodies--if the world becomes dominated by machine bodies then they won't need the environment in the same way we do. They'll be much less eager to preserve it. Might keep zoos going, things of the past. Don't I want a sunset? Depends on how much you want to pay for it. But a lot of people want to enjoy the sunset. This world is a world of people who are much closer to subsistence than our world.
Much more of a Malthusian scenario, where per-capita wealth falls and people are individually struggling more to make sure they survive. There are so many of them that the world is vastly wealthier; but the question is: poor people, how much do they want to spend to save a sunset? Historically, more limited. Part of being very rich today is that we are indulged. But this is a much bleaker scenario than I expected. I thought, going back to the great arc of human history: we've seen a transformation of enormously larger numbers of people leading to much more materially rich and longer lives. You are portraying a technological change here that I keep thinking of as leading to that same process: greater numbers of people and higher income per capita. But you are suggesting this is going to be a very bleak world of super-rich people who have command over these things and a bunch of drudges who limp along near subsistence. Why wouldn't it be more like the transformation we've had, but faster and better? I want to hear more optimism, and if not, why wouldn't we try to stop it! The long-run stable trend is toward more knowledge, more capacity, more power, a larger total capacity in the world. That's the clear stable long-run trend. We will continue to be able to do more, to increase our capacity to draw on more materials with more insight and more ways to deal with it. That long-run trend has produced in the past an acceleration in growth rates. We've learned how to grow faster, so we were able to grow faster. Over this time we had a relatively stable human reproduction technology. Humans' bodies haven't changed very much. When we could grow the economy only very slowly, human reproductive capacities could easily overwhelm the growth rates, and per-capita wealth stayed low. As our ability to grow wealth became faster than human reproduction rates, per-capita wealth grew. But in part that's not just a consequence of growing the wealth faster. 
It's a consequence of this stable, stuck reproduction technology. In the next singularity, which I suspect will involve a technology that allows a rapid increase in the population, there is no particular reason to expect per-capita wealth to rise, and in fact there's a reason to expect it to fall. Total wealth is just different from per-capita wealth--it's a ratio. But I don't care about total wealth particularly. I care about my wealth; we all care about our individual stake. Well, of course your personal wealth could increase too, so long as you are selective about your population. You could spend your increased wealth on having more copies of Russ, and therefore having a larger Russ population; or you could spend your wealth on having one Russ who is richer. Each individual will continue to have those options. Individuals can either have more descendants or a smaller clan that's richer. It's possible now to choose to use your individual wealth to have a much larger population where per-person wealth is smaller. But these copies are not biological. Doesn't matter. It does matter, because there are different resource demands. They could be dramatically lower than a human being's. But it's the rate of growth of the resource demands that is the key thing. That resource demand is part of their price. The price of making one of these things is first the fixed cost, and then there's the maintenance cost. Just like a child. If those prices fall, the price of making these things will fall, and some people will just go wild making a lot of copies. And that will dominate the population--the small fraction of people who choose to have lots of copies. The mathematics of the per-capita thing is dominated by those people. That doesn't have to be you.
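A minimal numerical sketch of the ratio point above (the growth rates here are hypothetical, chosen only to illustrate): even if total wealth doubles every two weeks, a population of cheap copies that doubles faster drives the per-capita ratio down every step.

```python
# Hypothetical rates, for illustration only: total wealth doubles every
# 2 weeks, while cheap copies let the population double every week
# (x4 per 2-week step). Per-capita wealth is the ratio, and it halves
# each step even as total wealth explodes.
wealth, population = 1.0, 1.0
for step in range(6):
    print(f"week {2*step:2d}: total={wealth:6.1f}  per-capita={wealth/population:.5f}")
    wealth *= 2        # doubles every 2 weeks
    population *= 4    # doubles every week -> x4 per 2-week step
```

Total wealth rises without bound while the per-capita column falls by half each line, which is the "vastly wealthier world of people near subsistence" in miniature.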
1:30:47Those copies will make me richer, I think. The other person's copies. Unless you are relying on your ability to compete with them. And they are just like me. If you own things like real estate, patents, stock--things whose value doesn't decrease as the world gets more competitive. You wouldn't say that with respect to immigration or population growth. What's different here? Lots of things in our society lose value because there are other things that outcompete them. Twenty years ago, if you owned a patent on a cell phone, that would be worth a lot of money; but as better cell phones came along, that patent became less valuable. With technological progress, some kinds of assets become less valuable because they are rights to produce a certain kind of thing in a certain kind of way. Your wage is like a certain kind of patent. You can't count on it if you are not going to be improving. So, why does that not hold in population growth generally? You wouldn't want to argue that population growth has lowered the return to labor. You could argue it ceteris paribus, but ceteris paribus doesn't mean anything there, right? You can't hold all things equal. More people want to go to college. There are all kinds of things that offset the supply and demand in that one market, right? Are none of those coming into play here? Think about thousands of years ago in the farming world. There was growth in the economy, but the population could grow quickly, and these other effects couldn't counteract it quickly. The wages of labor fell relative to the price of land, and if you wanted to own something permanently valuable back then, it was more valuable to own something like land rather than the ability to have a child who could make money, because on the margin you would be near subsistence, and the expense of feeding the kid would be close to the cost of creating them in the first place. Not a winning strategy to try to create kids three thousand years ago. 
It's about two different time scales and how they compare. Now, we can't increase the population as fast as we can increase wealth. That's a mechanical way to say it. The other way to say it is that our rate of productivity change is outstripping population growth. They are not totally independent. Last question, which has to be asked of Robin Hanson: Would you bet on this happening? Not only would I bet on it, I think we should bet on it. All of these sorts of things could be illuminated in betting-market prices, if only someone wanted to have betting markets on it. The virtue is that it would help us prepare for it? It would say what would be likely to happen so you personally could prepare for it; it would also offer possibilities where, conditional on various policies, things would be likely to happen or not. Should we subsidize robotics more? People tend to throw up their hands and say nobody can predict, so nobody should think about it. I think it's not that bad. But you also shouldn't just rely on visionaries who spout exciting stuff, because they'll tell you things you want to hear and things that excite you; not very realistic. If you want realistic, hard-headed estimates about the future, I don't see how you can do much better than getting people to bet on it. Could get people to bet on it if you subsidized it. Nobody's stepped up to do that yet. Mark Cuban, willingness to subsidize creative things; owner of the Dallas Mavericks, very entrepreneurial.

Comments and Sharing



TWITTER: Follow Russ Roberts @EconTalker

COMMENTS (81 to date)
Steve Fritzinger writes:

Would we ever get to the point where we are creating duplicates of Russ in a box? I doubt it.

We'd need more than just a perfect scan of Russ's neurons to recreate Russ. We'd also need to know how the neurons respond to signals from each other. These responses are learned and are not visible in the raw interconnections. We'd have to probe, neuron by neuron, to figure out all these trillions of potentiation responses.

So, assume this path to intelligent machines is fruitful. Since the raw interconnections are much simpler than the inter-neural potentiations, we'll learn how to build a new brain long before we learn how to recreate Russ's brain. And we'll learn how to build simpler brains long before we learn how to build human-equivalent brains.

Once we have simple, but still very intelligent, artificial brains, the motivations will be to improve these brains in ways that make them good at doing things human brains aren't good at. We already have Russ and lots of good Russ substitutes (Sorry, Russ. You're a great guy, but it's true). We don't need more perfect copies of Russ.

This realization leads to the more interesting question of what we will want these artificial brains to do and how their new capabilities will change what we want. And if they eventually develop consciousness, what will their motivations and needs be? They won't want the same stuff we do, so that will be a whole new world which we can't predict.

That's the singularity (if one exists). Not just that we will make trillions of cheap people who don't eat real food and who take up less space. It's that we'll have an entirely different type of production of, at least, IP goods and an attendant cornucopia of new physical production means.

I think that's a much more hopeful scenario than the one Russ and Robin went down during the last 30 minutes of the podcast.

Eric writes:

I've not yet finished listening to the episode, but isn't a slave analogous to an A.I. bot labor-saving device? It has a lot of the attributes of an intelligent machine and labor-saving device. The slave may cost more to maintain, however, since food is more expensive than energy (right?). They also take longer to grow, and their bodies are more frail. However, they are similar in that they are intelligent while not requiring wages.

Michael Rooney writes:

Your discussion with Robin Hanson was very interesting, notwithstanding the software points raised. Please consider having a similar conversation with Ray Kurzweil. He truly understands technology, hardware and software, and its future directions, better than most. Among other things, he is currently working on having a copy of his brain made.

While he is not an economist, he would elevate the technical discussion and understands economics well too.

Keep up the interesting conversations.

Happy New Year,
..michael..

Sean Heismann writes:

This is the first time I've ever been compelled to write about an EconTalk podcast. Russ and Robin discussed some very interesting issues in the philosophy of mind, but at a crude level of detail. My underlying complaint is that many relevant issues in cognitive science and philosophy of mind were either incompletely explained or elided together, failing to make relevant distinctions and answer basic questions in those fields. For that, Russ and Robin may be forgiven, considering they're economists (they know more about phil. of mind than I do about economics, though). For a good "lay of the land", Paul Churchland's "Matter and Consciousness" is a great start, though he has a specific materialist line of goods (eliminative materialism) he wants you to buy. There are other fascinating, approachable works that give alternative viewpoints, but I don't want to hand in a biblio. I'll just state that some version of materialism is the only thing going, and that the viability of dualism (the belief that the universe has two kinds of "stuff," mental stuff and physical stuff), with the exception of interactive property dualism, is a dead issue.

Nonetheless, there is only one position, token materialism, that accepts the possibility that machine intelligence could approximate human intelligence, but you have to accept other things with it, like the principle of supervenience, the role of qualia in making distinctions between sensations and thoughts, the possibility that there are psychologically isomorphic mental states between dissimilar life forms, and so on. Even at a sort of basic level, while token materialism is the currently popular theory of mind (and my preferred theory), it's got problems, notably pointed out by John Searle in "Minds, Brains, and Programs."

As well, attempts at prediction that hinge on "We've seen this kind & rate of growth in the past, so we should expect it in the future" are like folks who bought tons of mortgage-backed securities: the price of land *never* goes down, so it's a sure bet. Unfortunately, there may be physical limitations to the materials that we use in creating strong or weak AI that cap the possible "level of intelligence" (whatever that means) any non-biological machine may achieve.

Keith Beacham writes:

I usually find these podcasts enjoyable and in many instances useful, though I do not share Russ' ideas about political economy. I found this podcast horrifying.

Peter Van Valkenburgh writes:

Here's some optimism (which I think Russ felt was lacking).

If we could move our mind (by making a copy of it) into a potentially immortal box with various manifestations, why would we maintain our original biological forms at all? Today: Russ-in-a-box wants to be in his human-like android form, tomorrow: he's a purely digital entity surfing the next-gen internet, Tuesday: he's a spaceship flying around mars.

Also, regarding sunsets and sentimentality, Kurzweil had, I think, a somewhat more enlightened perspective than the bleak ("well, who would be willing to pay for them") one presented here. In a world without scarcity, it is more costly to destroy something than to allow it to continue to exist. For example, old webpages often do not die; they just sit on old servers unused (the Space Jam example). If we become these digital beings (our consciousness grafted onto lower-resource-dependent non-biological vessels), then there is every reason to expect us to abandon large parts of the world back to their pristine state. We don't need the space, and occasionally we get great joy from seeing it (when we port our brains into an ATV and go joyriding).

Mads Lindstrøm writes:

Russ, you seem to doubt that we would be poor if we had rapid and continual growth. Imagine we double the number of bots every two weeks. It would take a surprisingly short time before there were as many bots in the universe as there are atoms in the universe. Clearly, we cannot be wealthy if we can on average only own one atom. Obviously we would never get to one atom per bot, but we would run short of natural resources sooner or later. In the long run, Malthus works.

Here http://www.universetoday.com/36302/atoms-in-the-universe/ they estimate the number of atoms in the universe to be between 10^78 and 10^82. These are huge numbers. But log2(10^78) * 2 weeks ≈ 518 weeks ≈ 10 years.
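A quick check of that arithmetic for both ends of the 10^78-10^82 estimate:

```python
import math

# Weeks of 2-week doublings needed to go from one bot to as many bots
# as there are atoms in the universe (estimated 10^78 to 10^82).
for exp in (78, 82):
    doublings = exp * math.log2(10)   # log2(10^exp)
    weeks = 2 * doublings             # one doubling per two weeks
    print(f"10^{exp} atoms: {weeks:.0f} weeks ≈ {weeks / 52:.1f} years")
```

Either way the exponential runs out of atoms in roughly a decade, which is the comment's point.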

Nick writes:

Wow, one of the worst episodes I've heard. Very dull and far into fantasy land. The only interesting questions come at the end, where I think Russ cornered Robin and exposed quite a flaw in his theory, but he was unable to really defend it very well.

That is to say, if trade and specialization create wealth and I could copy myself, why could I not trade with my copy, and wouldn't this create wealth? The copy only has the same skills as me right after the point it's created. Once it can go learn or develop its own skills, it can increase the size of the economic pie through trade and specialization.

I agree with Russ; I don't see this as different from population growth or immigration.

Arnim Sauerbier writes:

"Your brain software is two parts: which cells are where and who is connected to who with what type of connections; other part is how do these cells run? What's the rule by which cells take signals coming in and turn them into signals going out?"

This view is quite wrong, or at least stated in a very misleading way. Our 'software' isn't only static code, but includes emergent ghostly patterns in our (sadly too limited) forebrains. The spirit is not the brain... but something that is transferable - I am transferring it to you, the reader, right now.

To the serious student, I recommend Rodney Cotterill's
"Enchanted Looms: Conscious Networks in Brains and Computers".

David B. Collum writes:

I am part way through, having some allergic reaction akin to some already posted, but listening with an open mind (or at least trying). With that said, I was almost immediately reminded of this 1964 video of Arthur C. Clarke describing the limitations of making predictions, as well as a few predictions of his own (that, as you will see, prove stunning)...

http://www.wimp.com/predictingfuture/

Ross Huebner writes:

This will NEVER be allowed to occur. The very first reason is that government will not be able to control it. It may, in fact, cause a revolution that will cause the "hidden hand" to lose their power over Congress and the other meat puppets/robber barons that are currently in control.
The second reason is that our current rate of technological advancement is already under the heavy hand of the federal government. Any invention that is produced, whether it be a technological one or a mathematical algorithm that allows for an unbreakable cipher, is already blue-slipped by the feds and not allowed to be produced until the National Security Agency has the ability for complete, uninterrupted access.
Why do you think that when SATCOM 5 went down years ago (during the beeper and brick-phone era) all cell phones and beepers ceased to function? The reason is that all electronic equipment with data-transferring ability sends its data through outer space, where the federal government does not need a wiretap to intercept your phone calls.
To think that the government will allow technology to increase at such a rapid rate is just naive. Sorry, but anyone who does not know the choke hold that our government has on technology is simply unqualified to be writing an article espousing such a utopian concept.
IT WILL NEVER BE ALLOWED!

Jim Ancona writes:

First of all, great podcast--please schedule Robin for a sequel. I felt like you could easily do another 90 minutes on this topic.

Second, for a (very funny) sci-fi treatment of a world that might result from Robin's scenario, see Charles Stross' Saturn's Children: http://www.amazon.com/Saturns-Children-Charles-Stross/dp/B001QXC48Q

Finally, I felt like Russ didn't ask some of the obvious (to me) questions:

- What legal rights would an AI clone of a human being have?
- If I allow myself to be duplicated, should I care about the subjective experiences of my clone? After all I (my meatspace version, at least) won't experience them.

All in all, great program!

Jim

Steve writes:

I was excited to see an episode on this topic. It is too bad that Robin Hanson was a bit grating and overstated his points.

I think Russ did a good job; in regards to several areas, an interviewer can only press so far. On whether the materialist account of the mind is an "open question" or not, Hanson didn't actually go so far as to claim no intelligent person questions it; he only implied that no one worth listening to does so. He certainly thinks that no self-respecting physicist would think otherwise. That condescension isn't helpful, nor is it apparent that a physicist is an obvious choice for considering theory of mind. He needs to be more inclusive; besides, the lack of a consensus doesn't invalidate his larger point. I mostly agree with him, and yet even I found this part insulting.

Also, I am not sure what mouse brain "scan" he was referring to. Again, I think he exaggerated the state of that art. The NY Times just published a short piece on this topic: "About one petabyte of computer memory will be needed to store the images needed to form a picture of a one-millimeter cube of mouse brain" http://www.nytimes.com/2010/12/28/science/28brain.html?pagewanted=1

Scot writes:

I'm sure I'm way out of my depth but can someone explain to me how something like whole brain emulation gets off the ground without putting an actual human consciousness through some degree of torture? To wake up in a totally alien environment might be terrible. It's hard for me to imagine how you make the artificial environment comfortable for an emulated brain without experimenting on what is essentially a human being. It seems very cruel.

kenyata dogu writes:

Two points

"Prediction is hard, especially about the future".

As a child I was formed by golden age science fiction: Asimov, Heinlein and the rest of that pantheon. What is interesting is how hindsight reveals blindness. To wit, Heinlein frequently mentions the use of sliderules. There was no mention of hand calculators, let alone laptops or iPods. What are we missing in our predictions of the future?


"A copy is still a copy"

If "I" am transferred into another container and the original destroyed, the transferred "I" would be a copy unless it is possible to establish a single thread of experience. E.g., assume the original and an initially empty ego container facing each other. To establish a single thread of experience, I believe the following steps would be required:

  • The original modifies the container. This can be by moving the container's arm (if it has one) or otherwise modifying it.
  • The ego transfer takes place.
  • The original is now "empty", however that should be defined. Alive, not unconscious, but empty. The ego in the container notes that the marking done in the first step is present.
  • The container now marks the original.
  • Transfer the ego from the container back to the original.
  • The ego (now in the original) remembers it has been marked and that the mark is present. The container is empty.


This sequence assures a single thread of experience. Of course, this concern with original and copy is grounded in an early 21st-century view of what personhood means. However, in the future, having multiple copies of an ego extant might be as normal as body modification is today.

David B. Collum writes:

So I finished and found myself feeling that this was a familiar topic. I found it, and it was Robin Hanson's 2007 interview...

http://www.econtalk.org/archives/2007/05/hanson_on_healt.html

I found this first interview uplifting. So what was the difference? My sense is that the first interview focused on undeniable improvements in quality of life in the past (convincingly so, I might add). The current one seemed to suffer from two problems: (1) a presumption that game-changing events--punctuated equilibria--will necessarily be favorable (with some very bold statements about rates of growth), and (2) a presumption that exponential functions have no limits. (Albert Bartlett of the University of Colorado gives excellent talks on the failure to understand exponentials.) Besides that little dose of exuberance at the outset, I thought it was a thoughtful discussion of some wild ideas. History shows that, if you can imagine it, there is a decent chance that somebody will eventually figure out how to do it. When you watched Captain Kirk talk to his all-knowing computer, did you ever imagine that Google would put it to shame? Which of you boomers imagined that the miraculous light capable of cutting through solid materials (lasers) would be used to scan groceries and play recorded materials? I am currently of a more pessimistic persuasion about the near term (which may be a generation), but pessimists tend to be correct only transiently.

Frank Howland writes:

I was expecting a tale of incredible per capita wealth and then was surprised and a little bit shocked to find a vision of the future which Russ Roberts correctly called a very bleak picture. So a dystopia instead of something closer to (materialistic) utopia. However, Robin Hanson seems to like this vision of the future, focusing on the large increase in total wealth rather than the sorry story of low levels of wealth for the vast bulk of the population (counting the things we create as people, which is morally the right thing to do if you believe Hanson's claims about creating copies of our minds).

The extrapolation from past history struck me as pretty silly. A "remarkably tight prediction" from three data points. I suppose that this part of the podcast is not to be taken seriously, but in that case why waste our time on it?

jimM47 writes:

What I thought was really missing from this discussion was consideration of time. If you can put a brain in a computer, and processor speeds keep on doubling, then pretty soon after the first ported humans come into existence, a ported human can live an entire lifetime in a year.

At that point, why would you want to constrain ported human beings to living in real time? The computers they lived in could be networked together so that they could interact with each other, design costless living spaces, and generally live out the next few millennia of human cultural and technological advances within a single lifetime of someone "on the outside."

Real humans on the outside would basically have the job of maintaining this virtual world, and would presumably be rewarded with vast wealth for doing so. All the surplus of a millennium of human growth "on the inside" would be available to pay for that maintenance on the outside.

But those on the outside would also miss out on the world inside. Think of the cultural surplus alone. This sped-up world could produce blockbuster movies at the rate we produce blog posts. How long before real human beings are simply incapable of understanding the cultural, academic, and technological products of our immortal time-dilated cousins?
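The "lifetime in a year" claim is easy to check with back-of-envelope numbers (the 18-month doubling period and 80-year subjective lifetime below are illustrative assumptions, not figures from the podcast):

```python
import math

# If emulation speed doubles every 1.5 years (a Moore's-law-style
# assumption), how long until an emulation runs 80x real time -- i.e.,
# an 80-year subjective lifetime fits in one calendar year?
doubling_period = 1.5   # years per doubling (assumed)
target_speedup = 80     # subjective years per calendar year (assumed)
years = doubling_period * math.log2(target_speedup)
print(f"about {years:.1f} years")
```

Under those assumptions, "pretty soon" means roughly a decade after the first ported humans appear.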

Sundog writes:

Nice follow-on to Kevin Kelly.

May I suggest inviting Charles Bowden (author of "Juarez: The Laboratory of Our Future", "Down by the River: Drugs, Money, Murder, and Family", "Murder City: Ciudad Juarez and the Global Economy's New Killing Fields", among others.)

The topic I suggest you explore is "the future is here but it's unevenly distributed."

Incognitum writes:

Wow, my two podcast worlds colliding! Tied, if not a nose ahead of Nassim Taleb, for best episode ever.


For further information on all things singularity you can check out 'fast forward radio', or 'the future and you' podcasts.

Even talking with a Keynesian, I've never heard Russ have such a hard time accepting a guest's premise for the sake of argument; let me assure you, the singularity is nothing to fear, although it is also not a post-scarcity Utopia. Time is truly our most fleeting asset, and one we cannot manufacture through any means; thus it will become the currency in an economy where material goods are too cheap to charge for.

I think if Russ could find a way to do a show on the skeptic movement with Penn Jillette & Michael Shermer, the internet would be complete... on the other hand, maybe that sentence is a sign that I've already had one too many glasses of wine this evening. Either way, awesome show!

rhhardin writes:

Coleridge wrote an excellent critique of artificial intelligence in Biographia Literaria, Chapters 5-8.

The essential motto for the whole would be the problem that "Matter has no inwards."

As for the mechanistic idea today, an emulation would lack quantum entanglement, which seems like it's probably important. You can't find out about it until it happens.

rhhardin writes:

Stanley Cavell works the intelligent robot possibilities as a philosophical problem, as a literary way to understand what the criteria are for the words we use.

Try _The Claim of Reason_ starting around page 403.

He's a very entertaining writer, working in the style of Wittgenstein.

Jason writes:

I found this to be a very frustrating podcast. Russ did a good job of trying to probe the ethical and moral implications of copying a human brain, but Robin refused to answer. He acted like Russ was a simpleton for even asking such questions. He never really grappled with the deep problems, like: will a copy of a human brain in a machine really behave and think like a human? Has he never watched a movie like The Terminator, where the smart machines enslave humans? This will be the first thing people think of if this ever comes close to reality, so people like Robin need to answer this question now if they ever hope to do what they propose.

He also doesn't understand technology. Creating an emulator to port software requires you to understand every instruction in the original CPU so that you can make sure you create the exact set of instructions in the new CPU to make it act like the original CPU. It isn't merely a blind copy of the instruction set without understanding. What Robin describes is some kind of fantasy of copying without understanding how the original works.
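The point about instruction-level understanding can be illustrated with a toy sketch (the three-opcode stack machine here is hypothetical, invented only to show that an emulator implements the semantics of every instruction rather than blindly copying them):

```python
# Toy illustration: emulating a hypothetical 3-instruction stack machine.
# Each opcode must be mapped to host code with the same behavior; an
# opcode whose semantics we don't understand simply cannot be emulated.
def emulate(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        else:
            raise ValueError(f"unknown opcode {op}")  # no semantics, no emulation
    return stack[-1]

# Computes (2 + 3) * 4
print(emulate([("PUSH", 2), ("PUSH", 3), ("ADD", None),
               ("PUSH", 4), ("MUL", None)]))  # -> 20
```

Hanson's brain-emulation proposal wagers that neurons play the role of the opcodes: understand the local signal-processing rule of each cell type, and you can reproduce the whole without understanding the high-level program.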

Max writes:

Re: gaping blind spot in foresight?

Questions for Robin:

What is the most likely order in which the following technological milestones will be developed and widely implemented?

A. Fusion, or some other similarly inexhaustible means of power generation with similarly marginal-infinitesimal incremental production/distribution costs.

B. Very versatile, low cost material fabricators capable of converting very inexpensive or free inputs into (at least) most inorganic things that one might want in the foreseeable future.

C. Effective, low cost brain emulation.

I ask these questions because, as much as I enjoyed this podcast, once again I am struck by the apparent inconceivability of any future in which scarcity and the derivative pursuit of wealth accumulation (which would be both meaningless and perverse in the absence of scarcity) ceases to be the presumptive driver of both macro-level change and individual human motivation. I would respectfully submit that this blind spot seems to be closely related to the tendency among (esp. Austrian) economists to interpret money as first and foremost some kind of "objective" accounting device, and to absolutely discount the possibility/viability of any other kind of medium of exchange that is not equally contingent on that bedrock accounting function. IMO, widespread enthusiasm for the gold standard and reflexive skepticism about fractional reserve banking are additional signs of this same deficit -- as is the tendency of some prominent historical and contemporary figures to lapse into the familiar (e.g., have-hammer-ergo-everything-is-nails) error of assuming that accounting standard's universal applicability, and actively embracing/advocating a system (life/society/economy) of positive amorality based on that assumption.

Some suggestions: how about a podcast featuring Cory Doctorow and a discussion of his imaginary post-scarcity, reputation-based liquidity mechanism "whuffie" (Down and Out in the Magic Kingdom, 2003), or podcast(s) featuring some of the economists who have developed "search and matching" models to hypothesize about the historical evolution and possible future adaptations and alternatives to the existing monetary system (c.f., Nobuhiro Kiyotaki, Randall Wright, etc.)?

Ross writes:

Great stuff.

Surprised -- very -- not to see even a mention of Kurzweil here. His techno-slice of this argument is captured in his "Age of Spiritual Machines".

Hanson covered an amazing amount of ground in a short period of time. Let's not try to make his robot-replacement first. Too hard.

Also must consider Wilber. Ken Wilber, author of Boomeritis. Idea: can't make machines smarter/wiser than we are. Faster, more precise, tireless? Sure. But true intelligence and 'wisdom'? They start where we are. (You need his "spiral dynamics" based theory of consciousness to get the juice of the argument, here.)

As humans do not become automagically exponentially more intelligent, wise, capable with each generation -- our intellectual growth is more piecewise linear -- why should we expect our magic machines to evince exponential intelligence and wisdom? Not likely. Possible, sure, but based on what?

Finally, the issue of ethics. Is it really ethical to "reboot" or "wipe" a machine that is a strong, conscious, aware copy of "me"? A 'machine' begs (as I reach for the CTRL-ALT-DEL) to "let it live?" Major issues here. Resolvable, sure, but not easy. Magna Carta level stuff.

Fascinating topic. Fabulous first pass at it. Hope there's a Hanson book on the topic soon.

Matt S writes:

I think the most important point in the podcast was that artificial minds could differ drastically from human minds. I wish this was explored more fully.

Also, there is no precedent for using emulation as a means of achieving an artificial version of some functionality present in nature. Technological innovation might be inspired by nature, but a high-res digital camera is not an emulation of an animal's eye, a tractor is not an emulation of a horse, an airplane is not an emulation of a bird, a water bottle is not an emulation of a camel's hump, etc. It seems wrong to assume that if we ever manage to create artificial intelligence, it would be achieved by copying cell by cell what happens inside a human brain.

An artificial version of something is usually achieved by creating something much simpler and at the same time much more powerful than whatever nature has concocted.

Jason writes:

The idea of the brain as a primarily chemical machine has support coming from medicinal chemistry. There are many very simple chemical compounds that can affect memory, emotion, and perception. The fact that these compounds can have these effects is strong evidence that the mind is chemical in nature.

Martin writes:

In their book "Memory and the Computational Brain", Gallistel and King suggest that the neural network might not encode many memories, that much memory exists at a molecular level rather than a cellular level. If so, simply recording the topology of a neural network does not capture the valuable characteristics of a human brain. Steve Fritzinger also alludes to this possibility above. The "mind porting" idea might be possible, but it might not be as simple as recording neural network topology, the simple "brain cross-section map" that Hanson describes.

If mind porting and the other necessary technology for rapidly creating human "replicants" are possible, then Hanson seems to imagine a world filled with many replicants (in ever increasing number) effectively enslaved to their owners or laboring for ever diminishing wages. The fabulously wealthy of this world somehow assert mastery over the replicants and their produce by writ of their title to other resources, like land. I doubt this outcome, but the prospect raises interesting questions.

What's the difference between these imaginary replicants and people working for relatively low wages now, people producing goods consumed by people with title to resources like land? As a land owner, do I consume the marginal value of my land, or do I consume the marginal value of guns threatening to shoot people who challenge my mastery of the land? The same question applies to a patent on mind porting technology.

Title to land and patents are forcible proprieties. For better or for worse, the value of land ownership is the value of a monopoly enforced by a state. In my neck of the woods, enforcing these titles seems for the better, but "whose betterment?" is always a legitimate question. What sort of state enforces a system of mastery over resources entitling relatively few land owners, laboring hardly at all, to the produce of countless replicants laboring only to subsist? Is this state sustainable?

Why would the replicants be subject to this state? If they're so numerous, why wouldn't they overpower any state enforcing their titles and establish new, weaker titles, less valuable to the title holders? In other words, why would the value of labor fall so precipitously compared with the value of these titles? The replicants presumably need land on which to labor. Doesn't each new replicant require less land? Why does the cost of a required unit of land not fall?

Hanson seems to imagine the most submissive people replicating themselves most successfully in a market for replicants, but this process also raises interesting questions. These people will labor for little in a highly competitive labor market. Are such people also willing to be subject to a state enforcing strong monopoly rights to other resources, or will their submissive nature impel them to establish a state enforcing weaker rights over the means of production, rights entitling resource governors to less consumption than rights we observe today?

Steve writes:

I want to ask Robin Hanson why he thinks current laws and norms regarding property rights would still hold. He suggests that the best way to prepare for his near-zero-value-labor scenario is to maintain other resources like land, patents, and stocks. At one point in history political power was also an object of ownership. He is trying to prepare like a tsar or king tightening his political grip out of foresight of the industrial revolution and populist governmental systems.

I think comparative advantage currently helps maintain property rights. Imagine a scenario where property rights had a quick expiration. This would probably be needed if lifespans were long and economic time scales decreased. E.g., Robin has one week to make effective use of his land rights, then he has to compete for repurchase against the 1000 virtual (fast-operating) minds who owned a similarly valued resource. Who will have made more value? Which economic system would come to dominate: the one that reallocated investments quickly and often, or the one that left wealth in the hands of Robin Hanson or Russ Roberts, who want to use their wealth to help the poor and the environment?

Currently, with few people and lifetime growth doubling, it is cheapest and most efficient to keep current property-right laws. But property isn't sacred; it is a norm. I think Robin Hanson, for the sake of individual people, had better hope we don't have enough resources to accomplish what he is imagining. I think it is possibly "good" in some grand scheme of things, but it wouldn't be fun. Alternatively, I hope Russ's analogy to immigration is accurate; it seems hard to think about. With respect to the economy, I can't see that people are different from any other capital. If people become fundamentally less valuable than other resources, and aren't the only actors, then the direction of ownership will change. Land will essentially buy and sell people.

But Russ is arguing that the type of relevant resources will change and we will still be enriched through trade; there will be more people and likewise more of the things that have value. In that case I don't think the relevant investments exist, and again Hanson's stocks will be obsolete.

Ward writes:

It seems more likely to me that we will find a way to maintain our bodies for some exceptionally long period of time long before we are able to port our brains. The economics of everyone living forever is a different singularity. I haven't finished listening to it all, but it seems like that topic would make more sense to explore.

paul corrado writes:

Thanks Russ for the great cast and so many great ones in the past! You have had a huge impact on how I see the world! I just ran across a wonderful YouTube video right after listening to this podcast, with Bill Gates (Techonomy 2010: Reinventing Capitalism), and he mentions the technological singularity in the middle of part 4. The entire video (all 4 parts on YouTube) is great and many other people here may like it. Just wanted to share! Thanks.

Robin Hanson writes:

Steve F, once we learn how each type of neuron works, we don't need to study each individual neuron of that type.

Sean and Steve, huge economic consequences of machines that have roughly the same input-output behavior as human brains follow regardless of how you philosophically interpret such behavior.

Peter, digital beings, like analogue ones, still require supporting physical resources, which are scarce, and which conflict with nature-preserve uses.

Mads, yes, within a million years we'll clearly hit growth limits. But even well before then wages could fall.

Nick, I said total wealth would increase greatly.

Arnim, brains have ceased all signaling activity, and restarted just fine.

Jim, a decent SF book, but not greatly like the scenario I paint.

Scot, harsh perhaps, but with many trillions of profits at stake, it will happen.

kenyata, the econ consequences follow regardless of if a copy is "really" you.

David, yes exponential growth won't continue forever.

jim, I did say ems could be sped up. What the most productive speed would be is not clear.

Max, neither fusion nor fabricators would end scarcity.

Ross, I made no claims about increasing intelligence.

Matt, I agree we have not ported from nature much so far. We are starting to do so in creating artificial cells. And my guess is that is the fastest route to brain substitutes.

Martin, yes some state may be encoded inside cells. The prediction of low wages is robust to a revolution grabbing property. There isn't enough to grab to change wages much.

Steve, yes theft/"redistribution" is possible. I don't recommend it, and it's not as easy as it sounds.

Martin writes:

Political reform does not grab property. It changes the particular standards of propriety enforced, as when states ceased enforcing hereditary slavery for example. The wage gains possible through reform need not be a decisive factor.

Glenn writes:

This discussion must include Bill Joy's much darker view of what genetics, nanotechnology, and robotics might bring. The URL of the Wired article from 2000 is below.

http://www.wired.com/wired/archive/8.04/joy.html?pg=1&topic=&topic_set=

John Dawson writes:

Dr. Roberts and Dr. Hanson,

I am a big fan of the podcast and enjoyed this one as well. I also find the technological singularity to be a particularly interesting idea.

The presentation of scanning and porting a brain as the quickest route to AI I found unconvincing but, assuming this is correct, it would appear that the resulting intelligence is only temporarily very similar to the original.

Let's say we copy Russ. (He seems to be the guy everyone is choosing to copy. We could have hourly EconTalk podcasts around the clock!) The copy, Russ2, exists in some sort of computer hardware. So Russ2 doesn't need to, and probably can't, sleep, drink, or eat. There are no biological influences on his mental processes: no hormones, no blood-sugar changes, no distracting aches, pains, or itches. Does he think about sex as many times a day as they say the average man does? If he does, isn't he awfully frustrated? If he doesn't, how accurate is the copy?

Since he is running on computer hardware, he can think as fast as the hardware allows, and that is doubling every 18-24 months today. He also need never forget anything. (Can he forget anything?) So, sixteen years after Russ2 is made, he is thinking at least 256 times as fast as Russ1. He has been awake the whole time and has experienced the equivalent of 500 years of Russ1 "think time" to work, learn, or whatever. At that point, every day he can do as much mental work as Russ1 does in a year. I don't know what Russ2's mind is like at that point, but it is surely something very different from what we think of as human today. I also don't know what he will want out of life, but fetching Russ1's laundry doesn't seem like his dream job.
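The speed-up arithmetic in this comment can be sketched quickly. A minimal check, assuming (as the comment does) that the emulation's clock speed doubles discretely every two years over sixteen calendar years:

```python
# Subjective "think time" for an emulation whose clock speed doubles
# every two years, run for 16 calendar years (eight 2-year periods).
speed = 1                # multiple of real-time thinking speed
subjective_years = 0
for _ in range(8):
    subjective_years += 2 * speed   # two calendar years at the current speed
    speed *= 2
print(speed, subjective_years)      # 256 (final speed), 510 subjective years
```

The round figures in the comment (256 times as fast, roughly 500 years of experience) fall out of this discrete-doubling assumption; a continuous-doubling model would give somewhat more subjective time.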

Anyway, another interesting discussion but I agree with many of the other comments that a discussion with Ray Kurzweil or Vernor Vinge on this topic might illuminate things a little more.

Dave S. writes:

This reminds me of a recent podcast featuring Daniel Akst on 'Counterpoint' (www.abc.net.au/rn/counterpoint/stories) where he discusses friendship and mentions an Isaac Asimov novel, The Naked Sun (written in the late 1950s!), in which a robotic world sees no humans interacting with each other except under the most extraordinary circumstances. The people use avatars to communicate. See also the Daniel Akst essay in 'The Wilson Quarterly'.

Max writes:

Re: "neither fusion nor fabricators would end scarcity."

That's trivially true for unique objets d'art, specific bits of real estate, etc., and may be a plausible general prediction, but I suspect that the key factor supporting its (nontrivial) plausibility is the very blind spot that I identified. Those who benefit from the perpetuation of scarcity may seek to continue the practice of planned obsolescence or impose some other novel form of artificial scarcity on the rest of the sentient population even if/when technology eliminates all current/imaginable (and nontrivial) supply constraints; and perhaps they'll succeed. But it seems to me that making such a prediction is equivalent to predicting that a nontrivial share of that future population will not only prefer but actively seek to perpetuate the condition of scarcity. Q.E.D.


Stephan writes:

That was a very interesting podcast.

Reading the comments though, leaves me with a feeling that a lot of points were misunderstood, which probably is due to the idea-density of the podcast.
It would be great if we had a regular forum with sub-forums for each podcast and then threads about the different points people want to discuss.
Then we could actually have a discussion. Right now, this is mostly a bunch of unrelated statements.

Has integrating a forum with the podcast ever been discussed?

Anyway, thanks for the great podcast.

john writes:

The "brain in a box" scenario isn't a convincing route to AI.

While you certainly could make an exact replica of a human mind in a computer, there is no evidence that such a mind would operate faster than a human mind already does just because it's running on "faster hardware." The brain isn't software running on slow "meat" hardware; it's a physical process that behaves as it does because it obeys certain physical laws that already operate as fast as possible.

Humans have never perfectly simulated a physical system--all models used for prediction, for instance, work only because of extreme simplification. If you wanted to predict the weather perfectly, you'd have to model the entire earth and all of the external factors that affect it. Even running on the fastest possible hardware, such a simulation would probably lag behind the real thing.

The only way to predict exactly when a dropped ball will hit the ground--EXACTLY--is to know all of the gravitational forces affecting it, the trajectory of each air molecule, etc. In real life, you don't actually need these things to get a useful approximation, but since we know little about consciousness and how it works I don't think we can willy-nilly make simplifications.

It is very unlikely that you could make the universe run faster than the universe, and it's a fallacy to make some hardware/software analogy between computers and brains.

Martin writes:

Eric at 10:21 is right. If we take Hanson literally, a human replicant differs little from a human (non-replicant) with fewer legal rights than other humans. Hanson understands this point, since he distinguishes the wealth of future humans largely in terms of their statutory mastery over other resources, like their titles to land and broader monopolies like patents and copyrights.

If I copy my mind and sell the copy, is my copy then entitled to sell a copy of itself, or is some Original Martin entitled to all copies of copies? How does Original Martin enforce this claim over the conflicting claim of a copy?

I doubt that the Originals collective could long enforce standards of propriety entitling them to such a privileged status. If they're willing to enforce these standards, their copies presumably are also willing to enforce conflicting standards, and the Copies collective is necessarily much larger.

I also doubt the willingness of Originals to enforce their own privileged status. Maybe Copies are more submissive, but what does "submissive" mean in this context? Won't Originals (and Copies copying themselves) feel toward their Copies as parents feel toward their children? Isn't a very submissive personality essentially an ideally Good Parent? Doesn't a Good Parent want its children's prospects to improve? Parents have never treated their progeny like slaves. This idea is an historical fiction promulgated largely by politicians seeking to substitute the state for the family.

In Hanson's scenario, does human social organization (including replicants) become more like a bee hive or an ant colony? As Deborah Gordon notes in my favorite EconTalk, being the Queen of an ant colony is not at all like living in Buckingham Palace.

On a more practical note, enforcing patents is notoriously difficult, and enforcing patents while the population of intelligent, productive factors doubles each month seems even more difficult, so Hanson assumes the persistence of standards of propriety that might not be sustainable in his speculative future. Both software patents and international patents appeared during my lifetime. Neither is very well established, and both might never be well established. A ceteris paribus argument involving patent rights is highly questionable. Land and similar resources are truly scarce, but patented forms are not.

Following an interesting tangent, Ward at 6:17 is also right. Hanson's speculation is fun to ponder, but it seems less likely than Ward's in the foreseeable future. We're already approaching a crisis in the political economy of costly life extension. It's the Medicare problem.

When a class of wealthy people (like baby boomers entitled to Medicare) is entitled to practically indefinite life extension supplied by many other people who cannot be so entitled as a practical matter, the standards of propriety seem unstable. Who gets the scarce immortality? I doubt that "drug patent holders" is a realistic answer. Pfizer can't even enforce its patent on Viagra effectively. A patent on life-extending drugs is laughable. Anyone who can produce these drugs will. You might as well try to enforce a monopoly on heroin production.

OS writes:

Hello Russ,

I have been interested in this topic for over 15 years. I find it very impressive how you have come up with so many insightful questions in such a short time. I bow to your intellect.

Perhaps you might consider podcasting about the question of "free will" in an environment of strong incentives.

Many people have expressed disgust, boredom, or disbelief. I applaud them for listening anyway -- now at least they are aware that this fringe topic exists.

Please allow me to add a few pieces of my own:

* Singularity:
this is the point in time beyond which all bets are off (first explored by Vernor Vinge) because things change so fast

* some roads to transhumanism:
- human gets enhanced (chemically, genetically, mechanical prostheses, computational prostheses)
- AI develops beyond human
- human gets uploaded into computer

* ethics:
- should robots/computers/androids/clones/pets get human rights (monkeys have been granted human rights in Switzerland, I believe)?
- should animals be uplifted to higher intelligence?
- how does it change my motivations if I know I'm living in a simulation, or know I'm a clone (read Nick Bostrom, as mentioned below)?

* intelligence
- depends heavily on morphology (what senses does the AI have, what are its motivations); I personally expect the first AI to be a financial-market trading app that has financial variables as its senses (I base this on the money and brainpower poured into it); a close second could be Google

* speed of change
- I believe patent terms should be shortened regularly, to be in alignment with progress

* movies to watch:
- "Blade Runner" (do androids dream of electric sheep?) -- machines that are almost indistinguishable from humans: what makes a human human?
- "AI" -- child goes into coma, parents get replacement robot, robot loves parents, biological child wakes up, robot gets abandoned: should machines with feelings be given human rights?
- "BBC Age of Spiritual Machines" -- find these interviews on youtube
- "Ghost in the Shell" -- two forms of transhumanism (human enhancement & technical AI meet and evolve)

* recommended reading:
- Ray Kurzweil (mentioned above: age of spiritual machines, or perhaps "the singularity is near")
- Nick Bostrom, head of the Future of Humanity Institute at the University of Oxford: the simulation argument and its moral implications; Russ, you're going to love this

* components of a brain:
- you mentioned the functioning of cells and the actual layout of connection plans; what I was missing was the current state; the human brain is not built to be re-booted, so if you start it with random content, it will not run (my hypothesis)
- without forgetting, we may go crazy
- can brains be combined into something better via reduced latency and increased bandwidth communications (Hive Minds anyone?)

* Religion
- Bertrand Russell said (paraphrasing): religion is the domain of beliefs, science is the domain of knowledge, and philosophy is the area in the middle trying to sort out what goes where, being diminished by research
- Arthur C Clarke said (paraphrase) any sufficiently advanced technology is indistinguishable from magic
- Kurt Goedel has shown that questions can be asked which have an answer but cannot be proven within the same system -- we are in a particular system -- questions exist that have an answer that we cannot know -- those are the domain of belief -- AI is not going to make belief obsolete -- divinity can be a topic of belief without need to resort to distinction being bestowed on humans
Thanks for your time!

(I'll probably think of many more once this is posted)

Eric Olson writes:

Having studied economics at the graduate level and being a technology guy professionally I found this conversation fascinating. The last half hour - thinking about whether or not you would like to live in this brave new world - reminded me of Kurt Vonnegut's first novel, "Player Piano."

In Player Piano Vonnegut imagines a world where the only working folks are the scientists and managers. All other folks have been replaced by machines. Vonnegut essentially suggests throughout the novel that even if everyone is better off from the machines the folks that don't have work aren't better off because they don't have any purpose. Hanson seems to dismiss purpose as a driver by suggesting that if everyone had relatively more wealth they wouldn't mind that machines did their work.

Perhaps in time new purposes would emerge to ignite the souls of those put out of work by machines but I'm not sure of that.

Side note: What is also interesting about the Player Piano reference is that the machines in the novel were just replications of the movements of certain master craftsman. This is a similar concept to porting the brains of various folks.

Vendy writes:

I am joining the minority of listeners who were not pleased with this podcast. Frankly, I forced myself to listen to the very end, while swimming in the sheer repulsion of the dystopic subject matter. I admit, I do not understand (and have no desire to do so) even half of the ideas expressed by the guest. "Blade Runner" mixed with "Matrix"?

Martin writes:

OS, You misunderstand Goedel (or I do). People often describe his undecidable proposition with "true but unprovable", but this description is presumptuous. The proposition is undecidable within Principia Mathematica, not true, i.e. it may be either true or false without contradicting any other assumption of Principia Mathematica.

Asserting the "truth" of the proposition involves some intuition you take for granted. You may as well deny your intuition and declare the proposition "false", as Hofstadter does in "Goedel, Escher, Bach" for example. You have a consistent system either way.

You could also intuitively assume the non-existence of imaginary numbers, but "there exists x such that x*x = -1" is not false within simple number theory. It's undecidable. Assuming the proposition false simply rules out a counterintuitive theory of complex numbers. The counterintuitive theory turns out to be useful, but its utility is not about "the truth". The simple, intuitive theory is also useful.

M.Savard writes:

Did anyone think of Battlestar Galactica when they talk about robots taking over... I'd recommend a viewing, Russ. The last one, not the 70's version ;-)

Steve Fritzinger writes:

Robin,

(Sorry, I called you Robert in my first comment.)

You wrote, "Steve F, once we learn how each type of neuron works, we don't need to study each individual neuron of that type."

This is incorrect.

A neuron's behavior is not determined simply by its type and its interconnections. Each neuron has an individual, learned state which controls how it responds to its up-stream connections and what signals it sends its down-stream connections.

Without knowing the learned state of each of Russ's neurons, you could not duplicate Russ with even the best scan imaginable.*

Of course, this doesn't affect your economic analysis of "Brain in a Box" world. You could have said it was magic and your arguments would succeed or fail just as well.

But, barring magic, the details of how we would get to BiaB world do matter. My argument is that, starting from where we are today, getting to BiaB world requires us to by-pass so many much more interesting and useful technologies that we won't bother going there.

* I guess you could say your scanner was capable of atomic level resolution, but then your emulator would have to emulate the wet chemistry of something like 10^27 or 10^28 atoms. Compared to that, building intelligence from scratch would be a cake walk.

emerich writes:

Very stimulating podcast. Yes, some of the implications were horrifying but I liked Hanson's willingness to stick to his guns, intellectually. I also appreciate his (so dry it's easy to miss) humor.

All that said, doesn't anyone remember that the real point of Mary Shelley's book Frankenstein is not that the monster was mean and killed people, but that the monster's creator thoughtlessly created an intelligent monster, condemned to harsh loneliness? How can we negotiate with a computer copy of ourselves that doesn't yet exist, I wonder? I might think living in a box forever is a good deal under some circumstances, but if I'm wrong, is it OK to pull the plug on my unhappy copy?

Dunno, but all very thought-provoking.

PrometheeFeu writes:

I think Hanson assumes too easily that somebody would make many copies of themselves. That would be highly dependent upon the property-rights regime regarding these replicas. If they are just other human beings with the same rights and privileges as anyone else, you cannot "rent them out for 10c an hour," because they might just decide to refuse to work for you. The answer that if they are really cheap, somebody will make many copies of themselves is unsatisfying: if not many people are making copies of themselves, how does the technology to do so become cheap in the first place?

Also, the idea that income per capita would fall seems possible but not necessary. If we had a substantially larger population of people with long-term time frames (because they are effectively immortal) who can think really fast (which is what those replicas are), technological progress would probably accelerate rapidly. That infamous A (the measure of all economists' ignorance) would shoot into the stratosphere and quite possibly cause economic growth to outpace population growth.

Finally, I think that the idea of "porting" creating the singularity is not the most likely. Most people associate bootstrapping (creating a machine that can create a machine smarter than itself and so on and so forth) with strong AI (human-like intelligence) but that is simply not necessary. Weak AI (the kind that drives cars or does speech recognition) could potentially bootstrap itself (If it could learn to program) to solve very complex problems and give us huge technological progress. And best of all, it doesn't have the same kind of ethical problems as making copies of yourself.

ayoub writes:

Hi Russ,

Happy new year and thanks for Econtalk.

While I was listening to your discussion with Hanson, I remembered a great movie called Moon, with Sam Rockwell.

The film is about Sam Bell (Sam Rockwell) who is working on the moon to extract some minerals sent to the Earth to generate cheap energy.
This guy thinks he's a contractor for 3 years but he is actually a clone programmed to have an accident and die.
Sam Bell didn't die ("the plan" didn't work) and meets the new Sam Bell - the new clone.
This movie is about the ethics of reproducing unlimited numbers of people to work cheaply.
Here is the link on IMDb

Ayoub

Seth writes:

I'm about half way through. Very interesting.

I'm reminded of an insight I read from George Gilder (not sure if it was his originally) about tech companies.

As we bump up against the limits of something (like memory storage space), there's a lot of money to be made by coming up with stuff that economizes on that constraint (like disk compression software), until someone figures out how to make memory storage nearly unlimited. The economizing companies go out of business and the new companies that eliminated the constraint do well.

That seems to be a similar dynamic at work with the singularities experienced by humans. We could just move as the climate changed, until there were other humans already where we wanted to move; hunting-gathering space became a constraint, and farming was the way to remove it.

Farming served our purposes well until we started running out of land to farm. That became the constraint, and the industrial revolution figured out how to economize on it.

It seems the singularities were about bumping up against a constraint. I wonder what constraint we'd bump against for the next singularity? Perhaps there are thoughts on that in the second half. Just wanted to get my Gilder comment out there.

Jim F writes:

Robin seems to be about the most shallow thinking futurist I've ever heard.

Consider the idea of world output doubling every two weeks. He makes a plausible case for this, based on past performance. But that implies 26 doublings a year, a growth in output by a factor of 2^26, roughly 67 million, every single year. Even allowing for population growth, within a couple of years each of us would, on average, be producing hundreds of thousands of times today's entire world output. Over the course of a lifetime the cumulative factor runs past 10^500. (The number of protons in the observable universe is estimated to be about 10^80.)
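The doubling arithmetic here is easy to check. A quick sketch, assuming 26 two-week doublings in a 52-week year:

```python
# World output doubling every two weeks: cumulative growth factor per year.
doublings_per_year = 52 // 2              # 26 doublings in a year
annual_factor = 2 ** doublings_per_year
print(f"{annual_factor:,}")               # 67,108,864 -- about 67 million
```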

But this is arguably still reasonable, because it's truly an artifact of the measurement. How many flint arrowheads is a web site worth? Still, if we accept the doubling idea, we have to accept that the output we are creating consumes little or no energy; and even in the realm of pure thought, energy eventually becomes a constraint, since reducing the local entropy of atoms costs energy.

The idea that in such a universe you can hold any of our legal, cultural or even economic norms constant is far from obvious.

Additionally, the idea of emulating a human brain is silly, and reminiscent of early attempts to fly by building devices that flapped wings. We didn't get to the moon by making a giant mechanical bird.

chitown_nick writes:

An interesting podcast, but it seems to be mostly a tangent unto itself. I am intrigued by the notion of the technological singularity, but the analysis of how it would come about seemed to be speculation (albeit admittedly so) down the AI path.

The notion that 70% of economic activity goes to pay for labor (18:00) seems strange to me. I would assume 100% of economic activity (traced back far enough) goes to pay for labor. A pencil, for example, is (in reverse order) sales, shipping, assembly, milling, logging, forming, mining, finance, and... nothing else. Even commodities that are bought as "materials only" require an exchange between people. If I pick up a nugget of gold off the ground, there is no economic activity. If I sell it to someone else, then economic value is added. It pays for my labor in finding and acquiring the gold; it pays for the value of the gold by paying for the assumed value of my labor in acquiring it. If it takes me 5 minutes, great! If it takes a year, I may spend my time and effort elsewhere. The gold is still there, but the economic activity is strictly (in my mind) the exchange as a result of labor.

Anyway, that said, I feel like the possibilities (energy, surveillance, transportation) are all about equally likely. Another way to look at the two previous Singularities is that each took something that was very expensive and made it very cheap.

Food was hard to come by, and dangerous. Agriculture changed that. Manufactured goods were time consuming, and terribly expensive. The industrial revolution changed that.

I would posit that heavily manufactured goods were a small part of the global economy in 1600. When the barriers to manufacturing were overcome, that changed, and industry became hugely important. Simply put, something need not be important now for it to vastly change the future economy if it becomes cheap.

For a simple example, Robin mentioned fusion. If energy became incredibly cheap and abundant, transportation costs would pose less of a barrier to trade; there would be less standing in the way of people traveling, or of using high-speed, high-energy means to travel (airplanes instead of driving). Business could speed up; travel and human interaction could accelerate and become much more complex and interconnected. Machines that take too much energy to run to be designed today could be built, and they could produce anything.

This is but one example, but I think the podcast had an interesting premise, followed by a very strange predictive speculation that I was honestly surprised to hear from a forum that usually dissuades people from believing that planners can predict the future.

Raja writes:

Congrats on another year of Econtalk. The economic revolution envisioned is happening right now and this podcast is part of it.

These future talks are fun, but too often these science-y types don't have enough background in the theory of science and tend to make goofy statements. There's strong theoretical reason to believe we won't be able to create "artificial" intelligence. Fortunately, we already have a method of creating intelligent beings, and it's a lot more fun than writing software.

Milo writes:

I enjoy these occasional trips down other topics, as long as they don't overshadow the main theme of Econtalk. This one actually made me laugh out loud - Prof. Roberts, it had not occurred to me that you were religious and would have a view of the brain as more than the materials! I confess, there's no such spirituality/mysticism in my worldview, and among my peers it's quite looked down upon as primitive thinking ("we don't believe in magic!"). This major difference in perspective might be an interesting topic to ponder at some point, in a future non-economic podcast that steers more toward philosophy.

Prof. Hanson, I think everything you presented is within the realm of possibilities, but still extremely hypothetical - particularly in terms of how society will respond. I do think the issue of beating TIME & human processing speed is the critical advantage to the ported brains (as mentioned earlier by Jim). I did find it interesting that you both kept referring to the ported brain as "them", whereas I optimistically look forward to the ported version being ME as I leave this carbon version to die in slow-moving human time.

John Wiles writes:

Enlightening podcast, as usual, Dr. Roberts. As a physician, I agree that the brain is simply a soup of chemicals combined in a very specific arrangement, resulting in emergent order. While I agree that we can recreate all the proper interneuron connections of the brain, I disagree that we'll be able to design accurate models of how neurons behave, given their extreme plasticity and complexity. I liken this to being able to recreate an economy - we could probably create all the connections of actors in an economy but would fail at modeling how each actor behaves.

However, I do believe we'll eventually create artificial intelligence. This will be done via the same lever that evolution has used to make life and human brains in the first place (non-artificial intelligence???). By trial and error. Basically, we'll get computers to evolve. First, we'll need different building blocks. Using the binary system to create AI would be like using hydrogen atoms to create life. We need the equivalent of carbon - small building blocks that can combine to make more complex things - things more complex than any of the parts. Second, we'll need to create an environment where each part combines randomly and random combinations are chosen for specific reasons. Of course our basic rules could improve over time as we discovered what type of selection systems result in the most intelligence.

Thoughts?

Thanks for all you do.

Michal Kvasnicka writes:

I agree with Russ that even if a world like that ever came to be, real wage rates would not necessarily decrease. My argument goes like this:

1. The wage rate is determined by labor supply and demand. Labor supply is (say) insensitive to the wage rate and is only a function of the number of workers (human and robot). Demand for labor is given by the marginal productivity of labor (we can neglect the role of prices at the aggregate level; say there is just one type of product and the wage rate is quoted in units of this product).

2. The marginal productivity of labor (the schedule or function) is a function of technology and the amount of available non-labor resources (capital, land, etc.). Technology is (among other things) a function of the division of labor: more people can (a) better divide the labor and hence use a better technology, and (b) create more knowledge.

3. Let us say that the number of robots is increasing, which increases the labor supply. It would decrease the wage rate only if the demand for labor did not increase by a comparable or faster rate. However, the demand for labor would increase if there is (a) better technology or (b) more capital.

4. If there are more robots, (a) they produce more capital equipment, and (b) they produce more knowledge and divide labor further, both improving technology. Both (a) and (b) increase the demand for labor. Thus the overall effect on the labor market is ambiguous: the wage rate may rise or fall.

5. There is one more reason to guess that the amount of capital per capita would not drop: robots are labor and capital at the same time. So if capital per capita got scarcer (and hence more expensive), fewer robots would be produced, which would slow their growth rate.

6. Land (and similar resources) could be a problem, since its supply need not be expandable at the necessary rate. But (a) this might be compensated by growth in technology, and (b) if it decreased the wage rate, it would also slow the production of new robots.

So the overall impact on wage rates is not easy to guess and need not be that pessimistic.
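To see point 4's ambiguity concretely, here is a toy Cobb-Douglas sketch (my own illustration, not from the podcast; the parameter values are arbitrary):

```python
# Toy model: Y = A * K^alpha * L^(1-alpha), wage = marginal product of labor.
alpha = 0.3

def wage(A, K, L):
    # w = dY/dL = (1 - alpha) * A * (K / L)^alpha
    return (1 - alpha) * A * (K / L) ** alpha

w_base = wage(A=1.0, K=100.0, L=100.0)

# Robots double the labor force while capital and technology stand still:
w_crowded = wage(A=1.0, K=100.0, L=200.0)

# Robots double the labor force AND double the capital stock they build,
# while finer division of labor raises technology by 20%:
w_boosted = wage(A=1.2, K=200.0, L=200.0)

print(w_crowded < w_base < w_boosted)  # True: the wage can fall or rise
```

Whether wages fall turns entirely on whether capital and technology keep pace with the robot labor force, which is exactly the ambiguity in the argument above.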

Have I made any mistake?

(And, BTW, lovers of this topic might find the science fiction of Greg Egan interesting.)

(And apologies for the long post.)

AHBritton writes:

Robin Hanson,

As a big fan of Sci-Fi as a child, I definitely enjoyed this topic. However, I have a few concerns about the structure and assumptions of your argument.

First, I feel that your division between developing AI via "formulas for intelligence" and simply porting a human brain is a false one.

For one, even in our current time there is an overlap between the simulation of neurons and the development of artificial neural nets, as well as many other overlaps between mapping brain activity and attempting to recreate such activities by design.

In addition, you seem to overlook one almost unavoidable fact. It will not be the case that one day a scientist will wake up and realize he can port an entire brain. Instead, as has already begun, it will be a step-by-step process: modeling small groups of neurons, then larger groups, eventually whole sections of the brain, and yes, probably eventually the brain in its entirety. BUT by the time that happens we will most likely have had a period of time in which to experiment with how various groups of neurons react and interact, what functions they provide, and what the effects are of removing neurons or altering them in one way or another.

This will undoubtedly provide a great deal of insight into how (or at least where) various functions of the brain take place.

Now I agree that it may be possible that we never "fully" understand how to "create" intelligence from the ground up. But as you pointed out with physics, despite not having a complete grasp of the subject we are nonetheless able to manipulate, postulate, and estimate very accurately.

Even if we don't fully understand it, we may gain the ability to replicate groups of neurons that are especially efficient or designed for specific tasks, such as image recognition, spatial reasoning, etc.

Another area that seems to have been completely overlooked is the human body's value as a worker. Sure, the ability to make a brain in a box would be a marvelous discovery, but how many jobs could this brain in a box perform?

In many ways Stephen Hawking is something of a brain in a box; luckily, he is also highly intelligent and employed in a field which relies merely on rigorous thought and logical analysis. No offense to Mr. Hawking, but beyond his chosen field people would probably not clamor for his abilities to, say, cut hair, drive a car, cook a meal, clean a house, pick flowers, etc.

His lack of fitness for these tasks has nothing to do with the abilities of his mind; it is instead dependent on the agility and dexterity of the human body. If you really believe that this porting will take place with little understanding of how the brain works, then it seems to me this brain-in-a-box invention will be useless.

What would be required, at the very least, is a complex understanding of how to communicate signals to and from this synthetic brain: how to input the appropriate visual signals (which requires a very sophisticated understanding of how the eye transmits signals through nerves and how the brain receives and interprets them), how to input audio, possibly touch.

In addition, unless somehow we have the ability to make entire bodies for these brains to inhabit (which may be a science even further away than brain porting), we will have to figure out a way for these brain-boxes to interact with the world.

For instance, in order for them to work as barbers, scientists will not only have to understand in very fine detail how the brain communicates signals for muscular contraction and control (hair cutting requires a high degree of precision if you don't want to cut off someone's ear); engineers must also be able to create an interface that can interpret these brain signals and use them to control a machine that itself has a great degree of precision and dexterity.

For these kinds of reasons I find it absurd to postulate these ported brain-boxes replacing human employees without also postulating a very high degree of understanding of how the software (i.e., the brain) works. Otherwise these brain-boxes will be just that: blind, deaf, dumb (speechless), quadriplegic, unfeeling, non-tasting, immobile boxes that happen to contain a brain but are of little use.

I think replicating the human body would be the second portion of this human-replacing project, and it may be as difficult as, if not more difficult than, the porting of the brain itself.

Personally I find it much more likely that we will build small simulations of clusters of neurons which we will utilize to gain insight into how they process information and signals, slowly increase the amount and types of neurons in these models until they have the possibility of serving useful purposes such as reading handwriting, creating more intelligent cars, learning computers that anticipate needs, or who knows what else. In many ways probably just much better, more dextrous and flexible versions of programming that has already been taking place.

Only after all of this continues for a while will we reach the point of creating an entire brain's worth of simulated neurons. This "brain" probably won't be a replica of any particular person's brain (who knows), as we will most likely know enough about the systems by then to create some form of general brain based on the examination and understanding of the many brain modules we will have studied and produced before.

I am not entirely sure of the state of technology for scanning an entire living person's brain at the detail needed for replication, but I am betting that it is also somewhat far off. Am I wrong?

John Wiles writes:

AHBritton,

If we could map the complexity of the brain itself, we surely could map the neurons of the spinal cord and optic nerve. We could map motor neurons and muscle fibers, retinal cells and free nerve endings. If we could map the brain, there's no reason we couldn't map the peripheral nervous system as well, so I don't see your argument as a valid one.

Building mechanical machines that interface with electronic brains would be easier than building the electronic brain in the first place.

I still say that creating a computer program that accurately models neuronal physiology is the most challenging factor. For example, we learn by creating new neuronal circuits and interneuron connections. This is controlled by how neurons react to their chemical environment. Could we create computer models that recreate this behavior? I'm not convinced.

AHBritton writes:

John Wiles,

I agree somewhat with your argument here. There are a few points I'd like to make, however. One is that even if we had the peripheral nervous system mapped, that doesn't, in my mind, equate to being able to utilize it.

The human brain as it is now is designed to interface with a human body. Sure people can adjust to things such as cochlear implants, giving them senses they have not previously had and that their brain might not be initially prepared for, or using brain waves to control external cursors, devices and machines.

However, this seems a long way from understanding how to interface directly with the neural inputs, and from creating mechanics for this brain to operate that are not excessively unwieldy and do not require many years of training to adapt the brain to the new situation (mechanical arms, ears, legs, etc.). It seems the brains (if they are successfully created) would only be as useful as the bodies they control.

In addition, if these brain-boxes are really thinking, feeling copies, do you really think they are going to be satisfied with some inadequate, highly immobile machine body that can't taste or feel tactile stimulation at all, or with much fidelity?

Another way of putting it: many jobs humans do don't necessarily require the most advanced mental faculties (as Hanson points out, trained monkeys could do many human tasks if they were more cooperative). What they do require is our very mobile bodies, which are able to quickly manipulate small objects, run around, balance, and move with agility. If you have seen the Japanese robot that is able to walk on its own, you will see how far we are from even that task.

Something I didn't talk about before, but I'll briefly mention, is the role of hormones in regulating the brain as well as the learning process. I think it will take some time (and ethical questions such as whether to give this brain pain when it theoretically could live a somewhat painless existence) to get this balance correct and again more understanding of how the brain actually operates beyond mere "porting."

Finally, this doesn't at all address the fact that this neural emulation has already begun (the Blue Brain Project) and that it is a piecemeal endeavor. One of the early uses of these limited neural networks is to discover how the various pieces work and to gain greater insight into the effects on the brain of various mind-altering drugs, diseases, etc. I think this will undoubtedly lead to much quicker progress in understanding brain and mind functioning and how the actual neural systems work.

So to summarize, I just feel it will more likely be a piece-by-piece endeavor with a lot of payoff even early on, rather than an immediate quantum leap (singularity or what-have-you) into a world with whole-brain emulation. Not to mention that the kind of scanning (at least of a living human brain) that Hanson talks about is probably further away than he thinks: some speculate that it would require a very detailed molecular scan to get the fidelity necessary to capture the neurons' potentials, their current states, and many other features.

Greg Hamer writes:

I agree with Russ and Michal Kvasnicka that wage rates would not decrease. Robin expects a Malthusian trap due to the population growth outstripping economic growth. He gives arguments why population growth will be huge and why economic growth will be huge, but I did not hear any arguments why population growth would outstrip economic growth. In the olden days a Malthusian trap existed because the labor added by population growth in good times had to be mixed with a fairly constant supply of arable land, causing decreasing returns for each person. The industrial revolution was able to take advantage of these 'excess' people by combining their labor with capital in factories - land was almost not needed. This avoided the problem of decreasing returns and with knowledge/productivity growth here we are. There is nothing in Robin's scenario that will cause decreasing returns.

Also, from a comparative advantage perspective the machines would all go for the high knowledge manipulation jobs, leaving some parts of the economy relatively untouched or at least complementary to the machines.

Kurt Hanson writes:

I found the future put forward by Robin to be incoherent and difficult to follow.

At one point he talks of software black boxes that could be replicated by dragging and dropping the software from the desktop to a folder. Next it is robots you'd rent out for $0.10 per hour to cut hair and empty dish washers. Later it is nanobots that would cut hair by "chewing" through it. All of these 'bots would have the intelligence of Russ Roberts to boot!

I have to believe that the impact of each of these scenarios would be different. A software-emulated Russ Roberts has different costs and capabilities than a nanobot or a human-replicant Russ Roberts.

I accept that the technology of the future can't be accurately forecast and that any one (or all) of the technology scenarios is possible. I think, though, the social, economic and moral implications could have been better explored had Robin emulated a science fiction writer and postulated "a future" with a coherent outcome.

For most of us, it isn't the technology that matters but what it does to or for us.

Gerry Conedy writes:

I really enjoyed Dr. Hanson's discussion with Dr. Roberts this podcast. I found it particularly stimulating and highly insightful. I wanted to challenge one of Dr. Hanson's underlying assumptions that I feel was not adequately addressed in the conversation. Concisely: the brain (software) and the body (hardware) cannot effectively function without one another. With the utmost recognition of my limited understanding of bio-science and physiology, I posit that even a perfectly scanned, functional replica of the human brain's "software" (chemical interactions/reactions) cannot be merely ported into a box or some symbiotic host and function like it did in the original hardware (the human body). You cannot simply "switch on" the brain, as I understood it to be suggested in the conversation. As we all know, the brain operates on a series, or plethora rather, of signals given to it by our five senses (taste, touch, hearing, smell, sight). While we've been able to duplicate some of these signals in machines--albeit some better than others--I fear the complexity of the machine would have to match that of our bodies in order to truly satisfy the A.I. reality that Dr. Hanson suggests in the podcast. If you ask me, I wouldn't bet on it. There is just too much we don't know about what causes the body and the brain to function. The two cannot be separated in conversation; without the one, there cannot be the other. If the products of our brain and body were just the sum of the chemical reactions of their parts, then maybe we would basically know all there is to know about ourselves as hardware and software pieces! Yet I hardly think that's the case...

The simple fact remains that technology continues to advance so exponentially that we have no way of knowing what will emerge... Now, clones with replicas of our brains and memories placed in them, or a way to transfer memory and data from person to person... that I actually might bet on!!

Seth writes:

I had a few more thoughts on this.

1 - A couple of episodes of the "Outer Limits" dealt with similar issues. In one, a woman is teleported across the universe using a process that destroys her here and replicates her atomic structure on the other side. Except there's a mistake: the original her is not destroyed. So there end up being two of her.

In the other, a doctor replicates his comatose wife, neural mapping and all.

2 - It seems like the biggest capacity constraint that replicating ourselves may overcome is time.

3 - I wondered about checks and balances on the behavior of these new creatures. Would a me that runs on small amounts of electricity behave the same as the organic me?

Overall, interesting and thought-provoking.

Jonathan writes:

Don't let the eco-warriors listen to this podcast... rather than breeding real humans, why not have yourself and your wife copied into one of these black boxes and then hit the 'breed' button to create the brain of what would have been your child? Now you are the last generation of physical humans, and the future human race can live on computer systems with a relatively small carbon footprint.
Maybe I should send this to David Brin or write a sci fi novel myself!

nick writes:

"Nick, I said total wealth would increase greatly."

Robin,

You seemed to imply (unless I misunderstood) that it would increase only due to the rapid population increase.

That is to say, a million people with a dollar each is a million dollars in total wealth; if two million people each had a dollar, then that is two million in total wealth, though per capita wealth hasn't changed.

The problem is you never really addressed (or not very clearly) why per capita wealth would not fall on the same distribution it currently does. Just because of the population increase? Doesn't that ignore that each copy could have unique skills (unique beyond the skills of the original) that could increase its per capita wealth greatly?

Essentially your argument amounts to saying that experience and point of view count for nothing: as if Picasso copied himself, they would all churn out Picasso paintings. I find that unconvincing and lacking empirical evidence.

John McKenna writes:

I understand Robin Hanson's analogy of porting the brain to an emulator that "runs the brain". He makes the assumption that building the emulator that runs the "legacy software" is simpler than understanding or rebuilding the "legacy software".
That's akin to assuming that Linux, Windows, Unix, etc., are far less complex than the software programs designed to run on those frameworks.
I wish him good luck on his emulator for the brain, but I don't wish to invest at this time.

Keith Wiggans writes:

Awesome Talk! I think Robin may be close in his predictions. On the other hand, I don't think any current animal's brain will be a good choice for emulation.
Animal brains, including humans', have evolved in order to help the physical body survive long enough to pass on DNA. A machine brain probably wouldn't need 99% of the structure of the animal brain.

I believe something similar may happen in the future, but it won't be anything recognizable as animal intelligence. See this Discover magazine article about a new type of neural processor: "Brain-Like Chip May Solve Computers' Big Problem: Energy."

Kendall Ponder writes:

Assuming a pattern based on three points seems problematic to me. Also, doesn't Robin change what he is measuring? When he talks about the introduction of mankind and the introduction of farming, what jumps is the rate of growth of the population, but the industrial revolution produced a jump in economic growth, not population growth. Or did I misunderstand? When I was doing my master's on AI in the early 90's, my professors talked about how the promise of AI in the 80's had proved much more difficult to realize than anticipated. The idea of porting may sound simpler at an abstract level, but I suspect it won't be any simpler when it comes to implementation. One last comment: when he states all we have found in the brain is physics and chemistry, it is worth noting we haven't looked for anything else, and I suspect we have no idea how to find a spiritual side (if it exists) if we wanted to. It does seem to me that if nothing else exists, then everything is predetermined by the initial state of the big bang and logic has no meaning.

Eric writes:

What is the citation Robin was referring to when he spoke about Ricardo's 1820 publication on "robots substituting for people"?

Mikeh writes:

Perhaps the most dismal outcome in the dismal science. But of course, I enjoyed the discussion none the less.

Great line about how unimaginative our imagination looks in hindsight. And while you discuss the replication of ourselves, will it sound as quaint as "Tomorrowland" if we replay it in 20 years?

Rob writes:

I did not have time to read all the comments, so I hope I am not repeating anyone in covering a few points the guest didn't have time to get to.

1) We are social creatures. Granting that you can create a simulation of a person and put them in a machine, you have suddenly removed all the social cues and motivations and replaced them with a completely alien set. The immediate analogy I think of is putting someone in prison. The solutions I have come up with are: pick a personality that will respond well to this new environment, modify the personality to be okay with the new environment, or create artificial cues. However, I cannot imagine these choices not limiting the effectiveness of Robin Hanson's simulacra or the tasks they can perform.

2) Scarcity of resources to build the simulation computers/robots here on Earth: we either need to create a machine which will take something plentiful and convert it to the scarce item (Large Hadron Collider, I am looking at you) or turn to deep-space exploration and mining, and as Robin Hanson said at the beginning, we don't have a good way to get to Uranus in two weeks.

Russ Roberts writes:

Eric,

I think the result that Robin was referring to on Ricardo and robots is in the third edition of Ricardo's Principles of Political Economy. It is Chapter 31 "On Machinery." You can read it here, and we will add a link to the "Reading and Links" section.

John Strong writes:

THUMBNAIL COMMENT: No mention here of Theoretical Limits to Computability or Intractability of Non-linear Problems

Limits to computability
I'm listening and listening. So far, I haven't heard Robin make a single mention of the limits of computers. Since Turing we've known that some things are simply not computable, not even theoretically. Among things that are computable in principle, there is a whole universe of problems that cannot be computed, practically speaking. I suppose the invention of a quantum computer will push the horizons out considerably, so let's assume quantum computing or something like it.

Emergent forms that derive from non-linear effects
Robin knows how much more intractable non-linear problems are than problems resolvable with linear mathematics (I'm not a PhD like he is, but I do have a Bachelor's degree in physics, as well as a Master's in computer science). A very respectable current among theorists of mind these days is that of supervenience, the notion that thanks to non-linear effects the whole is more than the sum of the parts, a point Russ instinctively alluded to.
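A tiny toy example of that non-linear intractability (my own sketch, not from the podcast): even the one-line logistic map amplifies a difference of one part in a million until prediction is hopeless.

```python
# Logistic map x -> r*x*(1-x): a minimal non-linear system.
# Two trajectories starting one part in a million apart quickly diverge.
r = 4.0
a, b = 0.300000, 0.300001
max_gap = 0.0

for step in range(50):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap > 1e-3)  # True: the initial 1e-6 gap grows by orders of magnitude
```

If a one-variable quadratic recurrence already defeats long-range prediction, the prospects for forecasting a trillion coupled, plastic neurons from component-level knowledge look dim indeed.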

No modern scientist would claim to be an atomist, but with phenomenalist metaphysics running out of steam these days, some are starting to argue that 17th century science has bequeathed a kind of crypto-atomism by persuading us that all phenomena can be explained by combining components, be they quarks or some other basic form (not necessarily a particle). There is a deep "linear" prejudice, figuratively speaking, at the bottom of all that. It was a useful prejudice that helped midwife the scientific revolution in the 17th century, but it is probably time for us to outgrow it.

I once heard Tyler Cowen tell Robin that he views the existence of angels to be at least as probable as the viability of cryogenics. I'd say brains in vats are in the same category as cryogenics: both belong to the realm of science fiction, the modern myth that has replaced angels in many people's minds. So far, though, quite a few people claim to have seen angels, but no one has ever seen a functioning brain in a vat. (See the discussion surrounding philosopher Hilary Putnam's thought experiment about brains in vats.)

Stephen Hunter writes:

Besides the pragmatic problem that computers will never emulate brains (I'll let John Searle defend that claim), moral problems abound, like...

The Josef Mengele problem:
An emulated human brain is a new sentient PERSON. So the trial-and-error and experimentation needed to perfect the technique will have put millions of malformed quasi-human consciousnesses through grueling torture. The human mind was designed to work with the human body. Even to put a perfectly functioning consciousness in a box with no sensory inputs and no means of communication would be a pretty literal Hell. Even somehow adding sensory input streams would make it no better than a horrifying unending case of anesthesia awareness. And to willingly inflict all that torturous experimentation on unwilling persons? Why don't I just call you Josef Mengele.

Stephen Hunter writes:

Hanson made some good points at the beginning, but how arrogant, dogmatic, uninformed, and 18th century does he sound in this exchange?

Roberts: But there is a reductionist element to [mind emulation] which says--and this is controversial--all there is to our brain is its physicality. Nothing else there. That's not universally accepted, correct?

Hanson: Right. Now, I have a physics background, and by the time you're done with physics, that should be well knocked into you. Certainly most top scientists--survey questions would say that's it. Your brain--just chemicals and electricity. Not much room for anything else. Not like it's an open question there. Physics has a pretty complete picture of the stuff in the world around us. We've probed every little nook and cranny and keep finding the same stuff.


(sarcasm ON) We are nothing more than our brains, our brains are nothing more than atoms bouncing around like billiard balls in deterministic fashion, right? Never mind extra-natural phenomena and never mind quantum mechanics; physicists have got it all figured out. Thank you, mister scientist, for knocking all that superstition out of me. (sarcasm OFF) Hanson is the one who sounds like he's back in the 18th century.

Ron Toms writes:

I love this question - "How many arrowheads is a web site worth?"

A couple of hundred. This depends, of course, on which gift shop you're buying arrowheads at, and on the complexity of your web site. Remember, "technology never dies." - http://www.econtalk.org/archives/2010/11/kelly_on_techno.html
So, I'm assuming that their value never dies either, and their worth is measurable.

Regarding the replicants - would a replicant of me still desire the latest gadgets, envy those who have more, strive to improve its place in the social hierarchy? Would it still appreciate the arts and a good conversation? Since the replicant (in theory) will be a faster thinker with a more perfect memory and be relatively tireless, where does that leave us originals? It can only leave us at the bottom of the economy -- relatively poor (but still possibly better off than today, just as today's lower classes are better off than the average person was 1000 years ago), or perhaps living as mere pets to the replicants, and then Bill Joy's gloomy outlook could be right.

But isn't this just the natural order of things? Who is richer, we humans or the chimpanzees? We were the same species once. Humans improved, and left the chimps far behind. In other words, the 2 million year scenario described in this podcast can be extended four billion years all the way back to the beginning of life itself (read Kurzweil - Singularity Is Near). Somehow though, I don't think the chimps mind our separate progress much - unless we encroach on their habitats.

Ultimately, I don't really agree that copying the human brain is likely to happen. I think this guy is much more on-target - Kevin Kelly on the next 5000 days.

And since technology never dies, and humanity (life) can be considered a kind of technology, it seems that there should always be a place for humans (or something human-like), just as there are still bacteria, plants and other lower life forms (dinosaurs and mammoths excluded for convenience).

But, of course, that's not what I tell my children...

Brandon writes:

I am surprised that Russ didn't bring this talk into the context of Hayek's views on complexity and scientism. While we know many of the pieces that make up a market and some of the ways they interact, we don't have the local knowledge contained in each piece... thus the best we can do is predict some general patterns about what a market will do. In the same way, Robin discusses how we know the pieces of the brain and have some idea of the connections between them. Robin assumes that if we know the pieces and how they interact, we can emulate the system. Due to randomness and the complexity of the system, however, it seems that the best we could do is predict some patterns... rather than creating Beethoven's 10th.

Russ expresses doubts, and the philosophy of mind and consciousness is nowhere near solving this problem. It would be neat to have a follow-up discussion with a singularity theorist who takes these complexities more seriously, and perhaps has a more toned-down vision of what human emulation would look like and could do.

Also, another way of looking at the new labor market dynamics... if we are able to have some control over our new helpers, then perhaps I could view my new work/leisure problem as one with having more total hours to do all of my tasks... i.e. by myself I have about 16 hours a day to do my work/chores/entertainment/eat etc... but with 4 copies, I could have a total of 80 productive hours. Splitting tasks up amongst my selves would be a problem, as each would engage in different work, gain different experiences and knowledge, and quickly split off into quite different entities with different, new talents. Again, this interaction has complexities that Robin should delve into more.

John Strong writes:

Brandon:

I am surprised that Russ didn't bring this talk into the context of Hayek's views on complexity and scientism.
Yep.

I believe that one of the early pioneers of the theory of emergence in biology, Polanyi, was even influenced by Hayek!!!!

How can you understand that self-reference and feedback loops make economies too complex to model precisely, and yet not understand that the same principle applies to human brains?

BTW, brains do not exist in vacuums, but neither do our comments. They exist in a complex latticework of incentives. Professor Roberts is probably sitting back and thinking, "Gee. We need to get Robin on the program more often. Look how much discussion he provoked!"

Ed Bosanquet writes:

Russ, long time listener and big fan. I enjoyed this podcast.

As a computer scientist and mathematician, I find some of Robin's thoughts to be quite stimulating. I can see a future where we are able to create digitized clones of ourselves, and this would be a boost to economic output. However, I see many technical issues with digital clones that limit their usefulness overall.

When you emulate software, you copy over the entire program: Bugs, limitations and all.

1. Since human cells degrade over time, limiting our life span to just over a century, and a fully functioning emulation would need to copy this behavior, the digital clone would have a digital lifespan of ~100 years. You could run the clone's time at 100 times the speed of a wall clock, but the clone would then live its entire century in one wall-clock year. It would be possible to re-run the same clone and allow it to live an entire life year after year, but it wouldn't retain direct memory from one generation to the next.
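The arithmetic behind this point can be sketched as a quick check (a hypothetical illustration only, assuming the ~100-year subjective lifespan and the 100x emulation speedup stated above):

```python
# Hypothetical illustration of the emulation-speed arithmetic above:
# a clone with a fixed subjective lifespan, run at some speedup over
# wall-clock time, exhausts that lifespan proportionally faster.

def wall_clock_years(subjective_lifespan_years: float, speedup: float) -> float:
    """Wall-clock time needed for the clone to live out its subjective lifespan."""
    return subjective_lifespan_years / speedup

# ~100 subjective years at 100x speed fit into a single wall-clock year.
print(wall_clock_years(100, 100))   # 1.0
# At ordinary (1x) speed the clone lasts the full century of wall-clock time.
print(wall_clock_years(100, 1))     # 100.0
```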

2. The clone would not be able to retain any more knowledge than the biological version because the brain isn't designed to retain and access large amounts of data. A digital clone developed through emulation would suffer from the same storage and addressing limitations.

3. Due to the imperfect ability to emulate a brain, the quality of the clone would degrade as it aged. It would grow further away from feeling and being "human" and closer to feeling like a digital copy. It would be very difficult to predict what a digital copy would feel like, but typical "human" conditions such as hunger, sexual instinct, and general aches are necessary to remain connected, and there doesn't seem to be a way to emulate these correctly.

Regardless of these detracting points (and I believe there are many more), I see this future of digital clones ahead, and a benefit to society, but Robin's image seems closer to the 1950s "flying car." Most people use airplanes, some even every day, but the impact is only an incremental change compared to other technology. The impact of digital clones will be felt, but I don't see it creating the "technological singularity."

Laura writes:

I just wanted to share the link to Ari N. Schulman's article in The New Atlantis: "Why Minds are Not Like Computers" as I believe it makes several salient points about why the whole, on a technical level, may well be more than the sum of its parts: http://www.thenewatlantis.com/publications/why-minds-are-not-like-computers

AI proponents understand that communication is possibly the most important way of demonstrating intelligence, but by denying the importance of each agent’s internal comprehension, they ironically deny that any real meaning is conveyed through communication, thus ridding it of any connection to intelligence. While AI partisans continue to argue that the existence of thinking and social interaction in programs is ­demonstrated by their mimicry of observed human input-output behavior, they have merely shifted the burden of proof from the first-person experience of the programs themselves to the first-person experiences of the people who interact with them.

darwinian roadkill writes:

Here's a link summarizing the current status of efforts to reverse engineer the brain.

http://nextbigfuture.com/2010/08/status-of-reverse-engineering-brain.html?

Another link with Henry Markram on the EPFL/IBM Blue Brain project.

http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets.html
