Richard Jones on Transhumanism
Apr 4 2016

Will our brains ever be uploaded into a computer? Will we live forever? Richard Jones, physicist at the University of Sheffield and author of Against Transhumanism, talks with EconTalk host Russ Roberts about transhumanism--the effort to radically transform human existence via technology. Jones argues that the grandest visions of the potential of technology--the uploading of brains and the ability to rearrange matter via nanotechnology--are far more limited and less likely than proponents of these technologies suggest. The conversation closes with the role of government in innovation and in developing technology.


READER COMMENTS

Nonlin_org
Apr 4 2016 at 10:23pm

AlphaGo didn’t beat the world champion, but the designers of AlphaGo did. AlphaGo is just a tool like a hammer or electric motor.

"Uploading" a human consciousness to a computer is like saying pictures or voice recordings of people are people. Not even a clone is the same as the original--just ask identical twins. Do they have the same consciousness?

"Unit of computing in biology is not a neuron. It's a molecule." - this sounds like a strong, original argument, but it can be countered by the example of microprocessors: sure, each one is slightly different at the molecular level, but for all our needs they are identical.

“…because the thing has evolved. No one has designed it…” – why would one exclude the other? AlphaGo mentioned earlier was designed to “evolve”. Are all Brits indoctrinated in this “flat earth” Darwinian nonsense?

“cryonics and radical life extensions” – good luck with that, but – aside from the technological challenges – you have to ask: “why would future generations bother with you?” You’re not that interesting.

Regarding the "virtues" of government "leadership," you have to ask: "what would have happened in an alternate history?" Most likely we would have had the same developments sooner or later, and without all the destruction caused by various wars. We can look at the USSR and see exactly the "virtues" of a strong government. And no, the Russians are not an inferior race to the Brits.

jwysocki
Apr 5 2016 at 7:10am

You mentioned healthy longevity and aging briefly. It would be interesting for you to discuss this further. Dr. T. Colin Campbell, Dr. Michael Greger, Dr. Dean Ornish, Dr. Caldwell Esselstyn, Tom Rath, or Dr. Ken Dychtwald could generate an interesting conversation. All of these experts would argue that the diseases of aging are preventable. As I've aged and started to develop some of these illnesses, I decided to try to prevent the problems I've seen my parents and friends develop (diabetes, high blood pressure, heart disease, arthritis). I've been able to get healthy through a combination of exercise and diet. It makes me wonder why others don't do this and why governments don't promote it more. Rath has been motivated to do this because of genetics, as he explains in his book "Eat Move Sleep." Campbell, who did the China Study, discussed his frustration with the lack of progress in "Whole: Rethinking the Science of Nutrition."

Hans-Christian
Apr 5 2016 at 12:31pm

Very interesting episode! A few comments on drivers of innovation and technology.
1) The episode traced major innovation back to wars and public funding, as evidence that innovation is not natural. But aren't war and similar disruptive events almost bound to happen to humanity over the course of our evolution?
2) Brian Cox (physicist), in the BBC series Wonders of Life, talks of this human trait of using and developing tools to survive as linked to our curiosity: that we are an aggregate of organisms trying to survive and prosper, and that we experience this, among other ways, as curiosity. I might be paraphrasing here.

Don't these points together suggest some sort of natural drive behind innovation and technological change?

It would be great if you could get Brian Cox on the show for a follow-up!

Don Rudolph
Apr 5 2016 at 1:07pm

If two of our amoeba ancestors could think and talk, they might ponder a trans-amoeba existence--which would be us. I think we are on a transhuman path; the only question is whether it will happen through technology, in hundreds of years, or through biology, in millions of years.

I would be curious how the atheist or agnostic has a religious component to their worldview. If religious thinking were applied to economics, it would go something like this: there is no such thing as emergent order or markets; the world has a secret central planning committee that we can't see, and it is through them that we receive abundance and variety. (No offense intended to anyone with religious views.)

Don

Todd Kreider
Apr 5 2016 at 11:33pm

1) Congratulations on the 10th anniversary. Of course I've disagreed with Russ at times, which is one reason it is still my favorite podcast--and in 2016 there are now many other excellent ones out there as well, like Freakonomics.

2) Thanks for interviewing those interested in self-driving cars, A.I., superintelligence, etc. Roberts started doing this before it was on almost any economist's radar--except for Robin Hanson's. The "Robin Hanson Effect"…

3) It was good that Russ stated Moore's Law correctly--the doubling of transistors on a chip--rather than the just-as-important corollary usually stated in terms of computer speed. Moore's Law is about to end, around 2021/2022. My problem is that the guest mentions only some vague biological computation as succeeding Moore's Law in the 2020s. While that is possible, Kurzweil is right that there are many possible replacements, including "stacked" "3D chips," "nanotubes," and quantum computing. Heck if I know.

Intel stated in 2009 that it thinks the exponential computing curve (not Moore's Law) will continue until at least 2030. Probably, but not certainly. Also, the "social construct" argument is really weak tea. I agree that there is no way Kurzweil (or I, with my "coming black hole between 2040 and 2060" argument from 1989--the same as The Singularity, minus details) can "know" that this will continue into the 2030s and 2040s, but the curve is clearly likely to continue at least beyond 2022. [more later]

Chris
Apr 6 2016 at 10:46am

My main thought after listening to this is that we seem to lack a good theory of technological development. Why do some technologies develop much faster than others? How do we even define speed in a meaningful way? Twice as many transistors on a chip every 18 months, decade after decade, is very impressive and all that, but why do we care? Because greater computing power has been useful to us--OK, but how do we define useful anyway? And why has Moore's Law happened, while airplanes have not gotten twice as fast every 18 months? What are the fundamental differences we can point to, such that we can say why this or that technology moves at a certain 'speed'?

Until we can answer this, I don't see how we can point at any particular technology and predict its future development. All we can say is 'well, it's behaved like this up until now, so maybe it will continue to?' and then shrug.

Phils
Apr 6 2016 at 11:08am

The vague definition of transhumanism offered by Jones makes it hard to understand what he is arguing “Against”. I don’t think one could have done much better than vague, since even self-identified transhumanists come in a wide variety of styles. But then what is Dr. Jones’ objection? While some people might have self-interested agendas, the very nature of transhumanism, or more generally futurology, means that its adepts are reduced to philosophizing and dreaming about the future. With few exceptions, they offer no precise predictions, make no promises or guarantees one way or the other, and promote no agenda except a perhaps exaggerated optimism about future technology. After reading this, I still don’t know what it is that transhumanism is guilty of, other than wild speculation. That is all it promised anyway. Dr. Jones himself is guilty of rather uninhibited speculation, by offering guesses about what will be possible or not 100 years into the future, a ridiculous time-span in light of the history of technology (or history in general).

As noted in other comments here, the "social construct" argument is particularly weak. First, I am skeptical that advocates for Moore's "Law" would insist that the word is used in any scientific sense if they were pressed on that point. Second, hard science is chock full of "laws" which hold only conditionally, statistically, or in some transient regimes (e.g., the ideal gas law, or even the more fundamental second law of thermodynamics). Third, almost every single "law" in physics (and certainly biology) is "emergent," in the sense that it is an approximate, though often highly accurate, description of macroscopic behavior resulting from the collective behavior of a large number of microscopic degrees of freedom whose small-scale dynamics is typically intractable.

The discussion about the “unit of computation in the brain” was especially disappointing. Overly detailed analogies between the brain and computers are detrimental to our understanding of both the brain and computers. Yes, the raw processing power of the brain is still ahead of computers, but the architectures are completely different. Our brains have negligible short-term and working memory in comparison to modern computers. Both of these are known to be associated with cognitive ability in humans. The connections in our brains are also extremely slow compared to those in a computer. Our brains and computers are such different objects that any comparison must be viewed very critically. Certainly rough order of magnitude computations comparing the number of neurons or “units of computation” are almost completely devoid of interest or meaning.

The associated idea that a “necessary condition” to simulate the operation of the brain is to have a complete mapping of all connections in the brain is predicated on an unreasonably strong definition of “simulating”. We don’t know how much of our experience of consciousness and self depends on our memories vs. the rest of our cognitive abilities. What we do know is that for many tasks that seem cognitively demanding to us, it is possible to achieve near human and sometimes superhuman ability by using completely different computing “architectures” and learning strategies than those in our brains. Ruling out the possibility that some form or remnant of a human consciousness could be “run” on a computer, especially given a 100-year horizon, can only be done based on pure guessing.

jw
Apr 6 2016 at 12:28pm

Another excellent podcast to celebrate the 10th anniversary. Congrats!

– We are producing 7nm geometries and researching 5nm geometries for silicon. At that level, you are playing with fewer than 10 atoms to make a transistor. Yes, there are ingenious ways to work around the quantum limitations for a while longer, but Moore's Law (which Moore himself never intended as a "law") IS ending. Even if not by technological limitations, then, as discussed, by economic limitations.

– The human Go player consumed about 100 watts while playing the game (with about 40 W of that used by the brain). The computer that beat him drew in excess of 500,000 watts. It was also targeted at that one application and one opponent, just as Deep Blue was targeted to beat Kasparov in chess. It is a neat (and very expensive) parlor trick. It is not scalable.
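A back-of-envelope check of the two figures above, in Python (a sketch: the 0.543 nm silicon lattice constant is a standard value, while the power numbers are simply the ones quoted, not independent measurements):

    # Rough sanity checks of the two claims above. Figures are as quoted
    # in the comments, not independently verified.
    SI_LATTICE_NM = 0.543                # silicon lattice constant, in nanometers

    feature_nm = 5.0                     # a "5nm" feature
    print(feature_nm / SI_LATTICE_NM)    # ~9 lattice cells across: order of "10 atoms"

    human_watts = 100.0                  # whole human player, as quoted
    machine_watts = 500_000.0            # AlphaGo hardware, as quoted
    print(machine_watts / human_watts)   # 5,000x the power draw for one game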

– We are more than 15 years into Bill Joy's prediction that AI will dominate in 50 years, and, as discussed, the vast majority of people use all of the technology available today for texting and watching cat videos. On the other hand, even with billions in IT spending and billions in PhDs, the Fed can't manage or even forecast its way out of the swamp it has created with its models. Economics (and the human action that drives it) is FAR more complex than Go.

– DARPA and the DOD were mainly customers of emergent technology, although they did originate some. They also funded and originated a vast number of completely wasted initiatives--the unseen costs of defense-oriented technology planning. And they completely missed major breakthroughs (see Hedy Lamarr).

Andrew McDowell
Apr 6 2016 at 12:45pm

I think this talk would have been improved if the speaker had taken some of the time he spent comparing ideas about the singularity with religion and instead described and commented on a standard description of the Technological Singularity, such as the first paragraph of https://en.wikipedia.org/wiki/Technological_singularity. I think this contains ideas about the effects of the application of AI to technological research which the speaker did not mention.

I think that DARPA has a distinctive approach to research funding which is worthy of interest in its own right, and which makes it closer to "bottom up" development than many other approaches. There is a contemporary comparison of DARPA with (EU) Esprit at the start of https://aclweb.org/anthology/H/H91/H91-1007.pdf. DARPA has typically given researchers a comparatively unusual amount of freedom in what they do and how they do it. It has also run challenges, or competitions, rather than picking winners--e.g., https://en.wikipedia.org/wiki/DARPA_Grand_Challenge. Is a DARPA challenge qualitatively different from https://en.wikipedia.org/wiki/Orteig_Prize or https://en.wikipedia.org/wiki/X_Prize_Foundation because DARPA is funded by the government?

jw
Apr 6 2016 at 3:10pm

jwysocki,

Campbell's "China Study" has been thoroughly debunked (here), and Ornish's case rests on a single study of 21 people that has never been reproduced. Nutrition is not simple.

As for longevity, it turns out that longer-lived people die of the exact same things that normal people die from, just later. Scientifically, there are some interesting things going on with mTOR, but in general: don't smoke, get married, wear seat belts, don't eat sugar, drink alcohol (but not too much), and have good parents.

Remember, after you have kids and raise them--a lifespan of somewhere between 30 and 40 years--nature really doesn't care whether you live any longer. Old age (let alone retirement) is a fairly recent human construct.

Todd Kreider
Apr 6 2016 at 8:30pm

4) I stumbled on the possibility of a Singularity around age 14, in 1984, and could easily see, along with everyone else, how computers had been getting more powerful since the late '70s. At one point, I took a pencil and started extrapolating computer speed that doubled every 18 months, and saw that the numbers were getting really big around 2030 and even bigger by 2040. That is, from my 1984 perspective.
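That pencil exercise is easy to reproduce. A minimal sketch, normalizing 1984 computing power to 1 and assuming a clean 18-month doubling (which real hardware only ever approximated):

    # Relative computing power under a strict 18-month doubling, 1984 baseline = 1.
    def relative_power(year, base_year=1984, doubling_years=1.5):
        return 2 ** ((year - base_year) / doubling_years)

    for year in (1990, 2000, 2010, 2020, 2030, 2040):
        print(year, f"{relative_power(year):,.0f}")
    # 2030 comes out ~1.7 billion times the 1984 baseline; 2040, ~170 billion.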

This wasn't transhumanism at all to me then; I had never heard the word until 2004, after stumbling across Kurzweil while looking up a prediction I had made in 1996 on mild brain repair (2016 to 2019)--this might be a few years off. Increasing computer speed would come up every so often in high school as well, but I was always told Moore's Law would stop in X years due to the "heat problem." OK, but what if we get around the heat problem? Kurzweil once mentioned that a certain personality type "gets it": it isn't related to IQ or social status, but to being willing to look at an extrapolation and consider the conclusion despite its seeming absurd. He added that such people don't think much about what others think of them--that is, they're not seeking popularity. That's a plausible theory.

It really hit me while daydreaming in a "math physics" course in 1989, when I connected a ramp-up in my head with the chapter title "Residues, Poles and Singularities" in the text in front of me: "It gets kind of scary, steepness-wise (in my mind a mountain, no longer just a hill), around the 2040s…" That is, from a 1989 perspective, life in the 2040s or 2050s would not be recognizable if Moore's Law continued. (I had relied on clock speed and didn't realize until 1998 that while clock speeds might stall, MIPS--millions of instructions per second--would continue, and that was what was important, according to a friend who clued me in.)

In my version of what I'd call "a black hole," I was agnostic with respect to A.I. and still am. The point was that incredible computer power, "thinking" or not, will radically change the world. But just maybe. So there are many Singularity types out there; mine seems to have been closer to Vernor Vinge's in 1993 ("all predictions break down between 2005 and 2030") than to Ray Kurzweil's, minus the A.I.

[more on The Rapture of The Nerds]

jw
Apr 6 2016 at 10:22pm

Pretty much everything you need to know about Moore’s Law (here).

Geek out!

Todd K,

Extrapolating is easy and usually wrong. Even Moore corrected his forecast only ten years in. Others in the articles above predict that Moore’s Law will continue, but will have to “morph” into something else. Sorry, no morphing allowed. We may find other ways to propel computing power forward, but when you only have a few atoms to work with, quantum weirdness starts to assert itself and Moore’s “Law” hits a limit.

Intel spends over $5B on a 14nm fab, and each chip design costs over $40M in masks plus hundreds of millions in engineering. Very, very few companies can continue to play this game. Looking at it another way, Intel is on an ever-increasing R&D spending treadmill. If it ever gets off, its market cap collapses.

I can't predict what will happen any better than anyone else can, but the future will be interesting.

Robert Swan
Apr 7 2016 at 12:16am

Richard Jones's view of the bounteous gifts of central government puts me in mind of one of the marketers at a software company I used to write code for. Nearly every day he'd be throwing half a dozen of his "ideas" at the developers. Some ideas were off the planet, others not ridiculous, but not spectacular either. He was very happy to claim credit for any new features that bore a resemblance to one of his many suggestions.

Likewise, if government meddles in everything, it's not hard to make a case that it's the prime mover of everything.

To cite Unix as an example was rich. Unix was developed by two very smart guys (Ritchie and Thompson) working on a computer game in their "spare time". The government never asked for it. Neither did Bell Labs. I must admit, though, that "spare time" does seem to be part of a government employment package.

Russ: "non-traditionally religious people--that is, people who consider themselves atheists"? Mealy-mouthed to say the least, and maybe wrong-headed. Take a sports analogy. One person supports the Red Sox. Another follows the Dodgers. Another, calling himself a (baseball) atheist, follows the Broncos. But, believe it or not, there really are people who follow no sport at all.

In asserting that an atheist such as me has a religious component in his worldview, if you are using "religious" in Sam Harris's sense (believing things for bad reasons), I'm quite happy to plead guilty. But if you're referring to a deity and/or miracles and such, then no; they're not there.

Chris
Apr 7 2016 at 6:37am

On the subject of government spending and innovation, Mariana Mazzucato has written a book called 'The Entrepreneurial State'. To be honest, I did not think it was the best-written or best-argued book; however, I think the case is still made.

I would like to hear that EconTalk episode, as I think I could probably find the truth somewhere between her advocacy and Russ's inevitable skepticism.

Scott Singer
Apr 8 2016 at 9:13am

[Comment removed. Please consult our comment policies and check your email for explanation.–Econlib Ed.]

Todd Kreider
Apr 8 2016 at 11:42am

jw,

You say extrapolating is easy to do, and I agree, but since the mid-1980s I have realized that almost nobody does it.

My first prediction was explaining, in 1985 in a Western Civ class, why the Soviet system would end within ten years and Russia would become a democracy. I had read a blurb in a computer-game magazine that used, cheap American computers like the Commodore 64 were becoming popular in the Eastern Bloc. So I was positive (maybe I shouldn't have been!) that they would stream right into the Soviet Union, keep improving, and let people communicate secretly much more easily, in order to somehow end the system. Almost everyone thinks you are crazy when you insist on things like that.

I agree that you can't assume an exponential will keep going. That is why, even when the "black hole" (a.k.a. Singularity) idea struck me harder in 1989 than it had around 1984--when I mostly dismissed it after reading the grains-of-wheat-on-a-chessboard fable that Kurzweil uses to try to convince people it will happen--I still didn't think it was inevitable.

As for "morphing": computers used vacuum tubes, then transistors. I of course knew that earlier mainframes were slower, larger, and much more expensive than the PC I started using at age 10 in 1980, but I didn't realize until I first heard Kurzweil in 2004 that this was a smooth precursor to Moore's Law. He and Hans Moravec even take the curve back to the tabulating machines of the 1890 census.

I think "stacking" would be a morphing of Moore's Law, just into a new dimension, but as I wrote, I don't claim to know what a computer in 2023 will look like.

Since 1989, I have thought that people would begin to have brain implants from 2020, based on what was to me a very short but convincing blurb in OMNI magazine that summer; it freaked me out for a day or two. I thought, "This is worse than nuclear war." I asked a couple of comp-sci students where I was doing a summer physics project: "Why isn't anyone talking about this?!" One guy laughed, "You are worried about life in 30 years??" and I replied, "Yes!" Well, Kurzweil was talking about it then, but this was still pre-internet.

I later shifted that to 2030--keep pushing that brain-implant date out!--and then read that Kurzweil thinks it will begin with nanobots in the brain from "the late 2020s." Another prediction from 1989 was that people would be far smarter in 2089; I wished I could be studying quantum mechanics then and might actually understand it. Kurzweil is much more extreme, arguing "people" will be trillions and trillions of times smarter. You know, because one trillion just isn't enough for The Kurz.

So my take is that we already have tens of thousands of people with primitive brain implants to control Parkinson's disease, and recently a large bionic arm for a quadriplegic woman who can now do much more for herself. I don't know about "uploading the brain into the cloud," but I will listen to the debates.

jw
Apr 10 2016 at 6:21pm

RS,

I obviously can’t speak for Russ’ views, but here is my take on it:

Stipulated: an all-powerful being creating the universe is an infinite improbability.

The only alternative is the current scientific one (taking a don't-care position is simply uninteresting), where one must believe:

– An infinite number of universes have been created infinitely often for at least the last 13.8B years (possibly infinitely long), of which ours is just one.
– These infinite universes have infinite sets of physics, with an almost infinite number of them collapsing due to instability.
– Our universe exists only because of an almost infinitely (10^200) improbable fine-tuning of physical constants.
– Each time a sentient being collapses a quantum wave function (infinitely often, times all the humans that ever existed), our universe splits into two equally probable universes.

So it comes down to: I have faith in my infinite improbabilities and you have faith in your infinite improbabilities.

jt
Apr 10 2016 at 8:33pm

One point on the government funding of research: given the immense amount of resources put toward these projects, I wonder whether there might have been similar, or even better, results had those same resources been left in the hands of the private sector--somewhat in the same vein as Russ's point on costs vs. benefits. But this gets at the fact that technology might have advanced FURTHER if resources had been spent more wisely.

Michael Vincent
Apr 12 2016 at 1:58pm

In this episode Mr. Jones states that we are 100 years away from significant advancements… Please have him check out the following video on the LCLS-II microscope - https://www.youtube.com/watch?v=t7jUZwhZdd0

An absolutely amazing view into molecular interactions that he was saying were too difficult. New scientific breakthroughs are happening every day, at a rate hard to imagine. This is the first step!

As for uploading our brains, we have to consider that we are not only our brains but the whole package of systems that makes up who we are, minute by minute. It is hard to imagine how consciousness could be achieved without the connection to the physical world. Our brain is mostly the data-storage area that the rest of our being taps into to create our perceptions, and our perspective on our perceptions.

Robert Swan
Apr 12 2016 at 7:35pm

JW:

Apologies, I didn’t mean to reignite an “is God necessary” debate.

I remarked because of Russ’s extraordinary wording — as if he believed there was no such thing as an atheist. Certainly there are some people calling themselves atheists who seem to believe they are Mr-Spock-like (or Homo-Economicus-like) rationalists, but the one does not require the other. I am just as irrational as the next man, but an atheist nonetheless.

On your specific point, while I agree that “don’t care” is unedifying, I equate an answer involving a deity with “don’t know” which is the form I prefer.

Brad Eppes
Apr 13 2016 at 3:25pm

With regard to the technologies that came from government, I think we need to remember one thing.

They were boons specifically because the money was being spent on the pursuit of a legitimate function of government (usually defense).

So I spend X billion dollars on defense; I get the defense I wanted from that expenditure, but I also get these new materials or ways of making computers or whatever. And of course with defense we were engaged in highly incentivized competition (innovate or be conquered).

This does not then justify saying that the government is your first port of call for further innovation. Spending money on government programs for the sole purpose of gaining innovation is not a good idea. Governments at a fundamental structural level are terrible places to get that.

But in pursuing legitimate functions they operate at scales and in ways that businesses do not, so this does provide the opportunity for the occasional boon that could not come from the private sector, or couldn't come as easily. (I think, for example, that it would have taken us a while longer to get the internet going as a private thing without the starting point the public sector provided.)

So it gets back to, let governments do what they do well.

Chris Hibbert
Apr 15 2016 at 1:34am

The discussion reflects a pretty fundamental misunderstanding of Moore’s Law. No one claims that it’s any kind of “law of nature”; it’s an observation about many things that are happening in the world. Kurzweil’s point in presenting many examples of processes that are improving at exponential rates isn’t that everything improves at that rate, or that it’s inevitable, but that there’s something common about these kinds of curves across the computing industry, and in many other areas of the economy as well. (Medical imaging, flight, etc.)

No one is centrally directing the processes that lead to these exponential increases, though many people have noticed the regularity and use it to guide their planning for developing or consuming the products that follow these paths. But the underlying technological forces that lead to this kind of curve pre-date any individual noticing the effect or publicizing it. The same kinds of curves can be found in agricultural productivity (at a much slower growth rate) or in computing and air travel as far back as anyone is willing to measure. The generalization is that in some fields, intense competition leads to continual development, and once you find the right metric, consistent long-term growth is fairly common across industries.

There have been claims for at least 20 years that Moore's Law as applied to integrated circuits is on its last legs, but for the last 50, whether you are a consumer or a producer in that industry, you'd have been better guided by planning on some innovation making next year's products significantly faster, cheaper, and smaller than last year's. Rather than focusing on the segment of the curve that Moore first identified, if you take a longer view, there's every reason to expect that competition will continue to pursue advances, and no reason to expect the curve to change significantly in the next few decades.
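One way to put curves of different speeds on a common footing is to convert each doubling period into the compound annual growth rate it implies. A small sketch (the two-year transistor doubling is the figure discussed above; the other doubling periods are made-up illustrations):

    # Compound annual growth rate implied by a given doubling period.
    def annual_growth_rate(doubling_years):
        return 2 ** (1.0 / doubling_years) - 1

    for label, years in [("transistor counts (~2-yr doubling)", 2),
                         ("illustrative mid-speed technology (~10-yr doubling)", 10),
                         ("illustrative slow curve, e.g. crop yields (~35-yr doubling)", 35)]:
        print(f"{label}: {annual_growth_rate(years):.1%} per year")
    # A 2-year doubling is ~41% a year; a 35-year doubling is ~2% a year.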

Comments are closed.




AUDIO TRANSCRIPT

 

0:33 Intro. [Recording date: March 17, 2016.] Russ: Yesterday was the 10th anniversary of the first EconTalk episode. We've been going for 10 years now in counter time. I want to thank all the listeners who have been with us, particularly those from the beginning, those who have gone back to the beginning, and been patient with my interviewing skills as they have grown over the years. This will be episode number 523 [Correction: This will be episode number 519--Econlib Ed.] And my guest today will be Richard Jones.... He recently released an e-book, which is our topic of discussion for today. The title is Against Transhumanism. That book is available online without charge. We will link to it. Richard, welcome to EconTalk. Guest: Thanks. A pleasure. Russ: What is transhumanism? Guest: Well, transhumanism--it's a bunch of ideas, really, that there's more than one definition of. But it's a bunch of ideas that concern the idea that technology is accelerating so fast that it's going to change not only our way of life, but what it means to be a human. So, transhumanists are associated with ideas about technology leading to advanced artificial intelligence, about it leading to great advances in medical technology that will eventually mean the end of aging and death, essentially, and technologies like nanotechnology, conceived of in a very radical form, that will essentially eliminate any kind of material scarcity. So it's a kind of belief package that comes together in the view that technology is advancing so fast that it's going to solve these problems that humanity has been worrying about for some time; it's going to create a new era in history: transhumanists talk about a singularity, which separates the knowable world that we live in now from some transcendent world in the future where everything has changed as a result of these accelerating technologies. Russ: It's an interesting week[?] to be talking about this; and these are issues that have come up on EconTalk a number of times, in a bunch of episodes. But this week, AlphaGo, which is a computer program that plays the game of Go, beat the world champion Lee Sedol four matches to one. It's also a week in which Moore's Law, some people are saying, is coming to an end. Or at least slowing down. Which would be a bleak forecast for the optimism of the transhumanist folk. Before we get into the details of whether the transhumanist vision is realistic or not, or whether it's a good thing: You tie the belief in the singularity and transhumanism to religious apocalyptic beliefs that go back to the Middle Ages. That's interesting. Why does it matter? And what's the connection? Guest: Well, it is interesting. The connection is actually pretty obvious when you read what transhumanists write; and in fact in some cases it's made pretty explicit. Certainly in Ray Kurzweil's work it is absolutely explicit. One of his famous books is called The Age of Spiritual Machines. So it's not surprising. So I think it taps into a long-held tradition in Western thought about this future that's coming along, in which we have a kind of all-wise intelligence looking after us, all our material scarcity issues are solved, and all of those kinds of bodily pains and [?] have gone away. So that's the background. People talk about the singularity as the 'rapture of the nerds,' Ken MacLeod's marvelous phrase. It's a marvelous phrase because it's very insulting but actually contains a real kernel of truth.
If you trace back the history of the thought--and I trace it back in the book in two directions--one direction runs, weirdly, through [?] British Marxist scientists in the 1920s and 1930s, notably Desmond Bernal, who came from a Catholic background and combined his Catholic upbringing with his Marxist convictions to plot out this future transformed world. That of course plays into what is, I think, widely thought: that in certain interpretations Marxism itself was a sort of secularization of religious traditions of Apocalypse. And there's also this interesting connection through the Russian Cosmists, who emerged out of some [?] Russian Orthodox thinkers in the 19th century but actually informed the kind of futuristic thinking that underpinned the early pioneers of rockets and the Soviet space program. So, culturally it's fascinating. Does it matter? I think it does matter. Understanding the history of our ideas is important. Just because these ideas have roots in a particular variety of Christian thinking, or even a particular variety of Marxist thinking, doesn't mean they are necessarily wrong, but one needs to understand that these are not new ideas; people have thought them before; and it's not obvious that they are going to be any more right this time around than they were in the past. Russ: Well, Marxists often say that Marx was such a far-sighted thinker that his predictions haven't come true yet. So, that could apply here. Maybe not. As you concede in the book, it doesn't mean that these predictions could not be true. I would just add, being a religious Jew myself, I have a lot of respect for religion; but I've found that when you tell non-traditionally religious people--that is, people who consider themselves atheists--that they have a religious component to their worldview, they don't take it very well. They get extremely insulted and don't like it. So I suspect that claim of yours, that parallel that you've pointed out--I assume people have reacted badly to it. Guest: It varies. In the past they have. I should say: I'm a vicar's son, too, so you know, it's a heritage I'm familiar with and respectful of. But I think the key point is that the things that drive some of that thinking--there is an element of wishful thinking, I think, particularly in some of the apocalyptic strains of thought that have run through not just Western religion, but Western religion in particular, for a long time. And I think if one doesn't recognize that strain of wishful thinking, one can be misled. Russ: Yep. I think it's healthy to keep that in mind all the time.
8:43 Russ: So, let's get to the more technical side. You say that the idea of transhumanism is associated with three technological advances that are perhaps accelerating. They are radical nanotechnology, radical extension of human lifetimes, and the inevitability of radically-improved artificial intelligence, or AI. How do these work together? And then we'll talk about why you are skeptical about each of them. Guest: Well, radical nanotechnology: the best way of thinking about it, I think, is as the proposition that one could digitize the material world, the same way that we've digitized music, and we've digitized vision in the sense of films and suchlike. So, it's the idea that we can reduce the material world to software. Because if you can reduce the material world to software, and you have some interface between the software and the external world--an assembler that can make things--essentially you can make anything. And you can go beyond that, because then you have kind of complete control over the material world; and in that way, incidentally, having abolished all forms of scarcity, one can go on to intervene in biology at the most fundamental level and in that way overcome all the shortcomings of biology. Like dying, for example. And you can also then create computers of the most immense power. Of course, it works the other way around, too. You need the power of advanced computing, if you like, to do all the stuff that you need to do to control the material world in such a way. So, these things are all mutually reinforcing. And in the vision of transhumanists, it's that mutual reinforcement of control of the material world, control of the biological world, and control of the digital world that comes together to create this kind of transcendent event that people call the Singularity. Russ: And, to be--we're going to get into why you think the more radical visions that people are having about the potential of these technologies, why you are skeptical of them. But certainly we see--today, 2016--steps toward that. We see 3D-printed stuff that is very quickly becoming quite complicated and interesting. We see, as I mentioned earlier, a computer beating a human being in a game of Go, which people thought might not be amenable to artificial intelligence. So many things that people said weren't going to happen. Things like various forms of facial recognition. They are making huge strides. So, the trends all look promising, don't they? Guest: Well, they do look promising. And in a sense what's annoying about transhumanism, to me, is that it co-opts the actual achievements of technology, but it uses them as evidence for what I think are actually fundamentally wish-fulfillment fantasies. The fact that medicine advances is fantastic and to be encouraged. One is really pleased about the progress that is being made. One is also daunted to some extent by the scale of the challenge. So, in medicine, there are many big challenges that remain unfulfilled as yet. In information technology, we've just lived through an extraordinary period in which Moore's Law held. That has been astonishing. Not totally unprecedented, but a vaster[?] piece of technological development than we've seen before; and that's been transformative. But all of these things have happened because, you know, the circumstances behind a particular technology came together. Much effort was put in to make them happen.
I guess what I worry about with transhumanism is that it looks at the existing technological breakthroughs we are seeing without really appreciating what it's taken to make them happen, and then assumes that technology is an autonomous force that will just continue and deliver this marvelous future--that, I think, is the pernicious thing about it. So it is a funny thing, transhumanism, because it rests on correct observations about the power of technology to transform, which is how we've got to where we are now. But the conclusions it draws from that, in terms of the direction and inevitability of future technology--I think they are pernicious. Russ: Well, let's start with--and I should mention, by the way, that Moore's Law is--I'm reading now: "the number of transistors in a dense integrated circuit has doubled approximately every two years." That is, computing power per square centimeter seems to somehow--seemingly like a law, seemingly like a natural process--just improve continuously. And, as you point out, that may not continue. There's certainly nothing inevitable about it akin to, say, gravity. But let's move to nanotechnology, per se. Guest: Can I just stay with Moore's Law a moment, actually? Because I think it's really interesting, and really telling. Because it's talked about as a 'law.' Kurzweil generalizes it to say there's a general exponential law of accelerating everything. But Moore's Law--it's a very interesting thing. Because it isn't a law. It's actually a social construct. It actually was quite an interesting social innovation that made Moore's Law happen. Moore's Law is a self-fulfilling prophecy in the sense that it's a way of organizing the actions of many innovation actors, you know, through software and hardware. It's a way of getting lots of people to work together to a kind of common external timetable, to fulfill the prophecy. So, in order to get those gains in computer power, those reductions in the size of transistors, many different companies--specialty chemicals companies, equipment manufacturers, and the people who bring it all together, semiconductor companies like Intel--all have to work in a coordinated way to this roadmap that actually underlies Moore's Law. So, Moore's Law is not a law at all. It's a social construction. Actually a very interesting and powerful social construction. And it is coming to an end. It's coming to an end partly because the physics is getting much more difficult. But as much as anything, it's because the economics is getting much more difficult. Russ: Well, that's what I was going to ask you about. It's an emergent phenomenon that we've given a name to. Which may give it some impetus of its own. But no one is trying to fulfill Moore's Law. People are trying in general to "do better, make more money, express themselves"--all kinds of complicated things. And the result has been something we've given a name to, this social construct called Moore's Law. There's no reason to think it won't continue. But there is reason to believe that it could get better--that computing could get better--as long as the incentives are there for that to happen. We could stop those incentives. They could stop on their own, through reasons like physics and other things we don't control. So, I think it's important to think about technology as an emergent process rather than a directed process. Although there are of course parts of it that are directed.
Guest: Well, I think if you look at the international roadmap for semiconductors--actually, there was no central agency that created it, but it's a rather distinct[?] social process. It wrote down[?]: these are the new technologies that have to be developed; these are the new materials that have to be developed; these are the new pieces of equipment, the [?] equipment, that sort of thing. It was actually quite a deliberate piece of coordinated action to get everything to come together to deliver Moore's Law. So I'm not sure I completely agree with you that it was entirely emergent. And it wasn't a single--you know, there was no single corporation-- Russ: It wasn't top down. That was all I meant by it. It wasn't directed, literally. Guest: Yeah. Russ: There may have been some coordination. There are many things that happen in the market that look coordinated that aren't coordinated. They are signaled by prices and other things. This is a case that is maybe a little bit of a mix. But as you point out, and as I mentioned earlier, it seems to be coming to an end.
18:11 Russ: Let's talk about--just stick with nanotechnology for another minute or two. It's a really beautiful idea: that we could maybe reorganize molecules, or matter itself, to do whatever we wanted. It's sort of a radical reimagining of the constraints of reality. Is there any evidence that that's going to be possible? And if not, or if not right now, why do you think it won't happen in the future? Guest: Well, there's a very good piece of evidence that at some level it is possible. But I think that piece of evidence has been misinterpreted by many people who are transhumanists. The evidence that it is possible is that biology does it. So, you know, if you've got a cow in the field, a cow in the field is a machine for taking bits of grass and converting them into rump steak. And that's quite a significant transformation. It's taking the atoms and molecules of grass and rearranging them in very sophisticated ways to make some structure whose blueprint essentially is laid down in the genome of the animal. So, plants, animals, all of us--this is actually what we are doing. There is a sense in which you could say that biology does constitute software control over matter, in some restricted sense. Russ: It's undeniably true, right? Guest: That's right. Russ: A child grows up--that a child grows up, or that a calf becomes a cow--forget the complicated part about the rump steak. Just growth in life--a tree coming from a seed--is an absurd bit of magic. It's clearly a remarkable set of processes that leads that to happen. Guest: That's right. And so, going down to the cell biology, you look at the ribosome--the ribosome is the molecular machine that reads the code of DNA (deoxyribonucleic acid)--reads it from RNA (ribonucleic acid), in fact. And from the digital code on the RNA molecule that it reads, it converts that into a particular protein molecule. So that is genuinely an example of software-based construction of an atomically-precise product, a protein. It is astonishing and amazing, and it's beautiful science that we've been able to find out how that works and what's going on in those processes. So, yes: cell biology is an existence proof, at some level, that some really rather special kind of radical nanotechnology is indeed possible. Russ: But you don't think it's going to happen. Beyond the biological. Or at least there's some limit to how we are able to mimic those biological processes. Guest: [?] My argument is this. So, in the view of radical nanotechnology most associated with [?], for example, the argument goes something like this: biology shows that you can do it. But biology deals in a kind of haphazard and random way. The kind of materials it uses are pretty shoddy. I mean, proteins are not things that you'd want to build anything big or strong out of. So, the argument is that biology shows that it's possible, but biology does it with poor materials; it's just constrained by, you know, the random way that evolution has taken place. As soon as we get some new Ph.D. from MIT on the job, they'll do much better: they'll use proper design principles, they'll use proper materials, and they'll get something that's much more powerful. And the things that are much more powerful, in the visions associated with that tradition[?], are basically things that look like mechanical engineering but shrunk down to the nanoscale. And so my argument is this--and I think it's an important one.
And it's an argument that it's only been possible to make in the last 20 years, now that we understand how biology works. Cell biology actually works the way it does because that's the right kind of technology for the nanoscale. Because the physics that takes place at the nanoscale is different. It feels, it looks, different from the physics that we are used to intuitively at the macro scale. Things that to us look slightly strange--the dependence on random motion, the dependence on things sticking together and unsticking, the dependence on molecules flexing, opening up, shutting down--biology does things that way because it's a very effective way to do them at that scale. So, the great classic picture of radical nanotechnology is this idea of grey goo--the idea that if we could make a replicator that would go around and replicate itself by munching, you know, the food from the environment and converting it into more copies of itself--what we're describing there is essentially a bacterium. That's what a bacterium does. And I suppose the argument of the radical nanotechnologist is that we'd very rapidly be able to make a better bacterium than a bacterium is, because, you know, we're clever and understand things. But I think that fundamentally misunderstands how optimized bacteria are for that nanoscale world, and how much more difficult the job can be made by using inappropriate concepts that we've learned in macro-scale engineering. Russ: I can't help but be reminded of what's called in economics the socialist calculation debate--the claim that a central planner could outperform markets because markets are just haphazard: they come together through prices, but they are not designed to achieve anything. So, if we were in charge, and if the only challenge here were a big enough computer--which of course in the 1930s was a pipedream. Now we have a big enough computer, in some sense. We have much bigger computing power than we had then, but we are still no closer than we were then to being able to plan a 330-million- or a 7-billion-person economy and achieve what is achieved through market processes. So there's a certain messianic romanticism there, again, that is drawing on other traditions, it seems to me, in its appeal. Guest: I think you are exactly right. Your parallel there is exact. There is a very close parallel with the kinds of emergent processes that happen in a cell, where many things happen determined really by local interactions--it's the emergent combination of all those local interactions that produces the magic that is a metabolism. As people talk about in systems biology: there isn't a central controller making these things work. It is an emergent process. And I think the analogy to a planned and an unplanned economy is very apt.
26:03 Russ: But the part I found most interesting about the book--maybe not the most, but one of the most--was the discussion of whether we'll ever be able to upload a brain into a computer. And the argument there is--I'm going to read the way you write about it in the book. You say,
... "uploading" a human consciousness to a computer--remains both a central aspiration of transhumanists, and a source of queasy fascination to the rest of us. The idea is that someone's mind is simply a computer programme, that in the future could be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. "Mind uploading" has a clear appeal for people who wish to escape the constraints of our flesh and blood existence, notably the constraint of our inevitable mortality.
So, I've thought a lot about the fact that, just in the last 5 years, the ability we have to keep photographs and mindless musings through blogs has become quite extraordinary. A record of our lives is already being accumulated into the digital cloud. But that is nothing like what this vision is. This vision is really that my conscious mind would simply be acting like it does now, but instead of in a wet environment, as you phrase it, it would just be in the dry environment of a computer. You suggest that is not going to happen. What are the challenges? Guest: Well, I think--yeah. There are two questions. One is: do I think it's possible in principle? Do I think it's going to happen any time soon--soon being the next hundred years? I'm pretty confident that it won't happen in the next hundred years. I'm not--I don't think it's a very interesting-- Russ: Darn-- Guest: Yeah. Sure. There's an interesting point of principle. I think it rests--the idea that it might happen rests on some misunderstandings of how the brain works. And in particular how complicated the brain is. The big point I'd make is: one can look at extrapolations from Moore's Law, indeed, about how many transistors you could get in a computer; and it's very tempting to say, 'Well, okay, what's the unit of computation in a brain?' The usual suspect would be to look at how many neurons you've got, because we know that neurons are important in computation in the brain. But I think that's a kind of mistake. It assumes that the neuron is the basic unit of computing. And it's not. The most important point, I think, in that chapter is that the unit of computing in biology is not a neuron. It's a molecule. Even the simplest life forms are doing a great deal of computing all the time. A bacterium--you think of a bacterium as simple and crude. But it is sensing its environment; it's doing calculations to incorporate the information it gets about its environment. And then it's responding to those calculations by changing its behavior, or indeed changing its essence--maybe not its essence, but its external form. So, when you understand that, then you realize that the scale of computation that is happening in your brain is just many, many, many orders of magnitude greater than the scale that we can conceive of in a synthetic system. That's my important point. And then there's a secondary point, about how one can simulate brains and the nature of simulation in complicated, multilevel systems. In a sense, a counterargument would be: I know that what's going on in a computer is more complicated than just transistors, because the transistors themselves integrate the behavior of lots of electrons, so I could make an argument that a true simulation of what a computer is would actually involve looking at what the electrons are doing, not just what the transistors are doing--which again gives you many, many orders of magnitude of complexity. But the key difference is that there is a difference between a designed system and an evolved system. And that difference is this: a designed system, like a computer, has a kind of separation of levels. You can talk about a transistor as being an independent unit: you understand how it behaves without understanding what the electrons are doing, because we designed it that way. In an evolved system, there's no kind of separation of levels of complexity that you can rely on, because the thing has evolved.
No one has designed it to keep the levels separate, the way we design circuits to make them easy to understand. It's just evolved, from the simplest organisms, which are still doing all this information processing, up to complicated higher animals doing much more complicated kinds of information processing. So I think it's that misunderstanding that has given people false hope that we'd be able to reproduce a consciousness on the kind of time scales that are foreseeable given what we know now.
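To make "many, many orders of magnitude" concrete, here is a rough sketch; the neuron and synapse counts are the commonly cited round numbers (they also appear in the book passage quoted below), while the molecules-per-synapse figure is purely an illustrative guess:

    # Order-of-magnitude sketch: the apparent scale of the brain depends
    # entirely on what you take as the "unit of computation".
    NEURONS = 1e11                 # ~100 billion neurons (commonly cited)
    SYNAPSES_PER_NEURON = 1e3      # gives ~10^14 synapses, low end of 10^14-10^15
    MOLECULES_PER_SYNAPSE = 1e6    # illustrative guess: receptors, kinases, scaffolds

    synapses = NEURONS * SYNAPSES_PER_NEURON
    molecules = synapses * MOLECULES_PER_SYNAPSE
    print(f"neuron-level units:   {NEURONS:.0e}")     # 1e+11
    print(f"synapse-level units:  {synapses:.0e}")    # 1e+14
    print(f"molecule-level units: {molecules:.0e}")   # 1e+20, nine orders past neurons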
31:35 Russ: So, there are two parts to that. One, it seems to me, is--you say it's many, many orders of magnitude. And of course the answer to that is, 'Okay, so it will take longer.' So, really, the question is whether there is some fundamental barrier. And it seems to me that you are closer to that issue when you talk about the evolved versus the designed. So, I can reverse--well, I can't, but someone can--reverse-engineer a device, a gadget, a designed product. You can look at it, take it apart; you see things you recognize. And you try to reproduce those. You may struggle. You may be missing some pieces of the technology that would allow you to create those pieces. But you can see them and recognize them. What's going on in the brain that makes that more of a challenge? Why can't I just take the brain of a person who has passed away, or an MRI (Magnetic Resonance Imaging), see what's going on, and say, 'Okay, we'll just get the computer to do that'? Guest: Well, that's a scale issue. Now we are talking about practicalities. MRI is a marvelous technique, but its resolution is millimeters, usually--in [?] circumstances maybe tens of microns, in the most extreme research environments. It's still orders of magnitude bigger than the scale of molecules. Russ: But if we had a better MRI, whatever that means, would understanding the molecular level of activity in the brain, in theory or in practice, give us a brain? Guest: Well, now--yes. Now we are going on from the practicalities: 'if we had a better MRI' is in the category of-- Russ: If my grandmother could fly, she'd be an airplane. Guest: Exactly. So, you know, there are interesting technical reasons why it's difficult to make the resolution of an MRI a great deal smaller than it currently is. These are all very technical issues. I think it is possible to get very high-resolution readouts of brains--fascinating work; this is the idea of looking for the connectome, people trying to work out the connections of all the neurons in animal brains--fascinating, fascinating science. The downside is that it usually requires the creature to be dead in the first place, because the techniques are necessarily destructive. And then, as I say, there's still the question: the connectivity probably still isn't enough. It's the state of the molecules sitting in the synapses that controls the strength of interaction across synapses. So, I'm really just emphasizing--the thought experiment is an interesting one, but that's not the relevant question for what's going to happen in the next hundred years. Russ: I'm just going to read a quote here that relates to this, from the book. You say the following:
One metaphor that is important is the idea that the brain has a "wiring diagram". The human brain has about 100 billion neurons, each of which is connected to many others by thin fibres--the axons and dendrites--along which electrical signals pass. There's about 100,000 miles of axon in a brain, connecting at between a hundred to a thousand trillion synaptic connections. It's this pattern of connectivity between the neurons through the axons and dendrites that constitutes the "wiring diagram" of the brain. I'll argue below that knowing this "wiring diagram" is not yet a sufficient condition for simulating the operation of a brain--it must surely, however, be a necessary one.
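[Editor's note: rough arithmetic from the quote's own figures, nothing more:

$$ \frac{10^{14}\text{--}10^{15}\ \text{synaptic connections}}{10^{11}\ \text{neurons}} \;\approx\; 10^{3}\text{--}10^{4}\ \text{synapses per neuron} $$

so each neuron carries thousands of connections, each of which, on Jones's argument, is a site of molecular state rather than a single switch.]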
So, that conveys some of the magnitude of the physical challenge. But again, given enough time, perhaps we could get at that. I guess--there is something really fascinating about the idea that if I could observe your brain in real time--which is not remotely possible now--and I knew the initial conditions, I could predict your actions and thoughts for the rest of your life. And that raises this classic question--essentially I'd be God; I'd be raising the question of free will: if I know what you are going to do, how much free will could you possibly have? You sidestep that, correctly I think, in a book of this length--it's only about 45 or 46 pages, for those listening at home; it's very nice. But you are suggesting there is something more than just a physical challenge here. Is that correct, or not? Guest: Well, I think--yes. The question of free will, again, is fascinating; that could take a long time. Many people who are much cleverer than me have thought about that in great detail. The only point I would make is just a physical one: that actually, in principle, even if we thought everything about the brain were reducible to where the molecules were, we would not be able to propagate that into the future from knowing the initial conditions, because there's a fundamental randomness about the way that biological macromolecules work. Russ: Yeah; that was my segue. Carry on. Guest: Yeah. So, it's fascinating to ask where that randomness comes from. I think it's actually pretty fundamental. But, you know, there is no doubt: if I am setting up a computer simulation to simulate, at the molecular level, what's happening when a messenger molecule hits a receptor molecule, the way I'd do that simulation would have randomness built in. Because that's the nature of the physics. I mean, to be technical, I'd be solving Langevin dynamics equations, which have a random term in them--a noise term--which really arises from the Brownian motion, from the bombardment of the molecule by the surrounding water molecules. So, that kind of randomness is a fundamental feature of the warm, wet nanoscale world, which is the world our brains work in. Russ: But you do make the point at one place in the book--and this is a philosophical question as much as a scientific one--is the randomness that we observe in that wet world, and in the world of physics generally, something fundamental, or is it just a statement that we don't yet fully understand the physics? What are your thoughts on that? Guest: Well, I think it's fundamental at the level that it comes from quantum mechanics. That much I am sure of. Of course, where the randomness in quantum mechanics comes from is something that I'm not sure of, because that's hotly debated. But, you know, to the extent that we can tie it down to a particular bit of physics that produces randomness, it's the quantum mechanics that does it.
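[Editor's note: the "equations with a noise term" Jones describes are, in one standard form, the overdamped Langevin equation used in molecular simulation; that he has exactly this form in mind is our assumption:

$$ \gamma\,\frac{d\mathbf{x}}{dt} \;=\; -\nabla U(\mathbf{x}) \;+\; \sqrt{2\gamma k_B T}\,\boldsymbol{\xi}(t), \qquad \langle \xi_i(t)\,\xi_j(t') \rangle = \delta_{ij}\,\delta(t-t') $$

Here $U$ is the interaction potential, $\gamma$ the friction from the surrounding water, and $\boldsymbol{\xi}(t)$ the random force representing Brownian bombardment--the built-in randomness he refers to.]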
38:56 Russ: Now, you don't talk about this in the book, but it's something I keep thinking about and reading about, which is consciousness. Philosophers have recently been arguing--Nagel and Chalmers, most prominently--that our current understanding of the physical world does not allow us to account for the existence of consciousness--the feeling that our life is like a movie in some sense. The feeling that certain things are exhilarating. The feeling that we have memories that bother us or that excite us, or thoughts of the future. That all of these, this complex inner world that we have, is somehow not amenable to the standard science of biology. Have you thought about that at all? Do you know anything about that literature? Does it speak to you? Guest: I've thought about it. And, you know, I want to be very tentative in my response. And I refer to my own intellectual traditions, as it were, as a physicist. In the kind of physics I do, the idea of emergent phenomena is very important, and I think very subtle and very deep. And I don't know whether it's going to give the answer. I am enormously comfortable with the idea that consciousness can be something that emerges from the microscopic physical description of what's going on, while still not in some sense being fully explained by that physical substrate. It's [?], I mean. But that's a long discussion. Russ: Yeah. You write the following. You say, "... if you are alive now,"--and by that "now" I think you mean given the vision in this book--"your mind will not be uploaded. What comforts does this leave for those fearing oblivion and the void, but reluctant to engage with the traditional consolations of religion and philosophy? Transhumanists have two cards left to play." What are those cards, and what do you think of them? Guest: I think--well, one of the cards is cryonics-- Russ: Yeah. Those cards are cryonics and radical life extensions. Since I finished your book about 30 minutes before the interview started, I'm probably more up on it than you are. I didn't want to surprise you there. Guest: Yeah. Well, cryonics is this idea that one would be able to freeze oneself, and then at some later stage one has to hope that an advanced civilization would find it worthwhile to revive you and repair any damage that's been done. You know--I don't know. It's not something that appeals to me. Everything I've said about the state of the mind--the connectome not being sufficient, if you like, to reproduce the mind; the need to understand where the molecules are and what state they are in--makes it very difficult for me to believe that the process of freezing a brain, which is a physically very intrusive process, wouldn't scramble up whatever consciousness one might have; that the randomness it would impose wouldn't do that. And I guess it also depends on two things happening. One is that in the future we will have technologies that are much more advanced, able to unscramble what the freezing scrambled. And the other is that people would want to do it. I don't know. Neither of those things seems enormously convincing to me. Russ: 'Want to do it' meaning, in the future, that they'd want to unfreeze you and bring you back to life as a kindness: 'Great-great-great-grandfather Richard always wanted to see 2200. So, we'll just put him in the microwave.' Yeah. Might be cheap.
Guest: Maybe. Maybe. Yeah. So, well, we'll see. Russ: It's the best way to do history: we find out what 2016 was really like. I'm kidding.
43:25 Russ: Let's turn to technology. Guest: Medical life extension. Russ: What? Yeah. Let's talk about that. Guest: Yeah. This is actually a point where--I mean, who can say they don't want to see all the difficult diseases of old age being cured? Of course everybody does. I certainly do. There's a huge amount of suffering in the world from people who get diseases of old age, and we very much ought to be spending a great deal of effort trying to work out how to ameliorate that suffering. But I cannot believe that the problems are actually that close to being solved. This is, I guess, a case in point where looking at Moore's Law and generalizing it to the idea that we're in a state of exponentially accelerating technology--therefore every problem that could conceivably have a technological solution will get such a technological solution--I think that's a delusion. In a sense you only have to look at medical science to realize why that's so. And there's this fantastic observation, this idea of Eroom's Law [Moore spelled backwards--Econlib Ed.]: you look at drug development, and the cost of developing new drugs is actually increasing exponentially. It's not getting exponentially easier. It's getting exponentially more difficult to produce new drugs. And if we look at diseases like, particularly, Alzheimer's--because I think of the various kinds of dementia that we all get more susceptible to as we get into old age--you know, we don't even really know what's causing most of those. We haven't really got to the point where we've identified the causal agent, let alone found out what the therapy is that's going to solve them. So, it would be fantastic if we could work out how to cure those diseases. We probably ought to be spending more effort trying to work out how to do it than we currently do. But to say that radical life extension is around the corner and current 60- and 70-year-olds are going to be able to benefit from it is, I just think, a bit of a hollow joke, a bit of a delusion. Russ: Well, you could argue we are spending too much on them. I think in America we've subsidized various kinds of gadgets and devices and treatments; maybe outside of the United States not so much--going the other direction. But it's not obvious to me that we're spending the right amount on extending life versus improving the quality of life when we're younger. Guest: Well, that's probably what I'm saying. I think, yes. I think it goes without saying that when one talks about extending life, it should be healthy life. Russ: Yeah. That's the only thing that counts. Although maybe we'll live to 250, and for the last 170 years of that we'll be playing tennis in our virtual-reality world, uploaded, lying in our hospital beds--I don't know. Who knows? It's funny; this book is a very sobering look at optimism. I am something of a techno-optimist. I do think that technology has made lots of things better, and I do think that life has improved. You do, too, on that--certainly on the last point; you talk about that in the book.
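[Editor's note: Eroom's Law, as summarized by Scannell and co-authors, who coined the term in 2012, is the observation that the number of new drugs approved per billion (inflation-adjusted) dollars of pharmaceutical R&D has halved roughly every nine years since 1950--approximately:

$$ N(t) \;\approx\; N_0 \cdot 2^{-t/9}, \qquad t \text{ in years} $$

so the cost per approved drug doubles on the same schedule: the mirror image of Moore's Law, as the backwards name suggests.]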
46:55 Russ: But, let's turn to technology generally. You say that it's dangerous to see technological progress as inevitable, and that that might be a conservative philosophy rather than something you might call a liberating or life-affirming one. Why is that? Guest: Well, it's dangerous in two senses. One is because I think it isn't inevitable, and I think if you stop trying, it will stop happening. And actually we could have a much longer conversation about the debate that's going on about innovation stagnation--the kind of Gordon argument that the golden age of American growth is over. I actually think there's something in that: you look across the developed world, and productivity growth has been steadily falling since the 1970s. So, if we connect economic growth--particularly per capita GDP (Gross Domestic Product) and labor productivity--to technological innovation in some general sense, I think the numbers tell us that technological innovation is indeed slowing down, at least insofar as, on average, it translates into those broad aggregates. So I think it's not inevitable that technological progress happens. We have to have the structures in society that will make it happen; and there's a big question, I think, about whether we have got those structures right now. So, that's part of my argument: if we think that technological progress is inevitable, we won't try hard enough to make sure it does happen, and we'll end up stagnating for that reason. And the other point is about the direction of technology. You already talked about whether you think in the United States you make the right choices about medical technology. The choice of what technology you work on and what technology you don't work on--these are choices that are made either explicitly by somebody or as an implicit consequence of the way that you've set up your economic system. If you think that technology is just this single thing that steams ahead without intervention, you will be leaving those choices for somebody else or something else to make. So I think one needs to be deliberate about the technological choices one makes; and that is something that transhumanism gets in the way of, because it makes you think that it's all inevitable--there's no point in interfering with it, if you like. And that, as others have said, is potentially a conservative position, because, broadly speaking, I think there is a political economy of innovation, and the innovations that happen will be the innovations that favor those who have incumbent power in whatever political economy you've got. Russ: Yeah; we'll come to that in a second. I just want to put a footnote on your discussion about stagnation. I think there is some suggestion that productivity is down--that measured productivity is down, certainly; and the question is whether we are measuring output correctly. Our measurement systems for things like GDP and productivity were always very imperfect. I think they've gotten more imperfect--I don't know if that's a legitimate phrase; it probably isn't. Less accurate, then, as the world has become more digital and as we inhabit the virtual world in the way that we do, more and more often. Just to take an obvious example: this conversation makes no--well, a little bit of a--contribution to GDP, because I am paid by Liberty Fund to produce it. But there is no payment by the listeners. They enjoy it at no charge.
And they seem to enjoy it. They listen, and they value it, despite the fact that they don't pay for it. So it's hard to measure productivity in this world, I think. We haven't figured that out. We also haven't figured out the institutional ways to deal with this type of innovation--driverless cars, where some very interesting things are happening. But certainly the institutional framework, the political economy for coping with driverless cars--or even driven cars, like Uber--we are using very old, ancient, 1950s-style regulation to cope with it. And we're going to have to create some new stuff to make that new stuff as productive as it could be. So, I think it will be interesting to see if that happens. But I want to come to your last point-- Guest: Can I just--I do want to rebut that-- Russ: Yeah. Go ahead. Guest: That's a really interesting argument; you've spent a lot of time talking about it. I don't fully buy this argument about the measurement of GDP. I mean, I do buy the argument that GDP is mismeasured. But I think, you know--and again, I agree with Gordon on this--it was ever thus. Go back to the early 20th century; the example that struck me--so I looked the numbers up: in the United Kingdom, right up to about 1930--then the change was very quick--the death rates from childbirth were about 5%. So, with every child you produced, you had a 5% chance of dying. Okay, that's not in the GDP, either. But the chance that you or your loved one dies every time you try to produce an infant--removing that is a huge unmeasured contribution to GDP. So, I accept that listening to an interesting podcast is a piece of value that we are not capturing in GDP. But we have to ask: what's more important here? There were some very big unmeasured gains in the past, too. Russ: Well, that's an interesting example, because, as we've lowered that mortality rate, those improvements--you can debate what the word 'in' means--but those improvements are not in GDP. And in fact in some situations they would lower GDP by creating more people who live longer and who are retired, say; they would certainly lower per capita income. Which is one of the reasons per capita income has to be used carefully. So, it's a complicated point. But I take your point: it's easy to say GDP is measured imperfectly. That's not enough, by itself, to refute, say, Gordon's or others' arguments about stagnation. What I think is important is that you and I spend an immense amount of time--I suspect even you, with all the work you do as an academic in your institution, as a researcher and a scientist--you probably spend some amount of time doing stuff that is just entertainment: inspiring, heartwarming stuff that comes through the Internet, which just didn't exist 50 years ago, 30 years ago. You could argue, well, there were other things that existed then. You could have a wonderful dinner with your family; and that wasn't correctly measured in GDP by the cost of the food or the electricity used to cook it. So I take that point. But it is interesting that the stuff that gives people lots of satisfaction in today's world is not measured. Now, you could also argue that it's a negative--people are obsessed with these things; maybe the sign's wrong. So it is a mess. But I'd say there is a change there.
You could argue--certainly GDP in the past wasn't measured accurately. But there was a baseline that you were changing from. Here it seems more of a qualitative than a mere quantitative change. Guest: Of course you are right. And, you know, I never realized that videos of cats could be so funny. Russ: Exactly. Thereby totally destroying my argument. Perhaps.
55:15 Russ: I want to move on to a very provocative thing you say toward the end of your book, which is the following:
One could argue that transhumanism's singularitarianism constitutes the state religion of Californian techno-neoliberalism, and like all state religions its purpose is to justify the power of the incumbents.
What are you talking about there? A really interesting idea. Guest: Yeah. It is particularly interesting to ask who the people interested in transhumanist ideas are. And it is conspicuous that they are the great and good of the Californian tech scene. So, you know, Ray Kurzweil is employed by Google. You've got people like Peter Thiel, who made their fortunes in the tech world and have turned into evangelists for these ideas. So, it seems to me significant that this is a set of ideas that, frankly--if I go around Sheffield and Rotherham, I don't suppose there's a transhumanist in the entire place. But-- Russ: It's a backwater. What can you do? Guest: Well, you know, people around me talk about beef cows. But yes. So, that association--where these ideas are seen to be widely held and influential--seems significant. And I think the idea--and this comes back to this idea about technology being inevitable--if you are in a position to benefit from the advance of technology in a particular direction, then it very much suits your book to say that there's no point worrying about it: this is going to happen; that's how it's going to be. So in that sense these are a set of ideas that do justify that particular set of incumbents. Russ: And to bring it back to bear on my own personal philosophy--my political and my economic philosophy: I am a classical liberal, sometimes called a neoliberal by others who are not as fond of it as I am. And basically my worldview says that the role of the state is to do a handful of things it does well: courts, property, defense, police. And not much beyond that. And I allow for emergent, bottom-up ways--voluntary cooperation among people--to solve social problems as well as to regulate industry through competition. And that's an argument for "leaving things alone." Now, one of the critiques you could make of that worldview is: well, it's easy for me to say. I have a good life. Of course I want to leave things alone. And my children are probably going to have a good life, so they are going to benefit from leaving things alone, more or less. So, this idea--I'm trying to give your argument a little more bite; it's an argument I think you adapted from another writer-- Guest: Oh, Dale Carrico. Russ: So, that criticism is: the rich and the powerful benefit from technology, not so much the poor. Now, the counterpoint to that, of course, is that poor people seem to me to have much better lives than they had a hundred years ago, and technology plays a pretty large role in that. My guess is the counterpoint to that is that they could have even better lives if technology were steered in a certain way. I don't know. What are your thoughts? Guest: Yeah. I think there are two parts to that. One is, to the classical liberal, I would assert that that worldview seriously underestimates the role of the government in shaping the technology that we've got now. So, I think it's no coincidence that the giant burst of technological progress that we are seeing now followed, you know, a World War and then a Cold War that was basically fought with technology. You know, you go back to this classic argument about the iPhone--all those technologies came ultimately from government as a commissioner--as a driver--of technology.
I'm not saying--I do need to stress--I don't mean by that that the private sector wasn't enormously important in making those technologies, in integrating them, in making them into actual products. But, you know, the fun things that your iPhone does because it's got an accelerometer in it--why does anybody make accelerometers? Because they needed to make guidance systems for ballistic missiles. There are thousands of examples like that you can think of--ways you can trace the technologies that are now in consumer products back to those government interventions. So, as I say, I agree with you that technology has benefited many people, rich and poor. I disagree with you that technology emerged best from the government not intervening. I'm not necessarily saying that the way those technologies emerged, as part of this military-industrial complex, was the ideal way of doing it. But I think it's quite difficult to be a classical liberal and to be in a world where you think that radical advances in technology are good things. Russ: Well, I think it would be a lot better--I remember when people used to point to Bell Labs, which government funded for a long time, as the source of a lot of innovation. It's not obvious to me that having a lot better telephone, say, or whatever Bell Labs was creating at the time, was worth it. It's not--the real problem with-- Guest: [?] Bell Labs invented the transistor. You wouldn't have a computer; you wouldn't have a cellphone. Russ: Correct. But the question-- Guest: Bell Labs invented UNIX. So, you know, it's interesting--it wasn't actually directly government funded; that's an interesting case, because in a sense Bell Labs was propped up on the [?]-- Russ: Yeah, it's complicated. It's a little more complicated. I shouldn't have said that. The product of a government-granted monopoly--correct.
1:02:18 Guest: But, yes. It's a long argument that we could have. But I think, looking at the history of innovation, it is interesting. The radical innovations in the second half of the 20th century didn't emerge from a bottom-up process. For better or worse, that's just the way it was. Russ: Well, I think a lot of it did. But I'm trying to make a subtler point: just because technology emerges doesn't mean it was worth the investment, right? So-called improvements are not necessarily worth it. It depends on the costs. The private sector spends a lot of time worrying about whether it's worth it. But they, of course, don't take into account all the costs; and they don't capture all the benefits. The public sector in theory takes care of both, but in practice I think it doesn't do such a good job of it, either--for different reasons. Of course, it's an unanswerable question what would happen if the government got out of the technology business. Whatever you can argue about what's happened in the past, it's certainly true in today's world that there's a lot of effort being spent on private efforts to improve technology. Now, that still has been incentivized in many cases, as I mentioned with medical devices--incentivized by public policies that make some things more profitable than others. So, it's inherently an unanswerable question. But my only point--and I'll let you disagree if you want--is that, yes, it's true that there are technologies in the iPhone and elsewhere that came from government. But basically no one is steering technology now. And I want to make sure I get your point right about taking care of elites and the powerful. I don't think it's steered very much by public policy. It certainly could be, though, to achieve certain ends--or to try to achieve certain ends. And my general feeling is that it wouldn't serve the public; it would serve those elites anyway. That's why I tend to want government to be less involved rather than more. Guest: Yeah. It's arguable. I say 'steer,' and in many cases I'm not necessarily thinking that steering occurs because there is some committee that sits around and says it's going to put £10 billion into a moonshot to make this, that, or the other. The way public policy plays out is through the incentives that are constructed, which favor one kind of innovation rather than another. Health care is a fantastic example, because you've got two contrasting systems on either side of the Atlantic. You have your heavily private-based health care system, which nonetheless produces all sorts of incentives, some positive, some not. On this side of the Atlantic, we have actually a very top-down, centralized system, which in its own way is not brilliant for innovation, either. But it steers innovation in different directions. And again, not with anybody really thinking about it--often just the unintended consequences of policy decisions made for other reasons. Russ: We don't have a very private system. It's just more private than yours. I'll concede that. But we have steadily, for the last 60 years, moved away from a private system in the most important variable, which is: who pays for it? And what's true in the American system is that there are still a lot of private providers, but they are increasingly constrained by public policy of various kinds.
Guest: And the other thing to say is, of course, this is unpredictable. The difference, I think, is this: in the private sector, particularly, there is huge market innovation that happens through the interaction of users with technologies. People use technologies in different ways, and that's noticed by, you know, actors who are then able to exploit it. That's a fantastic way of doing local optimization, if you like. But it seems to me that the major interventions that cause big saltations, if you like--big leaps in innovation--need a bigger push than that. And they often don't go in the directions that the people making the push intended. The most far-reaching example that I always talk about is the Haber-Bosch process. The Haber-Bosch process--the process for fixing nitrogen--has absolutely transformed the 20th century. It probably means the population of the world is more than twice as big as it otherwise would be: more than half the population of the world as it currently stands exists because of this piece of technology. That piece of technology was developed in the First World War (WWI) by the German government of the time [?]. And it was developed to create explosives, because Germany didn't have access to the nitrates from Chile, due to the blockades of that country during the First World War. It was a massive effort--you look at how much money the German government spent on commercializing that process under the pressure of being in this war. It was colossal. It was unthinkable that the private sector could have done that. And it had absolutely far-reaching consequences--which were not, in fact, the consequences that caused the innovation to be developed in the first place. Russ: Yeah. That's a fantastic example. The only thing I would add to that--which we haven't mentioned--is the role of the patent system, intellectual property. Which I worry is increasingly used to protect incumbents. It's a complicated issue, obviously. It's hard to know what the right answer is. But I do worry about the political influence there. So, I just want to accept your point that we shouldn't be cheering all technology no matter what, because there is a good chance that some of it is being influenced by folks who benefit from it. And I think we should always be aware of that. Guest: No, I'm with you on the patent issue. I think, you know, patents themselves are a social innovation, aren't they? And even social innovations can be used and misused.
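[Editor's note: the Haber-Bosch process fixes atmospheric nitrogen as ammonia, the feedstock for both fertilizers and explosives:

$$ \mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3} \qquad \text{(iron catalyst, high temperature and pressure)} $$

The same ammonia that feeds crops can be oxidized to nitrates for munitions, which is why one wartime program ended up underwriting the world's food supply.]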
1:09:13 Russ: Let's close with artificial intelligence more generally. So, you are a pessimist about the rapture of the nerds. But we don't have to go all the way to the rapture to get to a very radically changed world. And a lot of very smart people--in the last year, Stephen Hawking, Elon Musk, and others--have said that the rise of artificial intelligence is around the corner and could be a threat to humanity. Do you want to say anything about that? Guest: Well, yes. I mean, it clearly is a threat to humanity. It's a powerful technology. And it clearly is advancing very fast. And it's the concatenation of the availability of huge amounts of data and, you know, techniques for assimilating and generalizing from that data, if you like--machine-learning ideas. These are very powerful ideas that will have significant applications. We have autonomous systems that already are being used in ways that some people, at least, would think are a threat. I'm thinking about autonomous drones. You know, the day that a terrorist organization works out how to take a self-driving car and make a car bomb out of it is not going to be a very cheerful prospect. Maybe I'm stopping one step back from saying this is an existential threat to humanity. But these are powerful technologies, undoubtedly, that will have far-reaching influences. Not necessarily for the good.
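[Editor's note: a minimal sketch of what "generalizing from data" means in the machine-learning sense Jones gestures at--fit a model on data you've seen, then check it on data you haven't. The data and the model here are toy assumptions for illustration, not anything from the episode.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, 200)   # a noisy linear "world"

x_train, y_train = x[:150], y[:150]            # data the model sees
x_test,  y_test  = x[150:], y[150:]            # held-out data it never sees

# "Learning": fit a straight line to the training data.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# "Generalizing": how well does the fit predict the unseen data?
test_error = np.mean((slope * x_test + intercept - y_test) ** 2)
print(f"learned y ~ {slope:.2f}x + {intercept:.2f}; held-out MSE {test_error:.2f}")
```

The modern systems Jones mentions replace the straight line with far richer models and the 200 points with "huge amounts of data," but the train-then-generalize structure is the same.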