David Gelernter on Consciousness, Computers, and the Tides of Mind
Nov 7 2016

David Gelernter, professor of computer science at Yale University and author of The Tides of Mind, talks with EconTalk host Russ Roberts about consciousness and how our minds evolve through the course of the day and as we grow up. Other topics discussed include creativity, artificial intelligence, and the singularity.

Pedro Domingos on Machine Learning and the Master Algorithm
What is machine learning? How is it transforming our lives and workplaces? What might the future hold? Pedro Domingos of the University of Washington and author of The Master Algorithm talks with EconTalk host Russ Roberts about the present and...
Richard Jones on Transhumanism
Will our brains ever be uploaded into a computer? Will we live forever? Richard Jones, physicist at the University of Sheffield and author of Against Transhumanism, talks with EconTalk host Russ Roberts about transhumanism--the effort to radically transform human existence...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.


Nov 7 2016 at 12:27pm

The first part of the discussion was devoid of memorable ideas. The very last topic, on AI, can actually be settled very quickly with these simple arguments:

1. AI is a tool, like a hammer. The designer is and always will be fully responsible for his/her tool.

2. The machine will never create but will always be an extension of the brain of the designer. As such, it might look like the machine dominates other humans, but in fact the designer dominates those other humans, not the machine.

3. Carbon, Electrons, Switches, the biologic substrate, etc. are necessary (per our current knowledge), but not sufficient to make a conscious human. In other words, both a pile of minerals and a human may contain water, carbon, etc., but the pile of minerals will not become a conscious human.

Greg G
Nov 7 2016 at 1:39pm

I liked all the parts of the discussion.

And I don’t think that these kinds of questions are “settled” with analogies and predictions.

Nov 7 2016 at 1:50pm

This was a macro talk, based on metaphors from philosophy and psychology rather than on micro principles. There is a tremendous amount of clinical experience with the concepts superficially introduced in this discussion, but it was not referenced to constrain these gross metaphors against reality. Neuroscientists are still struggling to define the micro principles, let alone create a theory of consciousness. We still don’t know well what biochemical environments determine whether a synapse gets larger or smaller. We still don’t know well which environments make a nerve cell fire at high frequencies vs. low frequencies. We still don’t know well which chemical environments drive the transitions between the equilibrium states of sleep and wakefulness.

We know a little but not enough to create grand unified theories of consciousness reminiscent of the GDP factory.

Luke J
Nov 7 2016 at 4:04pm

Listening to David Gelernter on AI gives me Happiness 147.

Gene Banman
Nov 7 2016 at 4:24pm

The almost 10 minutes of stream of consciousness from David Gelernter in the middle of the interview had me tuning out. Very hard to follow. Russ said his book is “challenging”. Listening to him certainly is!


Paul McLellan
Nov 7 2016 at 4:40pm

The whole part about computers never being conscious made me feel as strongly as he did the other way around. Apart from the special case of yourself, how do you know anyone else is conscious? If AI gets good enough, emotions will be easy to mimic too.

David’s argument seems to boil down to:
1. I am conscious, that’s my definition of conscious.
2. You are made of neurons, so I assume you feel roughly the same way.
3. Computers are made of silicon, so I assume they cannot feel like that. Therefore machine consciousness is impossible.

It is a similar argument to Searle’s Chinese Room, where there is supposed to be a fundamental difference between being able to answer in Chinese any question posed in Chinese, and “understanding” Chinese. Again, my question is “How do you know that a particular Chinese person understands Chinese?”

I have been a computer scientist for 50 years, and back when I started the question was always “Will computers be able to think?”. I would always say: depending on your definition of “think”, either (a) one day computers will be able to think, or (b) you cannot prove that people think, or (c) you define thinking as something only people can do, by definition, in which case obviously computers will never be able to do it.

Nov 7 2016 at 6:08pm

David’s argument against computer consciousness was really weak; I agree with Paul McLellan. If we take the operating assumption of the natural sciences (that we’re made of stuff that follows natural laws), then I see no reason to be skeptical about the eventual possibility of computers simulating the physical/chemical processes of a human mind with full precision (a simulation that would be just as conscious as I am). It’s just a question of learning how to program an accurate simulation of physics/chemistry/biology and then programming in a configuration of matter that matches a brain (or even just a fertilized egg, and then letting it develop and giving it appropriate stimuli). Maybe we’ll never have enough computing power, or maybe we’ll never learn how the matter involved in our brains is configured in detail, or maybe we’ll never discover enough of the physical laws required to simulate the physics of a brain accurately. Personally I’m optimistic. I think not only will we be able to simulate consciousness at the level of physics, but we’ll probably succeed at creating an AI without doing a full physics simulation, using various algorithms and heuristics.

Nov 8 2016 at 4:58pm

Agree with Bob and Paul that the argument presented for “machines will never be conscious” was very weak.

One prong of David’s argument is “all conscious things that we’ve ever seen are biological, so it’s highly unlikely that a non-biological thing will ever be conscious.”

What David is ignoring is that conscious beings are all a lot more complicated and better at general information processing than any machine we’ve ever made so far. For David’s point to be compelling, we’d need to have a machine as complex and as good at information processing as a human (or an animal that we think is conscious), and somehow establish that it wasn’t conscious. It isn’t enough to point out that extremely simple machines aren’t conscious.

Another prong of David’s argument is “neurons are more complex than on/off switches, so machines can never be conscious.”

I’m surprised that a professor of computer science could make this argument. When you use a computer to simulate something complex, you don’t use one on/off switch to simulate the whole of the complex thing. Instead you use millions or billions of switches to simulate a tiny part of the complex thing. David’s argument seems as flawed as the following: “an on-off switch is a lot simpler than a bomb, so a computer will never be able to simulate a bomb explosion.” Maybe David thinks even a computer as big as the sun couldn’t simulate one neuron?
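[To make the many-switches point concrete, here is a toy sketch (an editor’s illustration, not the commenter’s, and not a claim about how real neurons work in detail): ordinary floating-point arithmetic, itself built from millions of transistor switches, can approximate a neuron’s gross spiking behavior with a simple leaky integrate-and-fire model.]

```python
# Toy leaky integrate-and-fire neuron: many simple switches (here,
# floating-point arithmetic) approximating one complex component.

def simulate_lif(inputs, threshold=1.0, leak=0.9, input_current=0.3):
    """Return the time steps at which the model neuron fires.

    inputs: list of 0/1 values, one per time step (did input arrive?).
    """
    v = 0.0          # membrane potential (arbitrary units)
    spikes = []
    for t, x in enumerate(inputs):
        v = v * leak + x * input_current  # leak, then integrate input
        if v >= threshold:                # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# Steady input drives the potential over threshold periodically.
print(simulate_lif([1, 1, 1, 1, 1, 1, 1, 1]))  # → [3, 7]
```

[The parameter values are arbitrary; the point is only that simple switching hardware, in quantity, can stand in for a component far more complex than any single switch.]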

Nov 8 2016 at 9:34pm

As with other commenters above, I found David’s argument against the possibility of machine intelligence and consciousness to be very weak, and was shocked to read that he is a professor of computer science.

The gist of David’s argument seemed to be that every conscious being he’d encountered looked like an animal, so the onus was on the other side to prove that there could be a conscious being that didn’t look like an animal. I imagine that hundreds of years ago a similar argument was made that everything that could fly was a bird (or bat), and that was the limit of the set of things that could fly…until we expanded that set through technology.

A computer is, by definition, a programmable electronic device. What is the human brain if not a programmable electronic device?

To attack the problem from the opposite direction, what prevents us from completely simulating a human brain in a conventional computer? You might argue about how large or slow the resulting computer might have to be, but ultimately we could simulate every single atom in David’s brain with a large and complex computer. Why is carbon intrinsically more able to be conscious than silicon? Alternatively, what if we make a computer that is primarily constituted of carbon rather than silicon? What if we create computers that are nearly indistinguishable from brains?

David’s argument amounted to “show me”, but the matter is decided by the very definitions of the terms. Brains are one particular form of computer. If you feel that brains can be conscious, then you feel that computers can be conscious.

[Nickname was changed from uppercase JW to JSW–Econlib Ed.]

Simon Forbes
Nov 10 2016 at 7:06am

David’s defense of his philosophical views on AI consciousness was outright bizarre. For some reason he arbitrarily selects his views as the default view and asserts the burden of proof lies on others to convince him otherwise. I could just as easily claim consciousness relies on complex patterns that form thought, claim that is consciousness, and challenge him to limit it to biological processes only. He might have reasons to disagree but he’d need to present an argument.

You wouldn’t expect a philosopher to talk on AI consciousness without relying on the views of computer scientists on the ability of computers to perform the necessary operations, even if in virtual space. A philosopher arguing about AI’s technical capability with only their own thinking, on a subject they are not expert in, would be rightfully criticised. If David wants to talk philosophy, I would like to hear him reference philosophers.

Maybe David does speak in depth in his book, but his points as I interpreted them here were not worthy of discussion in this podcast in my opinion. It’s a shame because I’d love to hear him back up his thoughts on the limits of AI in more detail. Maybe provide some more examples of how AI could fail, what technical or theoretical limits there are, etc. It’s all fascinating and something he is rightly qualified to speak on.

Nov 10 2016 at 10:14am

Very enjoyable podcast.

I think the argument presented against AI was novel, and I feel the responders are talking past it. Creating agreed-upon definitions might make the agreements and disagreements clearer.

Mark Crankshaw
Nov 10 2016 at 10:29am

David Gelernter has let his ideological slip show when he says that he knows of no “Western” philosophical position that has an “instrumentalist objectification of human life” and that “toying with human life” is “fascist” in its inclination.

I know of one: Marxism and all of the left-wing collectivist variants polluting the politics of the West at present. Marxism is Western. American social “liberalism” and European Social “democracy” are Western. All preach that “our thoughts transcend the mere individual lives of particular human beings”, and that we should “sacrifice” our individualism, our individual interests for the good of the collective (by whatever code word the collective is disguised: society, the country, the world). The “social” in socialism and social democrat is there for a reason. Further, collectivists have always been quite eagerly led by elites that claim to operate at a “higher level” than the lumpen proletariat they purportedly claim to represent.

I happen myself to see “fascism” as a creature of the “Left”, a very close cousin to the “left-wing” variants found on the traditional political spectrum. The traditional political spectrum — that is, fascist on the “right” and communism on the “left”– is, in my view, absurdly and deliberately false. A more accurate spectrum would be along collectivist and individualist lines: total collectivism at one extreme and total individualism at the other. With that more coherent and logical spectrum, fascism, socialism, communism, and American so-called “liberalism” sit side by side, polar opposites to everything I cherish and value.

Hillary Clinton stands humiliated and defeated today because she, and her leftist followers, have an “instrumentalist objectification of human life”, in the form of endless plans, programs, and regulations to run our lives, and she quite transparently and arrogantly believes that she, and liberal elitist allies, operate at a “higher level”, intellectually, morally, and philosophically. Just ask them, their smug sense of superiority permeates everything they say and do. I’m pretty sure Hillary believed that “we” needed her. Earth to Hillary: no we do not!

It’s not just a fascist inclination, Mr. Gelernter; you left a good chunk of the traditional political spectrum out of that equation. I find the Left (as traditionally defined) contemptible and odious precisely because they are equally so inclined.

Daniel Barkalow
Nov 10 2016 at 1:00pm

Still near the beginning, but I think that regarding this as a single linear spectrum is a bit too simple. There are plenty of states that have different combinations of features than his spectrum would predict, with meditation being the best known (being specifically both alert and unfiltered/undirected).

It’s interesting to note that Douglas Hofstadter’s “Copycat” AI system (intended as a cognitively plausible approach to a toy problem, rather than most AI systems’ super-human approaches to practical problems) had an explicit variable (called “temperature”) which shifted between searching for new options and nailing down details.
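[The “temperature” idea the comment describes can be sketched in a few lines (an editor’s illustration in the style of simulated annealing, not Copycat’s actual code): high temperature favors exploring new options; as the temperature falls, the search nails down the best option found so far.]

```python
# Toy temperature-controlled search: explore at high temperature,
# commit at low temperature (simulated-annealing style).
import math
import random

def anneal(candidates, score, steps=1000, t_start=5.0, t_end=0.01):
    random.seed(0)  # deterministic for the sake of the example
    current = random.choice(candidates)
    for step in range(steps):
        # Temperature decays geometrically from t_start to t_end.
        frac = step / max(steps - 1, 1)
        temp = t_start * (t_end / t_start) ** frac
        proposal = random.choice(candidates)
        delta = score(proposal) - score(current)
        # Always accept improvements; accept regressions with a
        # probability that shrinks as the temperature falls.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = proposal
    return current

# Find the candidate closest to 7 among twenty options.
best = anneal(list(range(20)), score=lambda x: -abs(x - 7))
print(best)  # with this seed, settles on 7
```

[The analogy to Copycat is loose: Copycat’s temperature is driven by how coherent its current structures are, rather than by a fixed schedule, but the explore-then-commit dynamic is the same.]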

Robert Swan
Nov 11 2016 at 4:34pm

Paul McLellan’s comment beautifully covered most of my thoughts on the machine consciousness question. Put me on his “me too” list.

I’ll add another observation.

Prof. Gelertner seems to believe that only God’s creations can have consciousness.

I have only occupied one mind, but have watched a number of others as they have grown. Not much sign of consciousness for the first year or so, and my own earliest identifiable memories are from when I was three years old, with growing awareness after that. It seems reasonable to say that consciousness emerges over a number of years.

Given most religions’ hostility towards evolution, how is it that God bestowed upon us a mind that evolves? I guess, like me, He enjoys a touch of irony.

I’m not terribly fussed about machine consciousness, but I am a fan of clear reasoning. Prof. Gelertner’s closed mind did not score well.

Michael McEvoy
Nov 12 2016 at 9:42pm

I agree with the timbre of several comments above. Dr. G’s arguments were neither convincing nor moving. Were he a guest in my home, I would look him in the eye and tell him his tone is not unlike those he seems to deplore so much. To toss out the fascist term as he did -- oh, for heaven’s sake, come on. In short, he is being a bully.

I am inspired by comments from my fellow EconTalkers. I suspect I am far to the left of most of you, so I was surprised to read your reactions to the guest. Also, relating to some of the more political comments above: there is a smugness and air of authority on the libertarian and right end of the spectrum that I do not care for, and the guest seemed to project that.

To Mark C: I do not believe that Hillary Clinton thinks that way. She is as human as you and I. She has ample ambition and wants to -- how might Russ put it? -- be loved and to be lovely. You may differ strongly with her over how to do that. Her policy ideas may be unlovely, but I think you are off base in your characterization of her. She is no angel, no doubt; most of us are not. I could accept her shortcomings more than I could Mr. Trump’s.

Mark Crankshaw
Nov 13 2016 at 9:24am

@ Michael McEvoy

You are, of course, entitled to your opinion of Clinton, and we are going to have to agree to disagree, as I find your argument completely unconvincing. Hitler, Stalin, and Mao had ‘ample ambition’ and wanted to be loved and to be lovely as well -- so I really don’t care; that exonerates nothing. All of the excesses, tyrannies, brutalities, and atrocities committed on human beings over the centuries have been committed by humans (who else is there?), so one’s ‘humanity’ isn’t at issue. All too human, as Nietzsche said.

Of course, the events of the past few days, the violent rioting and the unrelenting incivility from Clinton supporters, have only reinforced my low opinion of the extreme Left (and those on the Left who have accommodated the extremists for decades by looking the other way or excusing them).

The Left, and their willing accomplices in the mainstream media, have spent the past two years attributing the darkest political attributes and motivations to ‘Trump supporters’ and now, as it turns out, that was all merely projection. Why am I not surprised?

Rewrite your statement and see how many of these ‘protesters’, professional leftist political agitators, and angry leftist pundits in the liberal media would agree:

“I do not think that Donald Trump thinks that way. He is as human as you and I. He has ample ambition and wants to -- how might Russ put it? -- be loved and to be lovely. You may differ strongly with him over how to do that. His policy ideas may be unlovely, but I think you are off base in your characterization of him. He is no angel, no doubt. Most of us are not.”

This was not the message the liberal media, Barack Obama, and Hillary were spreading prior to the Election, and it is not what liberals are telling their agitated, hyperventilating liberal friends after the Election. If you are doing so, then fine, great, kudos, I most approve; but somehow I rather doubt you are. Again, if you are (I don’t claim to ‘know’ you), then super, well done.

You say “there is a smugness and authority on the libertarian and right end of the spectrum that I do not care for”. Fair enough; my whole point, however, is that that description neatly fits the liberal/socialist and left end of the spectrum just as well (if not better). Do you disagree?

Nov 13 2016 at 3:43pm

It was surprising to hear a discussion on the spectrum of consciousness without any reference to Ken Wilber’s work. In my opinion his approach is far more comprehensive, and more consistently integrated with related fields such as psychology, biology, spiritual traditions and others.

Robert Swan
Nov 16 2016 at 3:34pm

I listened to the podcast again and have a couple more observations.

Firstly, apologies to Prof. Gelernter for misspelling his name earlier.

Most comments have been on the machine consciousness question. The “spectrum of consciousness” also deserves comment. I’m with Daniel Barkalow; to grade state of mind as “higher” or “lower” on this spectrum is just another instance of the thing I’m always banging on about: turning a complex multidimensional thing into a single number. Something has to be lost in the process, and it may well be something important.

On machine consciousness, Gelernter’s stance didn’t come across better at second listening. His personal repugnance to the idea was clear, but that’s all he offered. I’d like to hear something more reasoned (as Roger Penrose did in The Emperor’s New Mind), you know, something at a higher level on the consciousness spectrum. For someone who has just oversimplified consciousness to criticise AI enthusiasts for oversimplifying the brain is a bit rich.

Roger Barnett
Nov 17 2016 at 10:50am

Recently, at the Burj Khalifa, I handed my camera to a stranger to take a picture of me and my wife. Right alongside us was a young couple taking a picture of themselves using a selfie stick. An “aha” moment: they were not comfortable interacting with people--asking for an assist to take a picture--but they were comfortable interacting with technology. The selfie stick has my vote for “imbecile technology” of the century.
Roger Barnett
AB (Economics) Brown U. (in the era B.C. “Before the Craziness”)
MA, PhD (International Relations) USC.
Loyal follower of Econtalk for almost a decade–I listen when I exercise in the morning.

You’re the top!

Nov 27 2016 at 9:31am

Just to be clear, jw (lower case) disagrees with JW (upper case) and agrees with Prof. Gelernter in that the Singularity will not happen for a VERY long time, if ever.

[Note: The new commenter with the uppercase nickname JW has graciously agreed to change his nickname to JSW.–Econlib Ed.]

Nov 29 2016 at 9:20am


Thank you, that was very considerate.

Tanya U
Dec 1 2016 at 2:32am

I welcome Econtalk inviting a broad range of guests with a variety of views, so thank you, Russ. David Gelernter had me yelling at my devil’s device (oh sorry, smartphone), though.

The only way this could have ended any more like a cliche would have been if he had quoted an “interesting passage of the book titled ‘get off my lawn'”.

Part of the problem is that he doesn’t understand a lot of technology. How does he know what a “sum” or its abstraction in the plus sign is? I venture to say that the mathematicians who first programmed assembler coded the abstraction of addition much better than we ever could. The rest of his understanding of how a CPU works was just rambling.

Then there’s the notion that we are going through a lull in innovation (kids these days, so uncreative). Wow. The advances in medicine garnered over the last 20 years are mind-boggling, and automation is finally on the horizon for many things we never would have imagined… but it all sounded like technology for him is more ‘planes, trains and automobiles’.

Lastly, on kids: how does he presume to be the guy with the right answers? It can be just as valid to question HIM and believe in the abstractions that machine learning promises. How does he presume to know that Facebook is “baaad” for kids, when I see my 10-year-old achieving creativity and social competence in social media that I would have dreamed of when I was a kid? Walling kids off from technology is exactly the wrong way to tackle the problem, just as it was walling them off from books or ‘dangerous ideas’ in generations past.

On the consciousness side… meh. I’ll stick with the good ol’ ‘Thinking, Fast and Slow’.

John L
Dec 2 2016 at 7:00pm

Ultimately, the Turing test is about function. Yes, it may someday be possible to create a machine that is practically indistinguishable from a human mind in terms of inputs/outputs. David Gelernter acknowledged this.

HOWEVER, the hard problem of consciousness is very real. Fundamentally, each of us is (probably) convinced that our own consciousness is real, but its very subjectivity prevents anyone else from confirming that this is true. There is probably no way to know whether a computer can ever truly be conscious, and accepting the Turing test is a cop-out, plain and simple. I think his skepticism is absolutely reasonable in this context, and he is right that the burden of proof is on the person who asserts that a computer will one day be conscious.

One of the functions of the human mind is to process information, but it is false to say that it simply ‘is’ a computer, as JSW does.

Comments are closed.





Podcast Episode Highlights

Intro. [Recording date: October 6, 2016.]

Russ Roberts: At the heart of your book is the idea of a spectrum of consciousness--an Up and Down spectrum, in particular. Explain what you mean by that.

David Gelernter: Well, it's usual for people to think of the mind as a static object. I don't really mean 'people.' I mean professionals, like collegists[?], philosophers, neurobiologists tend to think of the mind as static. It's either conscious or it's unconscious. It's awake or asleep. It's, you are thinking--that's all the descriptive information you need to know: You are thinking, you're just thinking. If you are conscious, you are just conscious. But in fact, I think almost everybody who has thought about it realizes--certainly every child realizes--that you think in different ways at different times. You are better attuned to certain mental tasks at some times than at others. We're almost all best at careful analytic, step-by-step, maybe mathematical-type thought when our energy is high, when we're fresh, when we're alert, our mental energy is pumped up. On the other hand, we have to be in a very different frame of mind if we are, if we want to go to sleep, or we find ourselves drifting off to sleep. Clearly the kind of thought we are doing while we drift off to sleep--our minds are active; it's not as if they are blank. But the kind of thought that leads us into the stages that lead and turn into sleep is very different from what you would do if you were working on a spreadsheet or solving a calculus problem or doing some sort of piece of analysis. And connecting these two endpoints, or these two alternatives--the kind of focused analytic, rational, logical, reasonable thought on the one hand and the drifting, free-associational thought that we find on our way to sleep--between focused, active thought on the one hand and thought that is largely passive, that happens to us when our minds are drifting, when we are falling asleep and when we ultimately fall asleep--we're not taking action. We can't make ourselves fall asleep. These are things that happen to us. 
So, between the active analytical side of the spectrum and the passive, memory-intensive, drifting part of the spectrum, there is an entire range of different moods, different types of consciousness, different approaches to the mental world that surrounds us, different ways in which our mind operates, different relationships between the mind and memory. An entire spectrum that changes continuously throughout the day. And into the night, too, as we sleep.

Russ Roberts: And I should add that you call the more alert, focused, analytical, problem-solving kind, the Up range. And the Down is the more drowsy, or thinking happens to us, rather than us feeling like we are in control.

David Gelernter: Right. I've spoken of the Up Spectrum as the alert, wide-awake end, and the Down Spectrum as the drowsier, sleepier, sleep-and-dreaming end.


Russ Roberts: So, you argue, in one of the more fascinating--this is really an utterly fascinating book. It's a challenging book. It's full of just very, very different perspectives on all kinds of aspects of our mental life as well as our culture. But you've talked about this cycle, certainly through the day--that you end up in a drowsier state before you go to sleep. Then you sleep; and you wake up, maybe perhaps somewhat drowsy but you wake up and you are in the more alert state. But you also make a similar cycle for our evolution as human beings from childhood to adulthood. So, explain how you see that.

David Gelernter: Well, this is an interesting consequence. I mean, I regard it as interesting. It wasn't my initial focus. And when I, for instance, noticed it, I thought it was not there. But it kept demanding attention. At first, I cared about the day--the course of the day, the way our thought changes over the course of the day. Which seemed to me very important in the way we led our lives every day, day by day, a matter of real significance. But one can't help thinking about, at times, the development of children. And the fact is, if we turn the spectrum upside down or we look at the shift from highly focused, analytical, abstract thought--which we can manage when we are wide awake--to the kind of vividly imagined, illogical, more passive but more sensual thought that leads us into sleep, we see clear relationships to the way infants' and children's thinking and thought develop. This is a complicated topic, because you can learn only so much by speaking directly with young children, and only so much by observation. But it certainly appears to be the case that children develop an ability to handle abstractions. And this goes back to the classic work of Piaget and even earlier work. Children develop the ability to handle abstractions gradually in childhood. It's not something one is born with. Just as one is not born with a capacity to do arithmetic, or even to handle language. Well, one may be born with a capacity to handle language, but not with the practical skill of using language. 
So, these--the various abilities to deal in abstractions, beginning with language, moving on to reading and writing and arithmetic and the classical thing that children need to learn; and then moving from there moving into abstract thought, logical, rational reasoning, analysis going from the pre-language phases of childhood up through the ages of 5, 6, 7, 8, 9 where we are adjusting ourselves to the cultural world in which we live--the language world, the analytical, arithmetical, mathematical world--we are developing our ability to deal in abstractions. And it just so happens that every day we retrace that pattern in reverse. This is the kind of resemblance that modern science is heavily biased against. And I was, as part of modern science, was very reluctant to see anything there. The idea that ontogeny recapitulates phylogeny--the idea that--let me say in broader terms that developed mental processes over a long period of time reflect changes over a much shorter time period, was something that science in the 19th and earlier 20th centuries was fascinated by--turned out to be gigantically oversold, and has been out of fashion, now, for generations. Really for, I don't know, half a century. For a long, long time. But, nonetheless, if--bias and fashion in science is a huge topic that people have not grasped firmly enough. We tend to be science worshipers. And we tend to be more or less blind to the fact that science has its own fads and fashions and prejudices and inabilities to see clearly. And in this case, I think the resemblance is there. And I think it tells us something about the nature of the spectrum and gives us some hints about what to look for in more detail as we study the development of children and their growing capacity to deal in abstractions. 
That is, to deal with memories not of a specific concrete object or instance or person, but to deal with memories that abstract over many people: 'Everybody I've met in the first grade is like this,' or 'All houses with decks seem to have birdfeeders at their back doors.' Something like that. The ability to deal in abstractions, which is really the ability to think--there is no difference between generalizing and thinking--is something we don't understand enough, and something[?] the spectrum can help us understand, as we watch our own ability to think in abstractions crumble every day. The fact is, you can try and make yourself prove a theorem or put together an impressive abstract, logical, rhetorically impressive argument at the point that you are falling asleep; but it just won't work. You don't have the mental capacity at that point in the cycle, down the spectrum. So we need to see these resemblances, I think; there's a lot we have to learn about both the development of children and our own daily, cognitive life. I mean, this is a subject that almost doesn't exist. If you tapped a psychologist or philosopher or a neurobiologist on the shoulder and said, 'Tell me about my daily cognitive life,' you are most likely to get a blank stare. I mean, 'You're alive. That's it. We can tell you about thinking, but there's nothing to discuss in the course of the day and what happens over a day.' But that just is obvious in the change of developing children.


Russ Roberts: So, fads and fashions in science, and particularly social sciences, are a big theme of this program. So, it's a very comfortable idea here. But I'm a little bit puzzled by this spectrum. And in particular--thinking about only myself, which of course is the thing that I'm best at thinking about in terms of how consciousness works--when I'm working on a problem, when I have a problem to solve and I'm trying to think about, say, just to pick an example--I'm going to give a talk and I can't figure out what the sequence of the arguments should be, that would be the most effective. Or, maybe I'm preparing for this interview and I want to figure out: What's the key thing I want to make sure I get across? And, what's an analogy I want to make sure I make? Or something like that. I often find myself as I fall asleep focusing on those. And of course, deliberately. They don't fall into my mind. I say to myself, 'What am I going to think about now?' And I think about those things. And sometimes I have to get up, out of the dark, and get a piece of paper, find my phone, and make a note. Because I've solved that problem. And I also like the idea that while I'm asleep, the problem is going to get worked on. And let me give one more example. When I'm doing what I would call creative work--say, writing the lyrics to a song or a poem, trying to think of a plot twist for a novel--I'm highly alert and focused. And yet you associate story-telling, emotion, feeling, creativity more with the Down spectrum, which is drowsier. Understand my confusion there? What am I missing in what you are trying to say? So, that's two questions. One is about going to sleep--it's really the same question. Sometimes when I'm drowsy, I'm very focused. How is that?

David Gelernter: Right. Two good points. One of which has to do with the nature of creative work that we do, and the other has to do with the delicacy of the mental balance that you need for creativity. In order to be creative your memory has to be swinging around with freedom; but you've got to be sufficiently alert to notice what you are doing, what you are thinking about. The first issue is important. I.e., you might be nearly asleep when you find yourself thinking about some problem or other, maybe in a new way or a useful way or an interesting way, and you have to get up and jot things down. I think--the balance of the evidence suggests that the new thinking, the useful, creative thinking we do lower down on the spectrum, closer to sleep, tends to be driven not by our ability to think up new things so much as our openness to our own minds: our willingness to accept what our minds put forward without jumping the gun, censoring our own thoughts, quickly dismissing ideas that we had some reason to dismiss in the past without looking into them. What happens is, as we approach sleep, we drop our guard. We don't pay as careful attention to what thoughts are admitted into consciousness. Usually when we are wider awake we are very alert to what we allow into consciousness. There are thoughts that are upsetting or painful. And there are also thoughts that are just a waste of time, as far as we're concerned. And we are quick to reject them. We have very well-disciplined minds, up-spectrum. On the other hand, the thoughts that we need in order to make progress--anyway, the thoughts I need in order to make progress in writing a piece or putting together a talk, as in the example you give, or doing work of that sort--is often a matter of getting over hurdles I've erected myself: thinking freely, clearing away my tendency to turn away from a promising path too quickly without exploring it. 
And the enormous value of thinking about things for me when I am tired is that I'm not in guard-dog mode, and my thoughts can flow freely. And often they wind up--not often, but occasionally they wind up flowing into fruitful directions--

Russ Roberts: Correct--

David Gelernter: where they would not have flowed if I had been editing and censoring my thoughts. And it is the unedited--I think you'll find that for every occasion on which you get up with a thought that's so exciting you don't want to lose it--you want to write it down or put it down some way--there are a hundred occasions on which you think your thoughts are nonsense, in which they are swirling around in no particular direction--

Russ Roberts: Yes, speak for yourself--

David Gelernter: they aren't going anyplace in particular. Of course, we remember the significant ones.

Russ Roberts: Yeah.

David Gelernter: We remember the ones where we've actually accomplished something.

Russ Roberts: The truth is, there's only about a handful of times that I get up like that in the middle of the night. And I'll also add that I also occasionally take a 20-25 minute nap in the afternoon. And every once in a while, as I prepare for that nap--which I find delicious; it's a couple of times a week, maybe--I will occasionally have an idea that causes me to jump up and not take a nap. But most of the time, I go to sleep. And I enjoy it.

David Gelernter: Right. Well, fair enough. And by laying off the heavy-handed editing which we all do, we make opportunities for ourselves: we've got so much more in our memories than we are generally willing to acknowledge. I mean, we take very gentle care of ourselves: we don't want to damage ourselves. There are so many thoughts that are somewhat painful, upsetting, embarrassing; and we just don't think them. We're not set up to entertain painful thoughts. We're set up rather to suppress them. And we're very good at suppressing unpleasant thoughts. And, you know, this work goes back to [Sigmund] Freud. The gross unpopularity of Freud today--that fashion has been lifting a little bit; the sort of fog over our own intellectual history that has obscured our view of some of the most important philosophical work of the past century and a half is changing a little bit. But probably not enough. And just as Freud wrote about our tendency to suppress painful ideas: Freud himself was a painful idea that our tendency has been to suppress.


David Gelernter: You also mention, which is important, the fact that you have a focused sense when you are working on lyrics or writing poetry, let's say. And I've argued, on the other hand, that you need to be well down-spectrum in order to get creativity started. That is, you can't be at your creative peak when you've just got up in the morning: your attention is focused and you are tapping your pencil; you want to get to work and start, you know, getting through the day's business at a good clip. It's not the mood in which one can make a lot of progress writing poetry. But that's exactly why--that's one of the important reasons why creativity is no picnic. It's not easily achieved. I think it's fair to say that everybody is creative in a certain way. In the sort of daily round of things we come up with new solutions to old problems routinely. But the kind of creativity that yields poetry that other people value, that yields original work in any area, is highly valued, is more highly valued than any other human project, because it's rare. And it's rare not because it requires a gigantic IQ (Intelligence Quotient), but because it requires a certain kind of balance, which is not something everybody can achieve. On the one hand--it's not my observation; it's a general observation--creativity often hinges on inventing new analogies. When I think of a new resemblance--an analogy between a tree and a tent pole, a new analogy, let's say, that nobody else has ever thought of before--I take the new analogy and can perhaps use it in a creative way. One of a million other, a billion, a trillion other possible analogies. Now, what makes me come up with a new analogy? What allows me to do that? Generally, it's a lower-spectrum kind of thinking, a down-spectrum kind of thinking, in which I'm allowing my emotions to emerge. And, I'm allowing emotional similarity between two memories that are in other respects completely different. 
I'm maybe thinking as a graduate student in computing about an abstract problem involving communication in a network like the ARPANET (Advanced Research Projects Agency Network) or the Internet, in which bits get stuck. And I may suddenly find myself thinking about traffic on a late Friday afternoon in Grand Central Station in Manhattan. And the question is--and that leads to a new approach. And I write it up; and I prove a theorem, and I publish a paper. And there's like a million other things in the sciences and in engineering technology. But the question is: Where does the analogy come from? And it turns out in many cases--not in every case--that there are emotional similarities. Emotion is a tremendously powerful summarizer, abstractor. We can look at a complex scene involving loads of people rushing back and forth because it's Grand Central Station, and noisy announcements, [?] to understand, on loudspeakers, and your being hot and tired, and lots of advertisements, and colorful clothing, and a million other things; and smells, and sounds, and--we can take all that or any kind of complex scene or situation, the scene out your window, the scene on the TV (television) when you turn on the news, or a million other things. And take all those complexities and boil them down to a single emotion: it makes me feel some way. Maybe it makes me happy. It's not very usual to have an emotion as simple as that. But it might be. I see my kids romping in the backyard, and I just feel happy. Usually the emotion to which a complex scene has boiled down is more complex than that--is more nuanced. Doesn't have a name. It's not just that I'm happy or sad or excited. It's a more nuanced, a subtler emotion, which is cooked up out of many bits and pieces of various emotions. 
But the distinctive emotion, the distinctive feeling that makes me feel a certain way, the feeling that I get when I look at some scene, can be used as a memory cue when I am in the right frame of mind. And that particular feeling--let's say, Happiness 147--a particular subtle kind of happiness which is faintly shaded by doubts about the coming week and by serious questions I have about what I'm supposed to do tomorrow morning but which is encouraged by the fact that my son is coming home tonight and I'm looking forward to seeing him--so that's Happiness 147. And it may be that when I look out at some scene and feel Happiness 147, some other radically different scene that also made me feel that way comes to mind--looking out at that complex thing, I think of some abstract problem in network communications, or I think of a mathematics problem, or I think of what color chair we should get for the living room, or one of a million other things. Any number of things can be boiled down in principle, can be reduced, can be summarized or abstracted by this same emotion. My emotions are so powerful because the phrase, 'That makes me feel like x,' can apply to so many situations. So many different things give us a particular feeling. And that feeling can drive a new analogy. And a new analogy can drive creativity. But the question is: Where does the new analogy come from? And it seems to come often from these emotional overlaps, from a special kind of remembering. And I can only do that kind of remembering when I am paying attention to my emotions. We tend to do our best to suppress emotions when we're up-spectrum. Up-spectrum, we have jobs to do, we have work to do, we have tasks to complete; our minds are moving briskly along; we're energetic. We generally don't like indulging in emotions when we are energetic and perky and happy and we want to get stuff done. Emotions tend to bring thought to a halt, or at any rate to slow us down. 
It tends to be the case that as we move lower on the spectrum, we pay more attention to emotions. Emotions get a firmer grip on us. And when we are all the way at the bottom of the spectrum--when we are asleep and dreaming--it's interesting that although we often think of dreaming as emotionally neutral, except in the rare case of a nightmare or a euphoria dream, and neither of those happens very often--we think of dreams as being sort of gray and neutral. But if you read the biological[?] literature and the sleep-lab literature, you'll find that most dreams are strongly colored emotionally. And that's what we would expect. They occur at the bottom of the spectrum. Life becomes more emotional, just as when you are tired you are more likely to lose your temper; you are more likely to lose your self-control--to be cranky, to yell at your kids, or something like that. We are less self-controlled, we are less self-disciplined; we give freer rein to our emotions as we move down spectrum. And that has a good side. It's not good to yell at your kids. But as you allow your emotions to emerge, you are more likely to remember things that yield new analogies. You are more likely to be reminded in a fresh way of things that you hadn't thought of together before.


Russ Roberts: So, I want to take this into story-telling, which is something we talk about occasionally on the program, something I'm extremely interested in and I know you're interested in as well. I want to give you a couple of proofs for what you just said, from my own experience. One is certainly writer's block--the blank piece of paper that confronts a writer on a bright Monday morning at 8:30 a.m. And that challenge is well known. It's a cliché but I think it's true. And that's why--

David Gelernter: Absolutely.

Russ Roberts: That's why Hemingway said--I think it's a genius idea--he said, 'Always stop writing when you know what's going to happen next.' And that is what allows you to get back into that story-telling mind, that drowsier, emotional, more open creative mind. And the other thing it makes me think of is the role of music. A lot of writers--and I've done this myself, often, not always but often--will use music as a way to jumpstart their creativity. And that's also consistent with your story, because it's basically saying: The music--which is full of emotional associations, full of memories--sort of frees up our mind to get into that more down-spectrum feeling, which allows us to free associate, be more creative--

David Gelernter: Absolutely. It's a fascinating example. Because the only thing music can communicate is emotion. It can't speak to us--obviously it can't lay out propositions or make assertions or tell jokes, in a certain sense. But basically what music does is suggest emotions. And it can suggest an enormous range of emotions. And exactly as you say: it can put us in a mood or make us feel in a way that we are not accustomed to feeling that's not our usual state of mind; it's not our usual mood. And when that happens, ideas, recollections can emerge that don't usually show up, and that spur us on the way to new and different thoughts and ideas.

Russ Roberts: So I want to tie this--

David Gelernter: It's interesting with Hemingway--just parenthetically. It's interesting you should mention Hemingway. I mean, one associates Hemingway--his very best work, in my view, is the short stories he wrote in the first 10 years of his career in Paris, his Paris short stories. And one associates him with a rigid agenda. Every morning--or at least that's what he says. Maybe it only happened one morning out of two. But he gives us the impression: Every morning, he gets up, he walks a couple of blocks, he trudges upstairs to his unheated garret, he makes a fire, he unloads some [?] from his pocket and he sharpens his pencil with his penknife. And he sits down and he gets to work. And he works hard like that all day. And on the one hand, you say, 'Well, a person is not in the mood to do his best creative writing early in the morning.' Your emotions are firmly disciplined, held in check. They can't get out. But it's interesting that much of Hemingway's originality in his first stylistic phase was precisely this tightly reined-in feeling that his prose gives you. Certainly there are deep emotions simmering under the surface--

Russ Roberts: Sparse--

David Gelernter: but it's exactly that they are beneath the surface, rather than hung out to dry on the surface of the prose, that makes his writing so powerful. And I think it probably has something to do with the way he manipulated his own spectrum. Not really on purpose. But he probably had some awareness. I mean, he was a sharp guy, probably generally knew what was going on.

Russ Roberts: Well, I certainly agree with you that his best work was those short stories. I read a lot of his novels when I was younger; and thinking back on them, I have no interest in reading them again. But I have some interest in the short stories.

David Gelernter: It is interesting. I have a similar feeling. Although I can extend it to his first novel, and to a lesser extent his second--The Sun Also Rises, and A Farewell to Arms--which I've found radically more readable than anything else he ever wrote in the way of novels, for the rest of his life. And it's too bad that his career should have been biased like that. And yet, one is grateful for what he did--

Russ Roberts: Interesting, yeah--

David Gelernter: which is funny.


Russ Roberts: I want to talk about your story-telling, and mine as well. Which is tied into an idea which is in the book. Which is, over time--meaning the last 300-400 years--our culture has become increasingly enamored with what you call up-spectrum thinking--the more analytical, logical kind--and less respectful of the emotional, story-telling kind. And in economics, it manifests itself in a disregard for people like Adam Smith and F. A. Hayek and an honoring of people like Paul Samuelson and others who have added mathematics to economics. Sometimes fruitfully. But often, to my mind, not much advancing our understanding. But put that to the side: It's certainly the case that story-telling is less respected than analytical thinking in our current culture. And I'm thinking about an earlier book of yours, 1939, which is a book about the New York World's Fair. And it's a wonderful book. And in that book you mix observations about the fair with a narrative story. And I've done something similar in my novels: I write novels that illuminate economics--try to illuminate economic ways of thinking and teach the reader things. And I've often wondered whether, as much fun as those were to write--and I assume 1939 was fun to write for you--I wonder if we handicap the reach of the book, of those books, by mixing in those two types of thinking. Just to put it one way, one publisher that turned it down said, 'Well, I wouldn't know where to put it. I wouldn't know what section of the bookstore to put it in.' And I thought, 'Well, that seems like a feature, not a bug, to me.'

David Gelernter: Yeah.

Russ Roberts: I understand why you like to sell books--not so much, actually, as it turns out; most of the time they don't try [?]--but I understand that you do have to put it somewhere. But you could put it in fiction and advertise it as an unusual fiction book. Or put it in economics and say, 'But it's also a story.' That doesn't appeal to many people. And many people I think find it jarring. So, talk about those two things--the evolution of our culture toward analytical, up-spectrum thinking, and the way it affects, say, story-telling and communicating in these kind of books.

David Gelernter: Well, what you say about the book store reminds me so strikingly of a message I try and get across to some of my students. But which jars radically with the message they receive from the educational establishment. Which is: it's not natural to believe that you are what you major in--the idea that a person has one area in which he really--

Russ Roberts: Yeaaahhh--

David Gelernter: and everything else is just irrelevant is absurd.

Russ Roberts: Destructive, horrible.

David Gelernter: The successful minds--you know, all sorts of people who have achieved all sorts of things--have told us of their interests, and there are very damned few of them who are interested in one thing and whose interests don't slope over into other things. I mean, Hemingway was just as interested in hunting and fishing--and for that matter, military history and strategy and camp craft and a whole bunch of other things--as he was in prose and literature; and history in Paris--all sorts of things. Anyway. But there is a layer of society, the educational bureaucrat layer, that makes up majors and sort of holds down the administrative positions at the university. And many faculty positions, too. And certain people in publishing, in the publishing industry--I mean, I've been tremendously lucky and I've had some extraordinary editors to work with, in this last book especially at Norton. But in other places, too: I've been very lucky. But I'm not so naive as not to have noticed that there are a lot of people in the publishing world who don't have the imagination to be authors--not to put too fine a point on it--who would have a more focused and narrow view of things.

Russ Roberts: But I'm suggesting they had a point. Which is, as painful as it was to hear that, and as easy as it is to dismiss as unimaginative, and as correct as you are about specializing as an undergraduate, maybe there is something to this idea that mixing up-spectrum and down-spectrum communication is challenging to people.

David Gelernter: I don't think that's true. I mean, it's an interesting hypothesis, and something that ought to be entertained, certainly, not rejected. But I tend to think that--I tend to think that we have much more flexible and varied minds than we give ourselves credit for. It's true that, while we are focused on one topic--it might be an up-spectrum, or a mid- or a down-spectrum topic--we don't bob and dance around the spectrum all that much. But we assume that over the course of the day we make quick excursions down or up, and quick excursions from one subject to another. I think a lot of education is in beating that out of people--is in urging people to restrict their view of what they are doing, not to get lost in--the culture we built is intensely hostile to the idea that an expert on one topic can also be an expert on another topic. It isn't that it rubs people the wrong way. It doesn't rub normal people the wrong way. But in the educational bureaucracy, there are people who like organizing and managing and running things, who want everybody to be parked in the right intellectual parking space; and who despise sloppy parking. I think we'd make a lot more progress, and [?], scientifically, artistically, if we took it for granted--as earlier generations did, in the 19th century--that if you were an educated person, of course you were interested in science. And you read what was happening in science. Whether you were a newspaper editor or banker or whatever you were, there was a cluster of areas that constituted culture; that was understood then, and it's still true today. You know, it was assumed that everybody would be interested in them. Not at the same professional level, but it was natural to be up to date on what was happening, intellectually, artistically, scientifically, musically. We've lost that. Or more like killed it on purpose. And I don't think that we are thereby acknowledging a psychological reality about human nature. 
I think it is a psychological reality that people have points on the spectrum that are natural for them. I mean, I know for absolute fact that there are--I have--[?]--minds that are high-spectrum. That's where their thinking runs; that's where they are comfortable. And that doesn't mean they can't think in other ways; but they are not comfortable thinking in other ways. I know people who insist that they can't think in an up-spectrum fashion--

Russ Roberts: I don't do math--

David Gelernter: they are dumb or they just don't have the brains for it, and stuff like that. Which is annoying. Because they've been talked into that by their own teachers, sometimes their own parents. It's true that our personalities run at some spectrum point, but it's also true that 90% of the value of the human mind is in its flexibility, the fact that we can do so many things, the fact that it isn't a one-trick outfit. It can do a lot of things. And we tend to suppress that; we don't like it. We're indebted to people who are willing to manage and administer; but that also is a personality that has taken its bite out of culture. I certainly think it's true that the first thing you mentioned--that culture has moved up spectrum. I see the uncritical admiration for science turning into worship of science all around me. I can see the contrast with when I was a child, and the climate today has gotten worse. Even though, when I was a child, there were plenty of people who remembered the Manhattan Project, the origin of nuclear physics, who remembered heroic achievements and the founding of computing. Today we're not quite in as imaginative and productive a time. Let me [?] oscillate, we'll be in equally imaginative times again. But it seems that our science worship, our mathematics worship, only increases. And when you talk about the tendency among some people in economics to push things in a mathematical direction, certainly what I found, when I was a student years ago, interested in economics--I took an introductory course, which was the one for science majors. And it turned out merely to be applied mathematics--

Russ Roberts: Yup. Like [?]--

David Gelernter: differential equations. And we never learned anything about how people, how an economy actually operates. Which is what I hoped we would learn. But you see the same thing in computer science also, in which people are well trained mathematically, and they take these tools for granted. But they'd much rather publish a paper with a lot of theorems and proofs than a paper that simply describes in straightforward English what is going on, or what the assertions are that they want to make. This is a powerful tendency. And I think it's done a lot of damage.


Russ Roberts: Well, I want to turn to computer science. Here's a quote from the book. You say, "Post-Turing thinkers," you are talking about Alan Turing, "decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer--the most important, most influential and (intellectually) most destructive analogy in the last hundred years (the last hundred at least)." So, you are a skeptic about the ability of artificial intelligence to eventually mimic or emulate a brain. So, talk about why. And then why you feel that that analogy is so destructive: because it is extremely popular and accepted by many, many people. Not by me, but by many people, smarter than I am, actually. So, what's wrong with that analogy, and why is it destructive?

David Gelernter: Well, I think you have to be careful in saying what exactly the analogy is. On the one hand, I think AI (Artificial Intelligence) has enormous potential in terms of imitating or faking it, when it comes to intelligence. I think we'll be able to build software that certainly gives you the impression of solving problems in a human-like or in an intelligent way. I think there's a tremendous amount to be done that we haven't done yet. On the other hand, if by emulating the mind you mean achieving consciousness--having feelings, awareness--I think as a matter of fact that computers will never achieve that. Any program, any software that you deal with, any robot that you deal with will always be a zombie--in the Hollywood and philosophers' sense of zombie; 'zombie' is a very powerful word in philosophy--in the sense that its behavior might be very impressive. I mean, you might give it a typical mathematics problem to solve or read it something from a newspaper and ask it to comment or give it all sorts of tests you can think of, and it might pass with flying colors. You might walk away saying, 'This guy is smarter than my best friend,' and, you know, 'I look forward to chatting with him again.' But when you open up the robot's head, there's nothing in there. There's nothing inside. There's no consciousness.

Russ Roberts: Well, you wouldn't see it even if it were there, because you don't see it when you open a human brain, right? So that's not a good proof--

David Gelernter: That's true. That's not a proof at all. That's just a clarification of what I'm--

Russ Roberts: I hear you.

David Gelernter: asserting. And then the question is: Why would one make that assertion? So, again, what I'm asserting is not that computers are limited in the performance they can put on. They will be able to put on very good performances and seem very human-like. However, they will never be conscious; they will never feel an emotion. They will never be aware of anything in the sense in which we are aware of things. And the reason why is all the evidence that we have--and this is not really a matter of something that is proved or that is answered definitively, answered 'no.' It's more of a scientific than a mathematical question--the kind of scientific question in which evidence accumulates and you do your best to figure out which way it's pointing and what the trajectory is. All the evidence that we have suggests that consciousness is an organic phenomenon, is a biophysical phenomenon associated with a very special type of physics and chemistry. The only instance of consciousness that we're aware of, in the cosmos--granted we've only looked around on this planet, but at any rate, we haven't heard of any other instances so far; and there are many, many kinds of life on this planet. But the only consciousness we're aware of is associated with highly sophisticated and complicated animals, is associated basically with human-like creatures, of whom there are very few compared to the generations of bacteria, which completely dominate any list of all life forms. The only place we've found consciousness--the only instances of consciousness we have or suspect, the only instances of feelings or the suspected presence of feelings--are associated, every single one, 100% of those instances, with human-like animals. That is, a certain kind of carbon chemistry. A certain kind of physics. A certain kind of communication from nerve cell to nerve cell that's electrical on one level and chemical on the level beneath it. 
If I say, 'Well, sure, but consciousness could be an abstract phenomenon,' it's true: Consciousness could be. Rust could be an abstract phenomenon. Green-ness could be an abstract phenomenon. Having apples, fruiting out apples in the fall, could be an abstract phenomenon. And it could be that I could make any of those things happen with software. If I believed that rust or fruiting out of apples were abstract, I could tell my graduate students, 'Get to work and write software that I can download that will make my computer rusty,' or 'will make it fruit out in apples.' The proper answer would be, 'We'll try it if you like.' But all the evidence that we have suggests that fruiting out of apples or rusting or being bright green are chemical properties. They have to do with particular kinds of chemistry and physics. They are not random properties that--they are not like gravitational attraction, which itself is associated with mass but which is a fairly abstract property. They are not abstract in that sense. If you ask us to list instances of rusty things that have no iron content and weren't exposed to a certain kind of oxidation, the examples are zero. There is not a single such instance. There is not a single such instance of anything fruiting out in apples that isn't an apple tree, or that doesn't have the bio-chemistry and physics of an apple tree. Nor is there a single instance of anything being conscious or having feelings that is not a human being or an animal very similar to human beings in the complexity of its nervous system and its physiology in general. So, the idea that we could make computers conscious by dint of building the right software is an assertion which is allowable in principle but seems completely random--totally unsupported. 
If you make this assertion--that is, despite the evidence of 100% of all conscious things on earth that have ever been observed: 'I think that some day computers will be conscious'--I think the onus is on you to give me some reason to think that's true. I don't have to prove it's false. All the empirical evidence we have suggests that it's false.

Russ Roberts: Okay. So--

David Gelernter: All the empirical evidence we have suggests that consciousness is something that happens to animals. Not [?] of silicon and not birdcages and not fruit trees, but animals. So, if you think otherwise you've got to explain what makes you think so. And that proof or that argument has never been forthcoming.


Russ Roberts: So, in an essay I wrote recently, an example I gave related to this--we'll put a link up to the essay--is: take some form of artificial intelligence, whether it's a vacuum cleaner, say, that goes around your house and knows to avoid corners or to reorient itself, or some other piece of smart technology. Will it ever have a yearning? Say, a robot that aids you in your daily tasks--that's coming--will such a robot ever yearn to be a driverless car? Will it ever say, 'You know, I just wish I'd been a driverless car'? Now, we have trouble imagining--I have trouble imagining that. So, I, like you, reject that argument. The argument that the other side would give is, 'Well, we just haven't gotten far enough. The brain is just a giant computer. It's a bunch of neurons, a bunch of on-off switches. That's what a computer is. Yes, the chemistry and physics are complicated, but eventually--just like people said we'd never figure out, say, speech recognition or facial recognition, and we've made progress on both those fronts, some quite impressive--it's just a matter of time before we get that yearning feeling, the emotional side to consciousness that you think is only present in carbon-based life.'

David Gelernter: Yeah. You see, it's complete nonsense. It's one non sequitur after another. It's a non-argument. You could say, 'A computer is like a brain because it's a bunch of on/off switches.' But that's absurd. I could say, 'The railway system is like a brain because it's a bunch of on/off switches.' The brain is not a bunch of on/off switches. That is a ridiculous claim. Although certainly people in computation make it often. It's patently absurd. It's a collection of neurons which have very specific chemistries, chemical makeups. The neurons have complex behaviors. They are not merely on or off. They generate chemical signals and pass signals downstream under certain circumstances. The resemblances between a brain and a computer are minimal, trivial, and superficial. There are a million on/off switches in the world. I mean, I could look at my house and say it's nothing but on/off switches. Here I see an on/off switch right here that turns on the light. There are some lights over this desk area. And across the room I see more on/off switches that control the lights somewhere else. And here are more on/off switches for more lights. And in the kitchen there's an on/off switch that has to do with the garbage disposal for some reason; and another on/off switch downstairs that controls the furnace--it's a cut-off for the furnace. So, I could say, 'Look, when you get right down to it, if you really want to understand this in a proper abstract way, what is a house? It's a bunch of on/off switches. It's just like a brain.' And by the same token, I can say, 'What is a brain? It's a bunch of on/off switches.'

Russ Roberts: Yeah, but to be fair--

David Gelernter: It's an absurd abstraction.

Russ Roberts: David, to be fair to the other side--and again, I don't agree with it--I think the leap of creativity that the other side is making, which is a little more fair than the house analogy, is that the machine does many things the brain does. It calculates, right? It doesn't make abstract calculations but it does make algorithmic calculations the same way you and I do in our upscale, up-spectrum moments. So I think that's where it really gets--

David Gelernter: It really does not.

Russ Roberts: Well, that's where it got its selling point, this idea of artificial intelligence: after a while, it's going to do something--

David Gelernter: It does arithmetic. But when you say, 'like we do,' that's just not true. When you do arithmetic, you think about what you are doing. You are aware of what you are doing. You know that what you are doing is correct; or you might know that you went off the track somewhere and your answer is probably going to be wrong and you should go back. You are capable of changing the method you use. You are capable of saying, 'I learned some other way to do long division, and why am I doing this?' You are capable of saying, 'Why should I do this at all? I have a calculator. I have a computer. I don't need to do this.' You are capable of writing numbers and saying, 'Why is a 5 so much more complex than a 1? Why is a 2 relatively complicated but a 7 is simple?' That is your consciousness, your conscious agent, which is radically different from a machine. It's true a computer can do arithmetic; and so can a calculator made out of gears and sprockets, and so can an abacus. All sorts of machines can do arithmetic. But it's got nothing to do with the mind, because they do arithmetic in a way that's so radically different from the way we do. Mainly, they do it in a zombie-like state. They do it unconsciously. They do it without awareness, without the creative ability to understand what they are doing, to change what they are doing, to evaluate what they are doing, to feel what they are doing. You might be doing a long problem, and you get to the final answer and somebody says, 'That's right,' and you are happy. A computer doesn't have that capability. It does arithmetic in a way that isn't at all the way you do, because as far as it's concerned, it's never heard of arithmetic. It has no concept of a number. It doesn't know what plus means. It has no idea what it's doing. It doesn't know that it's manipulating numbers. It might as well be manipulating spreadsheets or designs for furniture or profiles of movie stars in the 1930s. 
It doesn't resemble what you do at all, except in a radically superficial way.

Russ Roberts: So, I'm going to pile on. Now that I've dutifully challenged your view, I'm going to pile on and add to it. Which is: You speak in the book, you write in the book very eloquently about what I would call the 'Aha Moment'--you didn't call it that--but how so many creative moments in science or in literature and elsewhere are just "out of the blue." And I think of Andrew Wiles--one of the most moving things I've ever seen is when Andrew Wiles had "proven" Fermat's Last Theorem, and then that proof turned out to be wrong--and for a long, long time--it was months, I can't remember if it was a year, more than a year, but it was a long, long time--it appeared he had simply failed. So he'd gotten all the accolades and all the glory for solving the greatest mathematical problem of all time. And then it turned out not to be true; and he had no solution to it. And in the documentary about this, he says, 'And then one day I was sitting at my desk, looking off in the distance. I was thinking about--' x, and he suddenly saw the right way to fix the proof. And it's incredibly moving. And it's as if a computer had been working all night trying to solve some problem, and then the electricity gets shut off accidentally; but you turn it on in the morning and it just gets the answer right away. And that just probably isn't going to be--

David Gelernter: Isn't likely[?]--

Russ Roberts: It never happens. In fact, it seems to be impossible.


Russ Roberts: Let's turn to the so-called Singularity. You didn't say why this analogy of the brain to a computer is destructive. Does that have anything to do with your views on the Singularity--this idea that somehow artificial intelligence will outstrip human capability and will somehow become a source of out-of-control robots, out-of-control artificial intelligence? Are you worried about that?

David Gelernter: Yeah, I am. Although the first reason I think this view is destructive is just because it builds ignorance. And nothing that suppresses knowledge and substitutes ignorance can be good. If teachers go around teaching their students, at Yale and a million other universities and high schools all over the landscape, that the mind is like software and the brain is like a computer, it's a falsehood. It's a lie. Not a deliberate lie in a moral sense. It's a falsehood. And not only is it wrong in itself, but it suppresses a search for truth. A few students are going to be smart enough to say, 'Well, wait a minute: it doesn't quite add up.' But most people believe what they're told. Most students believe what they're taught. And those are a lot of minds we're taking out of action by teaching them falsehoods. At least, at least, if such people would say, 'There are some, a majority, who say a mind is like software, but there's a significant minority who say that's nonsense,' that would at least give the mind a little ledge of opportunity. But most teachers are too certain of these falsehoods even to question them. But there's an even greater danger, which is that we start incorporating circuitry--chips, computers--into living human beings. And--

Russ Roberts: This is Transhumanism, now. Which we had an episode with, with Richard Jones. We'll put a link up to it for listeners.

David Gelernter: Yeah. But this is very much on the Singularity agenda, on the Kurzweil agenda. And so, we build an improved human being. And technology itself improves. That's the nature of technology. And so, I can--if I've got $5000 to spend, maybe I can buy an extra 10 IQ (Intelligence Quotient) points for my first kid. But three years later, that same $5000 will buy, you know, 30 extra IQ points. So, I know the older child is strictly less intelligent than his younger brother. He's just not as advanced. And he never will be, unless I operate on him. I mean, maybe his future is constant surgery, as I take out the old chip and slip in a new one; and he puts on a new personality--because of course your IQ affects everything, every aspect of your personality. I mean, that would be a nightmare. That's assuming that he keeps his old personality. Which assumes that each generation of children you produce is smarter than the previous one. And for the first time we have actually obsolete human beings. We have almost planned obsolescence, in the sense that we know perfectly well that the next generation of human-ware, chip-ware, whatever it is, or brain-ware, is going to be more advanced than this generation. This kind of toying with human life is so fundamentally fascist in its inclination--I'm not calling Kurzweil a fascist--there is a philosophical inclination which says our thoughts transcend the mere individual lives of particular human beings; we exist at a higher level, I and my followers, and we can think of much bigger pictures than the ordinary, normal human being can--that is, to me, an unacceptable way to think, a morally evil, unacceptable way to think. From my standpoint, I think of it as un-Jewish, because I am a Jew. 
But I know no Western moral system, whether Christian or philosophical, that will allow this kind of instrumentalist objectification of human life, which is identical, I think, to transhumanism and to the whole drift of the implantation of computers into human beings--the reckless, morally reckless toying with human life in that way.

Russ Roberts: I'm not sure it's going to be stoppable. It doesn't seem to be stoppable, right? If you start by thinking about steroids in sports, and plastic surgery in the arts for actors and actresses--the way we use technology to enhance ourselves, constantly, whether it's the smartphone in our pocket or--it used to be the calculator in our briefcases. It's just very hard to say 'No' to those improvements for human beings.

David Gelernter: Well, yeah. It's true. I mean, I could say that steroids and plastic surgery are still very narrow phenomena, and you've got to be pretty crazy--I mean, you have to be insanely obsessed; you have to be in moral no-man's land in terms of the importance to you of some aspect of your career in order to go that route. I think most people would probably still reject it. I think you're right that it's very difficult to see how transhumanism will be defeated[?]. But I still think it is defeatable. And the only way to do it is to do just what you are doing: to talk about it. I think it will win insofar as we don't think about it. In that case we'll just slide into it. We'll just give in to the Professor Feel-Goods of the future. If we talk about it, we won't do it. The hope of the singularity people is that it won't catch people's attention and they won't get any serious objections, because it won't awaken any serious thought.


Russ Roberts: Let's close with this quote from the book, and I'll let you comment on it. You write

To learn how to communicate with our fellow human beings, young people must turn off Facebook, shut down their computers, and look people in the eye, listen to their voices, watch their gestures. They must look for subtleties and ponder their meanings. They must learn to read, not words (which are easy), but people--and that requires a whole childhood and adolescence to learn. Some people never manage it, although they try; this sort of reading, the important kind, requires intelligence and talent, not just a few years' dogged practice. By allowing children to play with computers when they should be dealing with each other face-to-face, we are damaging the most important learning process of their lives.

I'm very sympathetic to that view. We watched almost no television in our house while my children were growing up. And I thought that was a good thing. And yet, as they got older, and as our digital lives got more vivid and common and ubiquitous, I found it--now they're mostly out of the house--but I found it more and more difficult to keep them away from screens, and more and more face to face. A little bit like this issue we are talking about of transhumanism: very hard for people to resist these temptations. Do you want to react to that?

David Gelernter: I think the process of walling children off is doomed. You just can't do it. It will never work. I mean, when I was a child there were families here and there that said, 'Our kids will not watch any TV,' and it worked while the kids were small--you know, obviously, when they didn't go out to their friends' houses and were easy to command. But children don't stay that way. But I do think that children who have a solid foundation in what you might call a humanistic love of life will never be damaged by the world at large; they will be offered all sorts of attractive [?] and will indulge in some of them--I mean, we grow up and we need to try these things. But their basic beliefs in themselves, their love for their families, and the basic structure of their personalities can't ever be damaged if they emerge from young childhood with a solid, whole personality. It's never going to crack. That's my belief. It's even my observation, to the limited extent I've been able to look around. It's a hopeful and optimistic view: it's got a lot of hopeful optimism as opposed to realism in it. But I think it may be true. I hope it's true.