Where Are You on the Spectrum?

EconTalk Extra
by Amy Willis

Why do we tend to think of the mind as static--whether throughout our lives or even throughout the day? What do our brains have in common with computers? David Gelernter, Yale professor of computer science and author of The Tides of Mind, joined EconTalk host Russ Roberts this week to explore the nature of consciousness.

So brush the dust off your Up spectrum (or is it your Down spectrum?) and share your thoughts with us. Or share your vacuum cleaner's thoughts.* As always, we love to hear from you.

1. How does Gelernter describe the journey our mind makes between its Up and Down spectrums over the course of the day? Does this match your experience? If so, how do these changes affect the way you think and the actions you take at different points throughout the day?

2. How similar are Roberts's and Gelernter's conceptions of "the hard problem of consciousness"? Specifically, how does each see the future of artificial intelligence? To what extent are emotions programmable, or understandable in physiological terms?

3. Gelernter says, "toying with human life is fundamentally fascist." What does he mean by this, and why does he find the singularity to be such a dangerous idea? How does his conception of "the rapture of the nerds" compare to that of Richard Jones? (Recall Roberts's conversation with Jones in this April episode.) What's your take--is the singularity akin to the rapture, an immoral objectification of human life, or something else entirely? (And to what extent should the singularity be on our near-term radar?)

4. How does Gelernter's illustration of the "tides of mind" explain why analytical thought seems to be preferred over storytelling in our age? Roberts and Gelernter disagree on how challenging it is for people to mix Up and Down spectrum modes of communication. What do you think?

* Related to Question 2...Roberts asks both on Medium and in this week's conversation, "Could a smart vacuum cleaner feel sadness at not having a chance at being a driverless car?" What does he mean by that, and just how outlandish is this question?

COMMENTS (3 to date)
Mark Dunnu writes:

I haven't finished listening to the podcast, but had to respond to some of Gelernter's claims comparing consciousness to "rustiness" or "greenness".

Of course you cannot find something rusty with no iron content, because rusty is the word we use for oxidized iron! You may as well argue that you cannot find wet things without water molecules in them or wooden things without cellulose!

The comparison to consciousness--a deep and not well-defined philosophical concept--is plain ridiculous!

Before jumping into the deeper water by claiming that consciousness is only a property of "humanlike" creatures, let us start with life itself - a much simpler concept. Would he (Gelernter) also claim with full conviction that since the only examples of life we know are carbon based, no other molecule could form the basis for life?

All the examples given for why we don't expect something to happen involved elementary physical processes, for which we can work out the (gasp) mathematics more or less analytically--not every step, but the holes are relatively small. Comparing that to our analytical understanding of even biology--much less the mind, where the holes are miles wide--is ludicrous on every level.

There is a reason mathematics is gaining such prominence: it allows people to speak in exact terms instead of throwing poor analogies to the wind.

Daniel Barkalow writes:

I believe that we could construct a vacuum cleaner that could wish it were a self-driving car. But I don't think we could do so accidentally, because consciousness is difficult. And I don't think we would do it intentionally, because it would make it a worse vacuum cleaner. Part of what makes a robot vacuum cleaner so effective is that it doesn't get distracted, and only has abilities related to cleaning. Even all of the improvements we could make in its situational awareness (with respect to behavior) wouldn't involve giving it anything that would make it closer to being conscious, any more than they would make it human-shaped. Also, I think there's been practically no work on artificial consciousness in the past 30 years, and it's really not relevant to any current practical problems, so I don't think we're getting any closer to making a conscious robot over time. Maybe when we've automated all the high-skill intellectual jobs, we can mess around with figuring out consciousness as a matter of pure intellectual curiosity.

brit cruise writes:

Enjoyed the spectrum analogy, still thinking about it. I do feel the down spectrum can be activated outside of "falling asleep"--mainly by activating the default mode network. I also find it interesting that the "down spectrum" is the most difficult to automate.
