Continuing Conversation... Nick Bostrom on Superintelligence

EconTalk Extra
by Amy Willis

In this week's futuristic episode, Roberts chatted with philosopher Nick Bostrom about the promise and potential dangers of superintelligence: smart machines that, Bostrom believes, will one day radically outperform humans.

Are you as concerned as Bostrom about these supermachines? Do you share Roberts' skepticism about their danger? Wherever you fall, share your thoughts with us in the comments. As always, we love to hear from you.


Check Your Knowledge:

1. Bostrom notes two ways we might get to a world which features superintelligence. What are they, and which of the two does Bostrom believe to be more likely? Which do you believe to be more likely?


Going Deeper:

2. Roberts and Bostrom discuss two ways that superintelligence might be "kept in a box," or controlled: capability-control and motivation-selection methods. How do these two strategies differ, and which one do you think would be more effective? Explain.

3. Bostrom advocates that any superintelligence under development should satisfy the common-goods principle. What does he mean by this? Why is Bostrom so sanguine about the possibility of meeting this criterion, while Roberts remains skeptical? Whose view do you find more compelling and why?


Extra Credit: Is superintelligence cause for concern or celebration?

4. How does Bostrom's notion of superintelligence compare to Robin Hanson's vision of the singularity? Which is a more optimistic vision of the future?

5. At about the 17-minute mark, Roberts suggests that Bostrom's view of superintelligence is akin to how some view God: omniscient and omnipotent. How does Roberts justify that comparison? Why does he bring in the example of "big data" to make his case?

6. Several times during the conversation, Bostrom notes that "we" have the potential to design the superintelligence of the future. What sort of collective "we" is Bostrom referring to? Though Bostrom is talking about technology, not politics, to what extent does Bostrom ignore "the challenges of the political we?"

COMMENTS (14 to date)
Chris writes:

Very interesting podcast, although I think the core debate is somewhat unknowable. Are all of the bad things we imagine 'learned' or innate to who we are as people and animals?

You can see why evolution would produce innate emotions and drives such as fear and greed: survival of the fittest, eat or be eaten. I'm not sure that logic applies to computer artificial intelligence. The 'computer' is not going to be eaten by a lion or starved by competition for food. There is probably a low correlation between intelligence and the willingness to harm others for your own benefit; it often isn't the dictator who creates the weapon of mass destruction. If it is true that artificial intelligence won't 'learn' destructive human emotions, then it would be a mistake to program human emotions into the computer in the first place. Why introduce the risk?

This could make for a fantastic movie as mentioned in the podcast!

Will writes:

To think we could engage in capability-control or motivation-selection methods may be too optimistic. The whole idea of machine learning is to feed the appropriate training data to the appropriate model using the appropriate training algorithm, so that the model teaches itself to "reason" out solutions to the class of problems you're training it for. If we develop an advanced enough model and training algorithm, it may create within itself something we would consider a negative trait, in order to reason over a wide class of problems. Since the model is something of a black box, it would be difficult to select out specific abilities. We will most likely develop an algorithm that trains the model without our knowing exactly how the model works.
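A minimal sketch of what I mean, with toy data and made-up numbers (nothing here is from the episode): after training, the model's "behavior" is encoded only in opaque parameters, with nothing labeled as a goal or trait that one could select out.

```python
import random

# Toy data: the model must learn y = 2*x + 1 from examples alone.
data = [(x, 2 * x + 1) for x in range(-10, 11)]

# Two opaque parameters; nothing here is labeled "goal" or "trait".
w, b = random.uniform(-1, 1), random.uniform(-1, 1)
lr = 0.01  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        # Gradient descent: nudge parameters to reduce squared error.
        w -= lr * err * x
        b -= lr * err

# The learned behavior now lives only in these raw numbers,
# approximately w = 2.0 and b = 1.0.
print(round(w, 2), round(b, 2))
```

Scale this up to millions of parameters and the difficulty of inspecting or editing specific "abilities" becomes clear.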

Cris Sheridan writes:

Chris, the principles of evolution and survival of the fittest already apply to computer intelligence and have been directly implemented in a variety of contexts, particularly in the financial markets through algorithmic trading. That said, I do believe there is an upper bound to machine intelligence (as I believe Russ would also agree), and that it would be incorrect to conceptualize this process of "machine evolution" as an isolated development, without taking into account the cybernetic interplay between humans and machines, not just at the individual level but across society at large. This is where I think Nick Bostrom's view of superintelligence is incomplete. Although he raised this possibility at the beginning of the interview, i.e. superintelligence as a form of "collective intelligence", he (like many others) has chosen not to pursue that line of thinking, focusing perhaps too narrowly on superintelligence from the vantage point of the machine or AI rather than from the more accurate cybernetic standpoint that Kurzweil and others have expressed.

In essence, we are witnessing the birth of a new ontological entity that is superintelligent, but it is both human and machine--what we might call a synthetically human, virtually divine superorganism akin to Hegel's articulation of the "world-self" or Weltgeist (see http://mises.org/library/hegel-and-man-god).
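To make the first point concrete, here is a toy genetic algorithm, purely illustrative and not drawn from any real trading system: selection keeps the fittest candidates and mutation supplies the variation.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Fitness: how close a candidate number is to a hidden target.
TARGET = 42.0
def fitness(x):
    return -abs(x - TARGET)

# Start from a random population of candidate "genomes".
population = [random.uniform(0, 100) for _ in range(20)]

for generation in range(100):
    # Selection: keep the fittest half ("survival of the fittest").
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Reproduction with mutation: offspring are perturbed copies.
    offspring = [x + random.gauss(0, 1.0) for x in survivors]
    population = survivors + offspring

best = max(population, key=fitness)  # ends up very close to TARGET
```

Nothing in the loop "knows" the answer; selection pressure alone drives the population toward it, which is exactly the evolutionary dynamic I mean.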

mark e writes:

This was truly an amazing interview.

I started reading Bostrom's book a few days ago, and I became an insta-fan of his after reading this passage from his Introduction...

"Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions. I have gone to some length to indicate nuances and degrees of uncertainty throughout the text— encumbering it with an unsightly smudge of “possibly,” “might,” “may,” “could well,” “it seems,” “probably,” “very likely,” “almost certainly.” Each qualifier has been placed where it is carefully and deliberately. Yet these topical applications of epistemic modesty are not enough; they must be supplemented here by a systemic admission of uncertainty and fallibility. This is not false modesty: for while I believe that my book is likely to be seriously wrong and misleading, I think that the alternative views that have been presented in the literature are substantially worse."

This statement sounds like something Hayek would have soundly endorsed and has become my personal creed in many of my social network profiles and discussion threads.

Amy Willis writes:

@Will, interesting...admittedly I understand precious little about the tech involved here...Is your point similar to Bostrom's point regarding the way in which humans would insert preferences into machines? If so, does that mean that we really can't have any control over the value/preference systems the machines would use, contrary to what I thought I understood Bostrom to be saying?

Lio writes:

Stephen Hawking has just warned: "Artificial intelligence could end human race"

I do not think so, but who really knows?

Amy Willis writes:

@Lio, I noted the same story re: Hawking. Here's the link.
Is he more or less convincing than Bostrom?

CrisisMaven writes:

There is no such thing as "radically outperform humans in the future". Machines won't replace the Einsteins, Heisenbergs, and Plancks, nor the Rembrandts, Picassos, or Michelangelos. The horse was supplanted by the car because the car offers more comfort and economy. People still tend to walk, though; they even run a million times more marathons (or so it seems) than when the buggy was in vogue. So they will still opt to think for themselves and foil any Turing machine's tricks by simply making a joke whose point the machine can't get. And yes, I don't try to outrun a Boeing, but then I don't use one to go about the house.

Anthony Perry writes:

Bostrom says that the superintelligent entity must obey the laws of physics. I have not read his book, but the discussion suggests his thinking gives no consideration to the immaterial or to existence outside the observable universe. That seems to me a serious flaw in trying to philosophize about states of existence so remote and distant from our present one.

Another concern about the discussion is that a human is more than a brain. Trying to match the body's other physical functions, of which the brain is one, as they have developed in the context of the earth's environment would add complexity to the project. Humans have physical characteristics as well as cognitive abilities that have contributed to our success: our relative size, mixture of sensory organs, prehensile thumb, self-healing ability, and so on. We will advance in these regards too, so our future development may well be biological rather than digital and mechanical.

Can a machine have real consciousness? What about mood. Sensation. Love. Desire to reproduce. Feelings of protectiveness for children and family. Self-interest. Adaptability. Why do we have these things? Are we just conglomerations of chemicals or machines?

Cris Sheridan writes:

@lio Elon Musk has also voiced his concerns over AI/superintelligence/Skynet on many occasions. Via CNET: Musk has compared the future of AI to the "Terminator" series and nuclear weapons, and has likened developing it to "summoning the demon." (link to article)

Mike Laursen writes:

We cannot tell a superintelligent computer exactly what the human race wants, but suppose we program it to think of us all as customers to be satisfied.

Imagine the computer conducting customer satisfaction focus groups or using "lean startup" techniques, such as A/B testing, to try to suss out what people really want.

I suppose this is the point where Rod Serling would give the story an ending where the A.I. gets to know what we want a little too well, and the viewer ends up disgusted at human wants brought to their unfettered extreme.
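A toy simulation of that A/B idea, with invented satisfaction rates (all numbers and names here are made up for illustration): show users two variants and keep whichever one they rate higher.

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Hypothetical responses: 1 = satisfied, 0 = not satisfied.
# Variant B has a slightly higher assumed true satisfaction rate.
def show_variant(variant):
    p = 0.60 if variant == "A" else 0.70
    return 1 if random.random() < p else 0

# Randomly assign each of 1000 simulated users to a variant.
results = {"A": [], "B": []}
for _ in range(1000):
    variant = random.choice(["A", "B"])
    results[variant].append(show_variant(variant))

# Pick the winner by observed satisfaction rate.
rate = {v: sum(r) / len(r) for v, r in results.items()}
winner = max(rate, key=rate.get)
```

The machine never needs to be told what people want; it just iterates toward whatever measurably satisfies them, which is precisely where the Rod Serling ending comes in.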

Cris Sheridan writes:

@Mike Laursen, I like how you think. Certainly there's a bit of Brave New World in all of this!

Andrew writes:

If a super intelligent being does take over, I hope it decides to make pizzas instead of paper clips.

That would be a tyranny of our machine overlords that I could get on board with.

Ron Crossland writes:

Bostrom is an interesting person with some sustained thinking about the matters of machine intelligence.

I have difficulty with his initial premises. He extrapolates future machine intelligence from machine technology that is currently changing and therefore hard to predict. The idea that a paper-clip-making machine could possess the intelligence to master the universe's resources yet lack the ability to inspect its own goals does not fit my definition of intelligent. Intelligence includes a fundamental learning-heuristic ability, which this example showcases in every way except the ability to learn about the futility of certain goals.

Our predictive accuracy concerning future machine intelligence is far less than our current ability to predict dart throwing, in the dark, with a randomly moving dart board. Twenty years ago, when the Internet was a baby, only the daring were predicting that search engine algorithms of the kind we use every day in 2014 were within the bounds of feasibility.

Our understanding of the upper limits of machine intelligence evolves. Our view today will be superseded by 2034.

Let's replay this interview 20 years from now.

Comments for this podcast episode have been closed