In this futuristic episode, Roberts chatted with philosopher Nick Bostrom about the promise and potential dangers of superintelligence: smart machines that Bostrom believes will one day radically outperform humans.

Are you as concerned as Bostrom about these supermachines? Do you share Roberts’ skepticism about their danger?


Check Your Knowledge:

1. Bostrom notes two ways we might get to a world that features superintelligence. What are they, and which of the two does Bostrom believe to be more likely? Which do you believe to be more likely?

Going Deeper:

2. Roberts and Bostrom discuss two ways that superintelligence might be “kept in a box,” or controlled: capability-control and motivation-selection methods. How do these two strategies differ, and which one do you think would be more effective? Explain.

3. Bostrom advocates that any superintelligence under development should satisfy the common-goods principle. What does he mean by this? Why is Bostrom so sanguine about the possibility of meeting this criterion, while Roberts remains skeptical? Whose view do you find more compelling and why?

Extra Credit: Is superintelligence cause for concern or celebration?

4. How does Bostrom’s notion of superintelligence compare to Robin Hanson’s notion of the singularity? Which is the more optimistic vision of the future?

5. At about the 17-minute mark, Roberts suggests that Bostrom’s view of superintelligence is akin to how some view God: omniscient and omnipotent. How does Roberts justify that comparison? Why does he bring in the example of “big data” to make his case?

6. Several times during the conversation, Bostrom notes that “we” have the potential to design the superintelligence of the future. What sort of collective “we” is Bostrom referring to? Though Bostrom is talking about technology, not politics, to what extent does he ignore “the challenges of the political we”?