It seems obvious that moral artificial intelligence would be better than the alternative. Can we make AI’s values align with ours, and would we want to? These are the questions underlying this conversation between EconTalk host Russ Roberts and psychologist Paul Bloom.

Setting aside (at least for now) the question of whether AI will become smarter than humans, what benefits would a moral AI provide? Would those benefits be outweighed by the potential costs? Let’s hear what you have to say! Please share your reactions to the prompts below in the comments. As Russ says, we’d love to hear from you.

 

 

1- How would you describe the relationship between morality and intelligence? Does more intelligence necessarily imply more morality, in either humans or AI? Can more intelligence offer a greater chance at morality? What would AI have to learn to develop a human-like morality? How much of (human) intelligence comes from education? How much of morality?

 

2- Where does (human) cruelty come from? Bloom suggests that intelligence is largely inborn, though continually influenced later, while morality is largely bound up in culture. To what extent would AI need to be acculturated for it to acquire some semblance of morality? Bloom reminds us that “… most of the things that we look at and we’re totally appalled and shocked by, are done by people who don’t see themselves as villains.” To what extent might acculturation create cruel AI?

 

3- Roberts asks: since humans don’t really earn high marks for morality, why not use AI’s superintelligence to solve moral problems, a sort of data-driven morality? (A useful corollary question he poses: why don’t we make cars that can’t exceed the speed limit?) Bloom notes the obvious tension between morality and autonomy. How might AI help mitigate this tension? How might it make such tension worse? Continuing with the theme of morality versus autonomy, where does the authoritarian impulse come from? Why the [utopian] human urge to impose moral rules and tools on others? Roberts says, “I’m not convinced that the nanny state is merely motivated by the fact that, I want you not to smoke because I know what’s best for you. I think some of it is: I want you not to smoke because I want you to do what I want.” Is this a uniquely human trait? Might it be a trait transferable to AI?

 

4- Roberts says, “The country I used to live in and love, the United States, seems to be pulling itself apart, as is much of the West. That doesn’t seem good. I see a lot of dysfunctional aspects of life in the modern world. Am I being too pessimistic?” How would you respond to Russ?

 

Bonus Question: In response to Roberts’ question above, Bloom responds, “I have no problem conceding that economic freedom writ large has helped change the standard of living of humanity by the billions. That’s a good thing. I don’t have any problem with the idea that there’s cultural evolution, and that’s a good thing, that much of it’s been productive and means people lead more pleasant lives. I think the question is whether the so-called Enlightenment Project in and of itself is the source of all that.”

To what extent do you agree with Bloom? This question also arose recently in this episode of The Great Antidote with David Boaz, who insists that not only is the Enlightenment responsible for such positive change, but that it is an ongoing project. Again, to what extent do you agree?