It seems obvious that moral artificial intelligence would be better than the alternative. Can we make AI’s values align with ours, and would we want to? This is the question underlying this conversation between EconTalk host Russ Roberts and psychologist Paul Bloom.
Setting aside (at least for now) the question of whether AI will become smarter, what benefits would a moral AI provide? Would those benefits be outweighed by the potential costs? Let’s hear what you have to say! Please share your reactions to the prompts below in the comments. As Russ says, we’d love to hear from you.
1- How would you describe the relationship between morality and intelligence? Does more intelligence necessarily imply more morality, either in humans or in AI? Can more intelligence offer a greater chance at morality? What would AI have to learn to develop a human-like morality? How much of (human) intelligence comes from education? How much of morality?
2- Where does (human) cruelty come from? Bloom suggests that intelligence is largely inborn, though continually influenced later, while morality is largely bound in culture. To what extent would AI need to be acculturated for it to acquire some semblance of morality? Bloom reminds us that, “… most of the things that we look at and we’re totally appalled and shocked by, are done by people who don’t see themselves as villains.” To what extent might acculturation create cruel AI?
4- Roberts asks, since humans don’t really earn high marks for morality, why not use AI’s superintelligence to solve moral problems- a sort of data-driven morality? (A useful corollary question he poses is why don’t we make cars that can’t go over the speed limit?) Bloom notes the obvious tension between morality and autonomy. How might AI help mitigate this tension? How might it make such tension worse? Continuing with the theme of morality versus autonomy, where does the authoritarian impulse come from? Why the [utopian] human urge to impose moral rules/tools on others? Roberts says, “I’m not convinced that the nanny state is merely motivated by the fact that, I want you not to smoke because I know what’s best for you. I think some of it is: I want you not to smoke because I want you to do what I want.” Is this a uniquely human trait? Might it be a trait transferable to AI?
5- Roberts says, “The country I used to live in and love, the United States, seems to be pulling itself apart, as is much of the West. That doesn’t seem good. I see a lot of dysfunctional aspects of life in the modern world. Am I being too pessimistic?” How would you respond to Russ?
Bonus Question: In response to Roberts’ question above, Bloom responds, “I have no problem conceding that economic freedom writ large has helped change the standard of living of humanity by the billions. That’s a good thing. I don’t have any problem with the idea that there’s cultural evolution, and that’s a good thing, that much of it’s been productive and means people lead more pleasant lives. I think the question is whether the so-called Enlightenment Project in and of itself is the source of all that.”
To what extent do you agree with Bloom? This question also recently arose in this episode of the Great Antidote with David Boaz, who insists that not only is the Enlightenment responsible for such positive change, it is an ongoing project. Again, to what extent do you agree?
READER COMMENTS
Frank C Graves
Apr 23 2024 at 7:41pm
Thanks for the provocative question, which, as you alluded, is likely far from just hypothetical. I see three related issues that were missing from the conversation, which I think help answer whether one would or should participate with an AI avatar of a loved one or a famous person.
First, and overarching, is the question of whether there would be genuine humanity of some kind in the avatar, in the sense that it would have concerns or needs beyond just being a conversationalist with you. In essence, do we think it could have consciousness or is it just a complex conditional playback machine for discussing events in the past of the deceased real person (and reacting in past ways to new information you give it)?
Second, there was a suggestion that we might prefer some kind of adoring perfection in the avatar — which indeed might be fun for a cyberdate or two, but which misses the fact that any good relationship is intensely about giving as well as receiving. You love someone in large part because they have sometimes needed, and allowed you, to help them and they grew through that engagement (as did you — putting in a plug for interactions with real people!).
Third, our relationships are strong in large part because we share our new and amazing (or sad) experiences with each other. It is hard to see how the avatar of my spouse could travel with me to a new country or city and share the experience of the sights, food, etc. Until they solve that aspect of AI, I think one’s avatar would become sort of stale.
That said, I would like to have dinner with Adam Smith. I think this works best when the avatar is for episodic intellectual engagement only.