Continuing Conversation... Gary Marcus on the Future of Artificial Intelligence and the Brain

EconTalk Extra
by Amy Willis

In this week's EconTalk episode, EconTalk host Russ Roberts talks with NYU's Gary Marcus on how artificial intelligence may influence human flourishing and when such effects might become a real concern.

As always, we want to hear what you think.


Check Your Knowledge:

1. How does Marcus define artificial intelligence? Why is Google Translate, for example, not an example of artificial intelligence?


Going Deeper:

2. Of the eight (no, nine) problems with "Big Data," according to Marcus and Ernest Davis, which do you find the most troubling, and why?

3. Toward the end of the conversation, Roberts asks Marcus about possible economic effects that may come as a result of advances in artificial intelligence. How does Marcus respond? What does he leave out? Are there effects beyond employment effects that could be cause for concern? Explain.


Extra Credit:

4. Marcus suggests that the "end game" of artificial intelligence should be a guaranteed basic minimum income. Why do you think he makes this argument, and to what extent do you agree? How do you think James Otteson would reply to Marcus's claim? (You may recall we asked about this in last week's Extra...)

5. Roberts suggests that as people spend more and more time in virtual worlds, we will see a cultural shift in what's regarded as acceptable and praiseworthy. What sort of guidance do you think Roberts (and Adam Smith) would offer with regard to these shifting behavioral guidelines? In other words, how will we know what sorts of virtual activities (or time spent on them) will be "lovely"?



COMMENTS (7 to date)
Don Crawford writes:

Russ and Gary Marcus seemed to share the concern that there might not be enough work to go around in our future. Marcus suggests we might have to give everyone a guaranteed income so that people who can't find work would have enough to live on. If we think about the economic world in the right way, we can see that this wouldn't be a good idea AND it won't become necessary.

What if we still ran around in tribes of 100 people or so? What would be the result of a labor-saving device or invention? Say one person invents a way to hunt and kill game so efficiently that he could easily collect enough food for the whole tribe. Would everyone else in the tribe go hungry? Not at all likely. The super hunter with the excess game would trade some of it to the person who collects the herbs, some to the person who does the cooking, some to the person who makes the clothes, and so on.

Nowadays wealthy people hire gardeners, personal trainers, and cooks, as well as people to ghost-write their memoirs, handle their social media, wash their cars, and so on. As long as the government doesn't get in the way, anyone with extra resources, like our super hunter, will end up using those resources to hire other people to do things that he thinks need to be done. When the wealthy individual says the equivalent of, "Here, give me a hand," the only reason people would say no (and remain unemployed) is if there is a better deal to be had by being on the dole. Humans are endlessly inventive when it comes to finding things that need doing, and in a free market existing resources will be allocated to those tasks--tasks that weren't jobs at all back when we couldn't afford them.

David Abensur writes:

I am a frequent listener of EconTalk (love it). It would be extremely useful for your listeners if you interviewed the great philosopher John Searle from Berkeley on topics such as the making of social reality; the causation of social events (a different type of causation for, say, the First World War compared to the causation of physical events governed by the laws of physics); strong AI; neuroscience; etc. I have read Searle, and he truly changed the way I see the world (different types of ontologies: object, subject, social).

V P writes:

Trying to address Don Crawford:
Your analogy does not necessarily apply. Yes, let's say rich people would hire others at some price X > 0, but labour is not the only input needed to make goods; there are also capital and natural resources. So X will not necessarily be enough to cover rent and food, because those resources could be used more efficiently elsewhere than to incentivize humans. In your analogy only one resource becomes abundant; however, there is nothing to say that AIs could not theoretically be more efficient at everything.

SaveyourSelf writes:

4. Marcus suggests that the "end game" of artificial intelligence should be a guaranteed basic minimum income.

Marcus suggests the basic-minimum-income redistribution scheme is one possible solution to the “problem” of employment displaced by technology.

Why do you think he makes this argument?

He makes this argument because that is how the human brain works. It makes associations. Presently, when people become unemployed they are awarded unemployment insurance. But unemployment insurance is a temporary solution and he worries that AI could lead to a permanent unemployment situation. If unemployment were ever a permanent state, then a more permanent solution would be required. Since unemployment insurance is what we use in the short term to handle labor displaced by technology or competition, it is a very short leap to simply extend it forever—then it is a permanent solution. Right?

Marcus uses association in place of reasoning again around 22:31 in the interview when he says, “And I think it's a reasonable expectation that machines will be assigned more and more control over things. And they will be able to do more and more sophisticated things over time. And right now, we don't even have a theory about how to regulate that.”

He can barely imagine the outcomes or problem the future may present, but he already knows the solution to those problems is regulation. How does he know? Because that is how he sees similar problems addressed in the present.

To what extent do you agree?

Not at all.

Associations are necessary for uncovering causality, but not sufficient. An informal study of unemployment insurance reveals that it does not reduce unemployment. On average, it lengthens the duration of unemployment because it reduces the incentive to find new employment. So, although it is correct to associate it with the problem of unemployment, it is incorrect to assume it solves that problem. It is more accurate to say it makes it worse. And even if it didn't exacerbate the problems it purports to fix, how safe is it to assume that the outcomes evidenced in the short term are the same when applied over the long term? Not safe at all. Look at the economic impact on East Germany after it was…acquired…by the USSR. Initially, East Germany was the economic leader of the Soviet bloc. Apparently the behaviors developed in a market economy have some inertia. But gradually, over the long run, the productivity of East Germany steadily declined until it was just as bad as everywhere else in the bloc. People adapt to their environment, good or bad. That is what they do.

I predict the same thing would happen with a basic-minimum-income guarantee. In the short term it would have no obvious effect. It might even have a brief positive effect, just as you see when unemployment insurance is used for only a couple of weeks. As time stretches on, however, negative outcomes will emerge and worsen over the long run.

Rather than just criticize, however, allow me to propose a novel alternative. Leave people alone. Because, according to John Medina in Brain Rules, “Though we know precious little about how the brain works, our evolutionary history tells us this: The brain appears to be designed to (1) solve problems (2) related to surviving (3) in an unstable outdoor environment, and (4) to do so in nearly constant motion” (pg 4).

So, in the absence of regulation, insurance, redistribution, or a basic-minimum guarantee, I can say with great confidence that we can solve and survive the unknowable problems of the future, because that is what we are designed to do.

Dan Hanson writes:

The notion that artificial intelligence will make human labor obsolete is really just a failure of imagination.

You could have made the same argument during the mechanization of farming. You could even say the situation was much worse - you had a large percentage of the population engaged in making and distributing food, many of whom were poorly educated and who lived far away from cities and modern jobs.

From the perspective of an economist during those times, you could make the claim that mechanization was soon going to result in hordes of unemployable farm workers. If a government followed that logic and provided a minimum income to such people, it would have removed the incentive to reorganize, retrain, move, and otherwise take the steps required to find productive work.

Today we might have tens of millions of 'reservation farmers' making subsistence income on public money, and because the necessary restructuring never took place the original economist's prediction would have been self-fulfilling.

So what kinds of jobs will emerge to replace those lost to AI? Who knows? The market is complex and moves into the future almost on a random walk. Maybe we'll all be making virtual products for virtual worlds. There are already people making incomes designing dresses and other objects for purchase in virtual worlds.
Or maybe we'll find a completely different way to provide value to each other.

Knowing what jobs will spring up to accommodate workers replaced by AI is about as likely as being able to predict in 1960 that a laid-off telephone switchboard operator would one day be employed as a webmaster or desktop publisher.

V P writes:

"The notion that artificial intelligence will make human labor obsolete is really just a failure of imagination."
It may or may not be. Is there something useful that the average human can do and a future machine definitely can't? I think not. Imagination is just imagination.

SaveyourSelf writes:

Dan Hanson wrote, "...and because the necessary restructuring never took place the original economist's prediction would have been self-fulfilling."

That is a remarkably strong argument.

Comments for this podcast episode have been closed