Amy Webb on Artificial Intelligence, Humanity, and the Big Nine
Mar 11 2019

Futurist and author Amy Webb talks about her book, The Big Nine, with EconTalk host Russ Roberts. Webb observes that artificial intelligence is currently evolving in a handful of companies in the United States and China. She worries that innovation in the United States may lead to social changes that we may not ultimately like; in China, innovation may end up serving the geopolitical goals of the Chinese government, with some uncomfortable foreign policy implications. Webb's book is a reminder that artificial intelligence does not evolve in a vacuum--research and progress take place in an institutional context. This is a wide-ranging conversation about the implications and possible futures of a world where artificial intelligence is increasingly part of our lives.


READER COMMENTS

Ajit
Mar 11 2019 at 12:55pm

Very thought provoking episode. I have the following thoughts:

1) China reminds me of the Soviet Union and post-Independence India (to a lesser extent). Namely, an interventionist government trying to steer the direction of the country because it believes it can perceive what the technological future will be. This turned out to be wrong back then and is likely wrong today. Yes, ML is the frontier topic today, but who knows if it will be in 10-20 years. One could easily see the next hot industry being in biotech, health care, or a number of other industries we have yet to imagine. I’m skeptical that forcing everyone to become an ML expert means society as a whole becomes better off. No one thinks we should be giving chemical engineering textbooks to kindergarteners.

2) The power of the G-Mafia seems overblown to me. I bet the Big 3 in Detroit felt like unshakeable monopolies back in the 1950s as well. Furthermore, there are ready alternatives that aren’t so inferior. If you don’t like Google, use Bing. If you don’t like Amazon, there’s eBay. Or if you think Amazon’s cloud hosting costs too much, there are lots of alternatives.

3) I am with Russ in his pessimism about China. Even if it succeeds at making the world’s best nanny-state tool – who would want to live there? The Soviets learned a painful lesson: your social control experiment won’t work if the lab rats keep finding a way to escape. Maybe the current generation will find it palatable to live in such a society – but I bet a lot would happily migrate out to Korea, Japan, or the United States. And certainly few immigrants would willingly come to China under those terms. I would argue the greatest asset the United States has ever had relative to other countries is its willingness to attract immigrants.

4) One reason China’s forced emphasis on ML won’t come back to bite them – open source. Right now, if you are willing, nearly all the ML tools and algorithms you could ever need are available online with incredible documentation. Stack Overflow is one of the most invaluable tools for a tech team.

Kevin Remillard
Mar 11 2019 at 1:27pm

Social credit systems exist in China – and now you too can pay to see yours in the United States.

“See & Improve Your Reputation Score. Check Out Anyone Else’s”

“Reputation is more important than credit. Only MyLife provides public Reputation Scores based on public information gathered from government, social, and other sources, plus personal reviews written by others.”

[Quote marks added for clarity. Kevin is quoting an ad for reputation services. He’s not advertising that particular service. —Econlib Ed.]

Kevin D Remillard
Mar 13 2019 at 6:29pm

Yes, thanks.  OY!

Eric
Mar 11 2019 at 1:49pm

Nice episode. But I would argue that there really isn’t much competition in the United States, and won’t be anytime soon. Even though there are different choices, the Sand Hill Road and Wall St. investors are all pulling the strings, in many cases serving on the boards of directors across the various “competitive” companies. Recall that Eric Schmidt was on Apple’s board, for example. And all the G-Mafia companies hold patent portfolios that will ensure a Mexican standoff among each other while blocking out any upcoming competitors. The VC money doesn’t want to fund the next Google; it is looking for the next Google acquisition.

James Liu
Mar 11 2019 at 5:16pm

Wait, the majority of Americans use Apple products? Uh, no. Maybe culturally it feels that way. It makes me slightly worried about how perceptive this futurist is if she is going on gut feel about the prevalence of iPhones.

Amy Webb
Mar 15 2019 at 8:57am

Hey James. Amy Webb here. Regarding Apple/Americans — I think that’s a mischaracterization of what I said. Apple has a high penetration in some markets for some products, certainly not all.

Rick
Mar 11 2019 at 6:47pm

The non-profit alternative is (essentially) the open source movement.

Unfortunately it’s not going to discipline Facebook until either contributing is dramatically easier (friendlier programming systems) or the median internet user gets dramatically more savvy.

Nicholas E Goldwater
Mar 15 2019 at 10:45am

Let’s not confuse terms. Open source is what platforms like Facecrack and Tweaker are built on. They have taken free software and made it essentially proprietary, because they offer a service, not a tangible product.

If there is going to be a ‘transparent’ option, free software is most likely the best option all around.

Jeremy
Mar 11 2019 at 8:30pm

Thank you for an excellent discussion. Quick note: If we want to inject competition into the industry, we need to diminish the power of the patent system, which is allowing technology firms and lawyers to ‘build a moat’ around business models before future firms can even materialize. The large technology companies would love more regulation and protection (including things like ‘Net Neutrality’), which diminishes the likelihood of the next garage company disrupting everything. Ask yourself: What do Facebook, Google, and Amazon fear? It’s not more regulation or antitrust.

Jop
Mar 13 2019 at 4:39am

Would you mind elaborating on why the big tech companies would embrace Net Neutrality?

Jeremy
Mar 14 2019 at 1:35pm

Well, they don’t support ‘Net Neutrality’ out of altruism. In general, ‘Net Neutrality’ regulation requires communications companies to treat all bandwidth equally (among other things). Any type of new regulation increases the fixed costs of compliance and reduces the likelihood of new entrants, competitor products, and innovation (just look back to the days of the heavily regulated phone companies).

In some cases, these technology companies who are supporting the regulation have attempted to launch their own competitor products in various markets. The Googles, Facebooks, Amazons of the world have the scale to easily spread new compliance fixed costs across a large number of customers (Joe’s garage startup does not). 

Again, ‘Big Tech’ doesn’t need to worry too much about new regulations. They worry about becoming the next Myspace/AOL/Walmart/etc.

I’m sure there is a link to the ‘Bootleggers and Baptists’ theory discussed on previous EconTalk episodes.

Net Neutrality = public interest (‘Baptists’) who want free/cheap/price-controlled internet + profiteering businesses (‘bootleggers’) which depend on the internet.

Marie
Mar 12 2019 at 11:02am

When did “regulation” become such a dirty word? It seems clear to me that the “non-profit” Russ suggests needs to be our government. We need a Cabinet position in AI, one that hires the smartest people and pays them well. We don’t have to become anything like China, and we never will, but surely some kind of hybrid is required now. And yes, it needs to be in some sense global, especially when gene editing is being considered.

C W Yong
Mar 12 2019 at 11:22am

It has been a most fascinating episode. Thank you, and congrats!

There is much I tend to agree with Ms Webb about regarding the beginning shift towards a dystopian future. However, the example she gave about Amazon halting your ability to order popcorn because you did not exercise enough sounds rather implausible to me. What would be the incentives for a profit-seeking giant to do that? This particular “danger” would seem much more likely with a state player.

Jop
Mar 13 2019 at 4:41am

I think her logic as to why Amazon would prevent you from making popcorn is that they will also be in the health insurance business, and therefore have an incentive to keep their customers healthy; and they will have the means to do so via their platform of products and services, delivering the food and building the microwaves. At least, that’s how I read it.

Amy Webb
Mar 15 2019 at 9:00am

Jop has it right. There would be an economic disincentive to lock us out of devices that facilitate consumption: if an Amazon microwave prevents us from popping popcorn, we won’t buy more. I was referencing tie-ins to health tech and wearable devices, which could become part of a broader Amazon (or Google, or Apple) ecosystem. Factoring in our health data, there would be an economic incentive to keep us trim and fit. Or… as would likely be the case… it might just result from someone trying to “optimize” us for our best lives who didn’t see this as a possible negative downstream implication.

Jonas J
Mar 24 2019 at 12:52pm

My guess would be that it’s the other way around – the popcorn and other unhealthy habits will quickly be reflected in higher insurance premiums.

Interestingly this could flip the information asymmetry around – with the AI supported insurance provider knowing more about our health than the insured. Probably not to the benefit of the consumer.

John K
Mar 12 2019 at 11:32am

As for a solution for how to turn Facebook into a public good, couldn’t the US taxpayers buy the company from the stockholders and turn it into a nonprofit? What would happen if Zuckerberg actually proposed this?

Todd Kreider
Mar 12 2019 at 12:40pm

Webb says that while there are other large companies “that are helping to grow the ecosystem, overwhelmingly these are the 9 companies that we ought to be paying attention to.” This isn’t correct and doesn’t present an accurate view of what is happening in A.I.

Samsung, the second largest tech company in the world – and not Chinese or American – wasn’t included despite years of research in A.I., including the recent opening of three new A.I. centers that will employ 1,000 A.I. researchers.

Sony is now collaborating with a top robotics engineering school, Carnegie Mellon University; Panasonic has teamed up with the A.I. home systems company Caspar; Honda now has A.I. partnerships with DARPA, M.I.T., and the University of Washington; and Toyota has created Toyota A.I.

London-based DeepMind, which attracts a lot of talent from Europe, may be the top A.I. research company in the world. (Webb might be including them with Google, although it is a separate company that is also under Alphabet.)

Last week, ZDNet listed the five leading companies by A.I. patents acquired over the decades: IBM (U.S.), 8,300; Microsoft (U.S.), 5,900; Toshiba (Japan), 5,200; Samsung (South Korea), 5,100; NEC (Japan), 4,400. SCGG (China) has had the highest growth in patents since 2013.

The world of A.I. is not the U.S. versus China.

Steve Hardy
Mar 12 2019 at 2:37pm

I saw Ms. Webb’s interview on NPR Nightly News where she was much more adamant about wanting the government to control AI, similar to what is happening in China. I suspect that some busybody in the government is more likely to keep me from buying more popcorn than Amazon, which is in the business of selling popcorn. I also wonder why so many are concerned about companies that give us “free” services sharing the data that we have volunteered to give them, which is then used to target advertising for products in which we might have an interest. Perhaps the solution is for Facebook, Google, etc. to give users the choice of the current model or keeping their data private and paying for the service.

Amy Webb
Mar 19 2019 at 3:40pm

Steve, I’m not sure what program you’re referring to. I’ve been on several NPR shows discussing AI and the book — but there is no NPR Nightly News. I am not in favor of regulations, and I’ve been consistent on that point.

Dave
Mar 12 2019 at 3:34pm

As much as I dislike his politics, and nothing against Amy Webb…
It boils down to what Cory Doctorow has been saying and writing over the last two decades or so.

Michael Joukowsky
Mar 12 2019 at 5:25pm

I do not agree with Mr. Roberts or with Ms. Webb. Regulation is always the wrong answer. Who will make the decision? Why will their decision be better than the individual’s? Companies that live in competition are more important because they have to enter the capitalist marketplace, which then chooses what it values. A regulator will destroy innovation and devalue our knowledge. Making one of the G-Mafia into a non-profit is not going to work for the very same reasons. There is something missing here, and that is the wonder of the world and how we continue to innovate and survive. The only way that this process can be destroyed is by destroying free speech, freedom, and ideas coming to market.

bonaparte
Mar 12 2019 at 5:30pm

The speaker keeps saying that she is not a fan of regulation. However, her admiration for the Chinese R&D ecosystem and her suggestion of a global supervisory body reveal the opposite.

Peter
Mar 12 2019 at 10:42pm

Ms. Webb’s view of the Chinese AI ecosystem is very different from the one that Kai-Fu Lee describes in the early chapters of his book ‘AI Superpowers: China, Silicon Valley, and the New World Order’.

His belief is that the maniacal entrepreneurial competition between companies is one of the things that makes Chinese tech companies such a threat.

His discussion of how the Chinese government shapes investment in new technology ventures (like AI) is also very different from the conventional narrative.

Kai-Fu Lee did his doctoral dissertation on speech recognition at Carnegie Mellon in 1988; he is a former employee of Apple, Silicon Graphics, Microsoft, and Google, and currently runs a venture capital firm that invests in Chinese AI firms.

So he’s definitely a go-to guy for a first-hand perspective on Chinese and Silicon Valley AI.

Seth
Mar 20 2019 at 6:28pm

+1

Kai-Fu Lee paints a very different picture of the competitive environment in China. Also, the anxiety about an Asian competitor who will beat us with top-down planning is very reminiscent of the “Fifth Generation” project in the 1980s, when *Japan* was going to bury us with its central planning.

That said, China is very aggressively pursuing the technology with absolutely no regard for privacy or liberty. Ms. Webb is right to raise the alarm about this tendency in both countries.

John Cheong
Apr 1 2019 at 7:56pm

Interesting episode – however, it would be much better if there were a more rounded description of how the social credit system is currently being employed on the ground in China.
Amy’s dystopian characterization of the social credit system is quite similar to that of many other western observers, but the reality is that the current system is used mostly by businesses and individuals for pragmatic, market-driven decision making rather than overt state control.
Ms Webb also made a simple extrapolation of her dystopian view of the social credit system for individuals to the state level – this is a rather strange analogy given that states are already “rated” in the current system according to their creditworthiness (as pointed out by Ms Webb) – so how will this new tech change state-to-state interactions?

Eric L Willson
Mar 13 2019 at 8:12am

Fantastic episode. Thank you.

Related to the recent Boeing airliner crashes, isn’t this the ultimate in AI gone wrong: “In the Indonesian crash, anti-stall software baffled pilots by pitching the plane’s nose down dozens of times before it crashed. The system was activated by a reading from a single faulty sensor, and it didn’t respond as the flight crew desperately tried to halt the dive.” (Quoting from a Bloomberg news article, 13-Mar-2019.)

I am stating the obvious here, but these recent tragedies are examples with infinitely greater consequences than microwave ovens that deny you popcorn, car radios that reduce the volume when you back into your garage, or even public shaming for jaywalking.

What are the solutions for safeguarding against the implementation of technologies in applications that jeopardize life itself?

Daniel Robin
Mar 13 2019 at 9:18am

Any entrepreneur, small or large, is taught to solve someone’s pain. If the Big Nine are causing pain, then some smart kid is going to find a way to make a fortune solving it and then sell his company. Until that process has played out, I don’t see the need for government or not-for-profit intervention.

Scott
Mar 13 2019 at 9:50am

As a long-time listener to, lover of, and learner from EconTalk, I found Russ’s sanguine and trite reaction to what is quite clearly a present danger to human liberty surprising and disappointing.

The objective of AI is to predict human behavior. Once behavior is predictable, it is controllable. As proven throughout history, the default human condition is dominance of a small, powerful group over the masses. This fits perfectly into the ideologies of those who are allocating massive resources towards the development of AI. The U.S. experiment countered that condition to great success, but alas, only temporarily.

Ms. Webb’s tenet that it is, in fact, too late to apply free-market forces to the training data is the cold, hard truth. I believe deeply in the value of the individual to contribute to the betterment of the species by being left alone to make self-interested decisions. But the facts support one conclusion: human liberty has been surrounded by hostile forces, and the outlook is bleak.

Shawn Eng
Mar 13 2019 at 12:13pm

“Maybe this will improve Chinese society according to their standards…”

We’re about to watch China commit genocide against a million Uighurs. May future econ students remember your work and not your moral blind spots.

This link offers a historic warning on US firms collaborating with authoritarian regimes:
https://ibmandtheholocaust.com

[Comment edited for clarity based on email correspondence with commenter. —Econlib Ed.]

Russ Roberts
Mar 15 2019 at 2:44pm

Shawn,

One should be careful, or at least charitable, when calling people out for “moral blindspots.” My statement about societal improvement was a bit tongue-in-cheek–I think I made it pretty clear that I am deeply concerned about the potential of AI to enhance the control of the Chinese people by their government. Strangely enough, I am also against genocide, but thanks for giving me a chance to point that out in print.

But I will be thrilled if future Econ students remember any of my work.

A.G.McDowell
Mar 13 2019 at 2:51pm

I think attempts to control the application of technology by so-called ethics education of programmers are misconceived. Most programmers have little freedom to control the behaviour of what they implement, let alone its application. Organisations developing programs typically expect programmers to work from formal or informal requirements, and use independent testing to check that programs and systems behave as they are supposed to. One popular scheme appoints a “Product Owner” and gives this single person responsibility to decide what needs to be done and when. It is to these Product Owners (who are usually not developers) and their managers that any legal or moral force should be applied. Even if practical, the idea that a small number of people who happen to have rare skills should make all decisions about the application of those skills leads to very strange consequences. Should we ethically educate heart surgeons with the expectation that they will judge their patients and treat only those patients who will provide a net benefit to society?

In considering China it is always worth asking what conclusions they may have drawn from the fall of the Soviet Union. I suspect that one conclusion was that communism, as practiced in the Soviet Union, destroyed social capital. I think the social credit scheme, and the legalisation of (officially approved versions of) religion, represent a Chinese reaction to this experience.

An earlier example of focused government investment in computer technology was the Japanese fifth generation computer project. It appears to have been focused in an unproductive direction.

Jesse
Mar 14 2019 at 1:57pm

Although my priors make me predisposed against Ms. Webb’s thesis, I did try to listen with an open mind. While I found the conversation both interesting and thought-provoking, I was mostly unmoved by Ms. Webb’s arguments.

The Amazon microwave/popcorn example encapsulates my biggest issue with these types of dystopian future conversations: people tend to massively overstate the power of AI. Listening to Ms. Webb tell the story, it sounded like the microwave used a sophisticated algorithm and detector that is able to perceive what foods are being cooked. The reality is that Amazon Alexa is simply able to identify the phrase “cook me a bag of popcorn” and has hard-coded the proper cooking time/power level into its system. This “AI” could be tricked by simply saying “heat up my Acai herbal tea for 90 seconds.”

Ms. Webb might counter by saying that this is just the first step, and with the inevitable march of progress it’s just a matter of time before Amazon’s AI could detect the popcorn without being explicitly told by the user, but not all progress works this way. If someone starts exercising and reaches the point where they can run an entire mile without stopping, is it inevitable that they will one day run a 4-minute mile? There are all types of barriers, both technological and social, that could conceivably limit AI technology.

 

Chris Screwtape
Mar 14 2019 at 11:14pm

I’d like to comment on Social Credit. I am an American that has lived in China for just over ten years now.

Russ, you asked why there isn’t outcry or more push back on social credit. I think the answer is a simple one. Social credit is nothing new. A while back you did a podcast on Mao’s Great Famine.

Well, even back then there was a crude social credit system. Communes didn’t have enough food but still had production quotas to meet. So cadres categorized people and allotted work points. The young and weak were systematically starved to death. Those capable of labor were prioritized for food. My mother-in-law and father-in-law both lived through this era.

For many the memory is secondhand, but people definitely remember, and that memory plays a much bigger role in dissuading people from complaining about things than any alleged tattle-tale culture.

And let’s not mince words. The Party doesn’t care if we smile or not. They’re not scoring us on being pleasant people. They’re grading us on our loyalty to the system–or at least our acquiescence.

Also, please remember the society we are living in: We are all breaking the law. Sure, wearing lipstick might not be a capital offense anymore, but there are still plenty of laws on the books. But thanks to arbitrary enforcement we don’t have to worry–so long as we toe the line. In other words, the downsides are very real, and everyone knows how they are vulnerable. So best not rock the boat.

Now for the carrot. Social credit is great. It means cheaper interest rates. It means getting to use the library without forking out a 100 RMB safety deposit. Coupons galore.

Russ, your wife is a teacher. You’ll be happy to know that China values her selfless contribution to society. To show our appreciation, we’re willing to offer discounts on car rentals, flights, hotel rooms, preferential insurance rates, and unbeatable prices for leasing a new laptop, cellphone or household appliance. Just download Alipay. It’s all there and a whole lot more.

Finally, in closing, your guest says that as Chinese become more affluent, they will want more of a say. Maybe.

But for the last eight or so years things have become more oppressive, not less. And I’m not talking big things happening out west. I’m talking little things. Every day I have to scan my face to leave and enter my apartment compound. At the metro station, I always see multiple young men stopped by police for ID checks–unless the officer on duty is too busy chatting up the female army veteran running the x-ray machine. Whenever I take a taxi, the navigation system says every minute or so, “Speed check 100 meters ahead” or “Law enforcement camera 100 meters ahead”. Hell, a month or so ago, they arrested some guy at a concert using face recognition software.

We jokingly worry about Alexa cutting off the popcorn. Well, apparently, the AI on two Boeing 737s did a lot more than say no more popcorn. No need to speculate. We’ve already got real worries.

Todd Kreider
Mar 15 2019 at 10:35am

But for the last eight or so years things have become more oppressive, not less.

Thanks for your interesting, detailed comment. I recently read a report, “Answering China’s Economic Policy: Preserving Power, Enhancing Prosperity”, co-authored by Aaron Friedberg, a political scientist at Princeton, who wrote: “By most accounts the political climate in China today is more repressive than at any point since the Cultural Revolution of the 1960s.” This seemed like a major overstatement (his source was the 2018 Congressional report on China), so I looked up China’s ratings in the Fraser Institute’s Human Freedom Index and found:

On a scale of 1 to 10; 2018 not yet available:

Human freedom: 2008 (6.0) 2010 (5.9) 2012 (6.0) 2014 (6.0) 2016 (5.9)

Personal freedom: 2008 (5.7) 2010 (5.6) 2012 (5.5) 2014 (5.5) 2016 (5.4)

Economic freedom: 2008 (6.3)  2012 (6.4) 2016 (6.5)

That doesn’t look like a reversion to the Mao era to me, although it does indicate a small decline in personal freedom. Do you generally agree with the ratings, or do you think they are missing something?

Chris Screwtape
Mar 18 2019 at 12:32am

I think those ratings give you a good idea of the change we’re talking about. I’ll make two caveats.

First, there were some big changes, like getting rid of the reeducation-through-labor camps, but the camp system in Xinjiang is larger than ever. So you might think, looking at the rating, there was only a slight deterioration, but there are definitely large groups a lot worse off than they have ever been since the beginning of reform and opening.

Second, I think these ratings miss something. In China it is (or at least was) acceptable to worry about one-man rule. And that’s what has a lot of people worried right now. We’re wondering what’s going to happen when Xi’s next term expires, how long he will stay, that sort of thing. So when people make comparisons to the Mao era, I think they are misunderstood. They are not trying to compare and contrast contemporary China and Mao’s China. What they’re really comparing and contrasting is Mao’s Party and the contemporary Party.

Social credit, I think, will evolve into tools used to control and mobilize the Party. The Party, frankly, doesn’t do a good job at present. A lot of members stop participating and paying dues upon leaving university, because Party organization in the private sector is so utterly lacking and because there is no apparent benefit to maintaining membership.

Finally, it is useful to think of social credit through the lens of selectorate theory. Politicians maintain power by building and maintaining coalitions. Coalitions can be built and maintained by providing either public goods or private goods, and politicians always prefer the most cost-effective option. Tools like social credit let the Party dish out private goods in a manner that seems earned: X deserves a preferential rate because he’s got a 900 social credit score, not because his father is whoever.

The Party already plays that game with pension benefits. I don’t see why they won’t play it here also.


Kevin Barnett
Mar 15 2019 at 10:54am

I find it interesting how worked up people get over FB, Google, and Amazon. The data collection is completely voluntary. I personally receive great value for it. Maybe my data is worth more. Maybe not. One thing I’m sure of is that I don’t want anyone in the government (at any level) telling me what I can do with my data. That said, I do like transparency. Companies shouldn’t be able to hide what they do deep in a terms-of-service contract that no one reads.

I really wish Amy or Russ had mentioned that we already have a government-controlled scoring system for our citizens, made up of data that I have no say in how it’s collected or what it’s used for. Yep, you guessed it: our credit score. Government “regulation” didn’t keep the collection agencies from gross security practices that have left millions of Americans vulnerable to identity attacks. What’s worse is that the collection agencies proceeded to charge for protections against the attacks even though they caused the additional risk. Yet Congress makes an event out of questioning FB, Google, and Twitter about data that is voluntarily made public by their users.

Listener
Mar 15 2019 at 4:12pm

I think the author did not explain AI sufficiently well, or does not understand the mechanics of supervised ML, GANs, and other techniques. There is a distinction between AI/ML and other decision-making algorithms. In particular, the comment that automotive ABS systems use AI was just wrong.

PaulaC
Mar 15 2019 at 5:23pm

This episode confused me a bit –
1. Amy, you described the companies in China working in tandem with the government, and I understood you to think they will be more successful than their US counterparts because of the cooperation and streamlining. I see their cooperation as tending toward monopoly and clearly having more of a stranglehold on their people – more of a risk of paper cuts because of the centralized power and long-term vision (they don’t have to worry about shareholder returns, and their stakeholders have different demands).

2. But then you spent time going through Google/Alexa, talking about how much danger is there (I get it!), and I sort of felt you wanted them to have LESS power over us – but then you wanted them to have a standardized interface/technology. Isn’t it better that we do not have Alexa, Google Home, and Apple devices talking to each other and sharing data? If they did, the possibility that they could truly take over our lives would be higher. Higher risk.

3. I think the lack of having someone think about the greater good for our society is not a problem that can be solved. I think it’s a dilemma that you have to live with and mitigate. Any sort of governing body will become political and will skew towards favoured groups and harm others. It might hamper innovation and institutionalize the G-Mafia – it could completely raise the entry costs for future competitors, no?

Do you think we should emulate China?
Do you think that great innovations can be top down and controlled?

PS: I think that people will PAY for apps that stop them from eating popcorn! I can totally see people actually choosing to lose weight by letting the fridge and appliances control what they eat. It makes me think of Ready Player One – there’s a part where the protagonist cannot get online until he’s burned enough calories to equal what he’s consumed. In fact, I want someone to make that app for my son.

Bob S
Mar 16 2019 at 11:05am

Alexa’s Popcorn

I think you got a bit out over your skis here. Alexa’s been around a few years, so all the early adopters have bought in. I suspect that Amazon found their purchasing increased, and so wants to spread Alexa to the follower buyers. The addition of Alexa to a microwave is a variant on the classic marketing scheme of a “free sample” attached to, e.g., a laundry detergent package.

Additionally, Amazon will gain some insight into buyers who might be only casual customers, and thus learn how better to market to them. The “1984” scenarios would likely evoke strong revulsion among Amazon customers.

BTW, see the op-ed on AI in the WSJ of 3/16 by R. Fontaine & K. Fredrick, “The Autocrat’s Toolkit.”


Earl Rodd
Mar 19 2019 at 2:50pm

I found the guest’s conclusion more frightening than the problem – she seemed to suggest furthering the monopoly power of the giants by getting them into one big “collaborative” effort. I suggest that simple anti-trust monopoly busting is the solution. She seemed to think you can’t “break up” Facebook etc. Every monopolist in history has made the case why they cannot be broken up. Facebook seems easy. They have multiple data centers. So first make them divest non-core parts like WhatsApp and Instagram. Then divide the user base (geographically, randomly, by friends matrices??) among multiple new companies, all of which have all the current software. Maybe some support structures are their own company initially (e.g. censorship). Then something might happen that did happen in the PC industry – initially we had many non-interacting architectures, but it quickly became an advantage to interoperate – so much so that this advantage exceeded the advantage of customer lock-in. That this has not happened with Facebook is evidence of monopoly. If split, the parts and new competitors might see the competitive advantage in interoperability deals (communicate with friends on a different platform).

There is one new competitor, usa.life, but I don’t think it stands much chance with a non-interoperating monopoly like Facebook. And I found its initial implementation really disappointing – it looks just as busy and messy with excess junk as Facebook.

I agree with the host in saying the solution to dealing with the Chinese is not to emulate them with central control. Yes, if they behave badly enough, the US may need to exert government power. But power exerted with malice by foreign powers is, I think, best dealt with as a government issue, not by making our own industries into such big monopolies that they can “compete”.

Earl Rodd
Mar 19 2019 at 3:23pm

This comment is more on the concept of “artificial intelligence” than the specific content of the podcast. If this is not appropriate, please delete it.

In many ways, I think the term “artificial intelligence”, though in wide use, is misleading. People assume that the machine is mimicking how humans process data and make decisions. This is not true. What we really have is “machine intelligence”. A clear example of the difference was Watson vs. humans on Jeopardy. There were two ways in which the difference was clear. First, Watson used entirely different methods than the humans to arrive at an answer – sophisticated and intelligent, but methods suited to the machine. Second, and the reason I think Watson happened to win this particular trial: Watson not only received the question (clue) as text, not voice, but had an electronic signal for when the button was enabled. This turned out to be faster than human response time, even for a human trained to know when the clue is ending. When I watched it, I was convinced that Watson only won due to its ability to win the “race to the button.” That doesn’t mean a different machine with a “fairer” race-to-the-button method could not and would not win; it’s just why I think Watson happened to win this one.

Another example is when Deep Blue defeated Garry Kasparov in chess. In the crucial game, Kasparov made an emotional error in resigning a game that could have been drawn. In some contexts, the machine’s lack of what we call emotion would be a decided disadvantage, but in this case it was an advantage to the machine. In many real-life situations, machines excel vs. humans because of this difference. In others, humans are essential.

In the end, I think the term “machine intelligence” better reflects what is going on. While the result a machine comes to may be very “intelligent” and based on complex learning, this is different from human processing, which includes emotion, consciousness, and even pondering the philosophy underlying those things. It is true that a machine can come to a result in a way that even its inventors cannot understand (I experienced this phenomenon in the 1960s with compiler optimization), but that is not the same as human unpredictability. Again, just more reasons why machines are better at a growing number of tasks, and not at others. And why part of exploiting “artificial intelligence” is not the machine design, but manipulating the problem to better suit machine learning/intelligence.

Chris
Mar 26 2019 at 5:47pm

This was a disappointing podcast for me. The discussion made it seem like it was a foregone conclusion that free markets and consumers will not be able to stop the ‘Big Nine’s’ unwanted encroachment on people’s freedom. If anything, the discussion seemed to point the finger at greed, in particular public shareholders, as a root cause of the issue and concern. Then the conclusion was that consumers are hostages, so it is up to the government or some sort of non-profit system to rescue us. Really a strange episode for me, and it gave me the rare incentive to post a comment that will never be read. At least Google will mine it and associate it with me for posterity’s sake.

Can the realized harm these companies have caused be articulated? Yes, some private information has been disclosed, but I find it hard to discover actual harm caused. I am concerned about the potential for harm, but it is not often that a theoretical threat causes people to stop doing something that is free and either beneficial or enjoyable. My guess is that if Facebook causes actual harm to users through its use of their personal data, it won’t take much for users to find an alternative. Until then, do we really want to start throwing out solutions to a potential problem? Emerging technologies and new ways of doing things are going to cause potential problems. It seems that most following this podcast would agree that anticipating those problems, or the emergent market-based solutions to those problems, is not easy, so creating top-down solutions may do more harm than good. I didn’t get that message at all from this podcast.
The thought that investors are the driver for the companies trying to monetize private data is laughable to me. First, two of the companies, Facebook and Google, are controlled by their founders through dual share classes. Mark Zuckerberg has more than 50% of the voting interest in Facebook. Page, Brin, and Schmidt likewise control more than 50% of the voting interest in Alphabet. If shareholders wanted to ‘fire’ the leadership of these companies for not maximizing profits, the founders could tell the minority shareholders to go pound sand. Bezos doesn’t have voting control, but does anyone actually believe that the short-term profit incentives of shareholders influence the world’s richest man? That whether the stock is 10% higher or lower will influence his decisions? That shareholders will oust the charismatic leader of a company inescapably connected to his leadership because of quarterly results? In my mind this doesn’t even warrant being part of the discussion of the ‘problem’.
Non-profits will do it better? If we give a non-profit a monopoly, it will behave differently? How is this different from saying the government will do a better job than a for-profit company? My memory is fuzzy, but haven’t there been guests in the past who talked about non-profits overstepping their bounds to protect their turf? The ACA comes to mind. At the end of the day, it is competition that keeps things in check, and I’m surprised that Russ is advocating for a different, altruistic outcome with non-profits. Have you seen the executive pay of major non-profit leaders? Higher education is dominated by non-profits (with competition!); is that making it affordable and high quality for the students? I would love to hear of the non-profit monopoly that acts materially differently than the for-profit monopoly.

Really, an entire podcast about a problem that is real but yet to be fully realized, with solutions that assume the market has no mechanism to solve it. Shocking to hear on EconTalk.

I’m also curious as to how this is the one area where state-directed research is more productive than privately directed research, but I’ve rambled on long enough.

I enjoy the vast majority of the podcasts!




AUDIO TRANSCRIPT
0:33

Intro. [Recording date: February 12, 2019.]

Russ Roberts: My guest is futurist and author Amy Webb.... Her latest book is The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity.... Your book is a warning about the challenges we face, that we're going to face dealing with the rise of artificial intelligence. What is special about the book, at least in my experience reading about AI [Artificial Intelligence] and worries about artificial intelligence is that it doesn't talk about AI in the abstract but actually recognizes the reality that AI is mostly being developed within very specific institutional settings in the United States and in China. So, let's start with what you call the Big Nine. Who are they?

Amy Webb: Sure. So, what's important to note is that when it comes to AI, there's a tremendous amount of misplaced optimism and fear. And so, as you rightly point out, we tend to think in the abstract. In reality, there are 9 big tech giants who overwhelmingly are funding the research--building the open-source frameworks, developing the tools and the methodologies, building the data sets, doing the tests, and deploying AI at scale. Six of those companies are in the United States--I call them the G-Mafia for short. They are Google, Microsoft, Amazon, Facebook, IBM [International Business Machines], and Apple. And the other three are collectively known as the BAT. And they are based in China. That's Baidu, Alibaba, and Tencent. Together, those Big Nine tech companies are building the future of AI. And as a result, are helping to make serious plans and determinations, um, for, I would argue, the future of humanity.

Russ Roberts: And, just out of curiosity: I don't think you say very much in the book at all about Europe. Is there anything happening in Europe, in terms of research?

Amy Webb: Sure. So, the--you know, there's plenty happening in France. Certainly in Canada. Montreal is one of the global hubs for what's known as Deep Learning. So this is not to say that there aren't pockets of development and research elsewhere in the world. And it also isn't to say that there aren't additional large companies that are helping to grow the ecosystem. Certainly Salesforce and Uber are both contributing. However, when we look at the large systems, and the ecosystems and everything that plugs into them, overwhelmingly these are the 9 companies that we ought to be paying attention to.

3:18

Russ Roberts: So, I want to start with China. I had an episode with Mike Munger on the sharing economy and what he calls in his book Tomorrow 3.0. And, in the course of that conversation, we joked about people getting rated on their social skills and that those would be made public--how nice people were to each other. And we had a nice laugh about that. And I mentioned that I didn't think that that was an ideal situation--that people would be incentivized that way to be good people: despite my general love of incentives, that made me uneasy. And in response to that episode, some people mentioned an episode of Black Mirror[?]--the video series--and also some things that were happening in China. And I thought, 'Yeh, yeh, yeh, whatever.' But, what's happening in China--it's hard to believe. But, tell us about it.

Amy Webb: Sure. And, let me give you a quick example of one manifestation of this trend, and then sort of set that in the broader cultural context. So, there's a province in China where a new sort of global system is being rolled out. And it is continually mining and refining the data of the citizens who live in that area. So, as an example, if you cross the street when there's a red light and you are not able to safely cross the street at that point--if you choose to anyway, so as to jaywalk--cameras that are embedded with smart recognition technology will automatically not just recognize that there's a person in the intersection when there's not supposed to be, but will actually recognize that person by name. So they'll use facial recognition technology along with technologies that are capable of recognizing posture and gait. It will recognize who that person is. Their image will be displayed on a nearby digital--not bulletin board; what do you call those--digital billboard. Where their name and other personal information will be displayed. And it will also trigger a social media mention on a network called Weibo. Which is one of the predominant social networks in China. And that person, probably, some of their family members, some of their friends, but also their employer, will know that they have caused an infraction. So, they've crossed the street when they weren't supposed to. And, in some cases, that person may be publicly shamed and publicly told to show up at a nearby police precinct. Now, this is sort of important because it tells us something about the future of recognition technology and data. Which is very much tethered to the future of artificial intelligence. Now, better known as the Social Credit Score, China has been experimenting with this for quite a while; and they are not just tracking people as they cross the street. They are also looking at other ways that people behave in society, and that ranges from whether or not bills are paid on time, to how people perform in their social circles, to disciplinary actions that may be taken at work or at school, to what people are searching on--you know, on the Internet. And the idea is to generate some kind of a metric to show people definitively how well they are fitting in to Chinese society as Chinese people. This probably sounds, to the people listening to the show, like a horrible Twilight Zone episode--

Russ Roberts: It sounds like 1984, is what it sounds like to me. It's not like, 'I wonder if that's a good idea.' It's more like, 'Are you kidding me?'

Amy Webb: Yeah. And so like, when I first heard about this, my initial response was not abject horror. I was curious. I was very curious.

Russ Roberts: [?]

Amy Webb: But like, here's what made me curious: Why bother? I mean, China has 1.4 billion people. And if the idea is to deploy something like this at scale, that is a tremendous amount of data. And you have to stop and say to yourself, 'Well, what's the point?' So, this is where some cultural context comes into play. So, I used to live in China. And I also used to live in Japan. And, they are very different cultures, very different countries. One distinctive feature of China is a community-reporting mechanism that is sort of embedded into society. And going back many thousands of years--you know, China is an enormous--it's a huge piece of land. And you've got people living throughout it; in fact, they are so spread apart, you have, you know, significantly different dialects being spoken. So, one way to sort of maintain control over vast masses of people spread out geographically was to develop a culture--sort of a tattle-tale culture. And so, throughout villages, if you were doing something untoward or breaking some kind of local custom or rule, you would get reported--sort of in a gossipy way. But, you would get reported; and ultimately the person that heard the information would report it on up to maybe a precinct or a feudal manager of some kind, who would then report that up to whoever was in charge of the village or town; and then you would get into some kind of actual trouble. This was a way of maintaining social control. And so if you talk to people in China today, a lot of people are aware of monitoring. What I find so interesting is that at the moment, the outcry that we see outside of China does not match the outcry--or actually the lack of outcry--that I have observed in China. Now, there's one other piece of this that's really important: using AI in this way ties in to China's Belt and Road Initiative [BRI]. And you might have heard about the BRI. This is sort of a master plan--it's a long-term strategy that helps China optimize what used to be the previous Silk Road trading route. But it's sort of built around infrastructure. What's interesting is that there's also a digital version of this--the sort of digital BRI--where China is partnering with a lot of countries that are in situations where social stability is not a guarantee. And so, they are starting to export this technology into societies and places where there isn't that cultural context in place. And so, you have to stop and wonder and ask yourself, 'What does it mean for 58 pilot countries to have in their hands a technology capable of mining and refining and learning about all of their citizens, and reporting any infractions on up to authorities?' You know, in places like the Philippines, where free speech right now is questionable, this kind of technology, which does not make sense to us as Americans but may make slightly more sense to people in China, becomes a dangerous weapon in the hands of an authoritarian regime elsewhere in the world.

11:14

Russ Roberts: It reminds me, when you talk about the tattle-tale culture--of course, the Soviets did the same thing. They encouraged people to inform on each other. 'Tattle-tale' sounds like a child reporting an insult, but it's a monitoring mechanism by which authoritarian governments keep people in line. And you talk about the lack of outcry. Well, one reason is that you are worried that your social score is going to be low. Outcrying is probably not a good idea.

Amy Webb: That's right. That's right.

Russ Roberts: You should mention also, which I got from your book, that it's not just awkward or kind of embarrassing to have a low score. These scores are going to be used--or are already being used?--to determine whether people get credit, whether they can travel. Is that correct?

Amy Webb: Right. So, again, it's China. So, we can't be 100% sure of the information that's coming out, because it's a controlled-information ecosystem. But from what we've been able to gather, in all of the research that I've done, you know--I would suggest that it's already being used. It's certainly being used against ethnic minorities like the Uighurs. But we've seen instances of scoring systems being used to make determinations about the school that kids are able to get into. You know, kids who, through no fault of their own, may have parents that have run afoul, you know, in some way, and earned demotions and demerits on their social credit scores. So, it would appear as though this is already starting to affect people in China. And, again, my job is to quantify and assess future risk. So, as I was doing all of this research, my mind immediately went to: What are the longer-term downstream implications? I think some of them are pretty obvious. Right? Like, some people in China are going to wind up having a miserable life as a result of the social credit score--the social credit score as it grows and is more widely adopted to some extent could lead to better social harmony, I guess; but it also leads to, you know, quashing individual ideas and certain freedoms and expressions of individual thinking. But, the flip side of this is: If it's the case that China has BRI--and it's investing in countries around the world not just in infrastructure but in digital infrastructure like fiber and 5G and communications networks and small cells and all the different technologies, in addition to AI and data--isn't it plausible that some time in the near future, our future trade wars aren't just rhetoric but could wind up in a retaliatory situation where people who don't have a credit score can't participate in the Chinese economy? Or, businesses that don't have credit scores can't trade. Or countries that don't have--if we think about, like, a Triple-A bond rating, you know, what happens if this credit scoring system evolves and China does business only with countries that have a high-enough score? We could quite literally get locked out of part of the global economy. It seems far-fetched, but I would argue that the signals are present now that point to something that could look like that in the near future.

15:03

Russ Roberts: Well, this is going to be a pretty paranoid show--episode--of EconTalk. So, I'm okay with that kind of fear-mongering, because it strikes me as quite worrisome. And I think, as you hinted at, you have to be open-minded that maybe this will make a better Chinese society, as defined by them. You know, the Soviets wanted to create a new Soviet man--and woman. They failed. But now, with these tools, maybe there will be a new Chinese man and woman who will be harmoniously living with their neighbors, never jaywalking, never gossiping, and smiling more often. Who knows? But, it's not my first, default thought about how this is going to turn out. I think that--

Amy Webb: No, but you kind of--you have to start with--I want to point out that I am not like a dystopian fiction writer. I'm a pragmatist. So, this--I am not studying all of this for the purpose of scaring people. What I would argue is, I have studied all of this, and used data, and modeled out plausible outcomes; and it is scary. It really is. Because you have to, again, connect the dots between all of this and other adjacent areas that are important to note. The CCP [Chinese Communist Party] in China is--

Russ Roberts: the Communist Party--

Amy Webb: yep--is facing some huge opportunities but also big problems. The Chinese economy may technically be slowing, but it's not a slow economy. There's plenty of growth ahead. And, if that holds--and there's no reason why at the moment it wouldn't--you know, Chinese society is about to go through social mobility at a scale never seen before in modern human history. And as that enormous group of people moves up, they are going to want to buy stuff. They are going to want to travel. So, you know, that potentially causes some problems, because the more wealth that is earned, the more agency people feel, the more opinions they start having about how the government ought to be run. And, you know, the CCP made the current President of China, Xi Jinping, effectively President for life. And 2049--which seems far off but in the grand scheme of things isn't really that far into the future--is the 100th anniversary of the founding of the People's Republic of China. China is very good at long-term planning. Now, they've not always made good on fulfilling promises. But they are good at planning.

Russ Roberts: Yes, they are.

Amy Webb: Right? So, I don't see all of this as flashes in the pan, and 'AI's kind of a hot buzzy topic right now.' I'm looking at the much longer term and the much bigger picture. That's what makes me kind of concerned.

18:02

Russ Roberts: I think that's absolutely right. One other institutional detail to make clear for listeners is that the Chinese Internet is roped off, to some extent--to quite a large extent. They are developing their own tools and apps. And, talk about the three companies in China that are working on AI and how they work together in a way that American companies are not.

Amy Webb: So, here's another interesting facet of the Big Nine: AI is on a sort of dual developmental track. In China, Baidu, Alibaba, and Tencent were all formed in the late 1990s and early 2000s; and their origin stories are not all that different from our big, modern tech giants like Amazon and Google and Apple. The key distinction is that our big tech companies were formed, for the most part, in Seattle and Redmond, and in California--Cupertino and San Francisco--where the ecosystem was able to blossom: there was plenty of competition, and there was plenty of talent. California has fairly lenient--in some ways--fairly lenient employer/employee laws, which has made it very easy for talent to move between companies. And, if you are somebody who studies innovation, you know, the sort of limited or lack of regulation, the ability for people to move around--

Russ Roberts: letting people make enormous amounts of money when they succeed and losing all of it when they fail--

Amy Webb: Right. Right. Right. But, the lack of safety net, the lack of a central, federal authority, if you will, is partly what enabled these companies to grow. And to grow fast. And to grow big. Which is why we also see a lot of overlap. So, Google, Microsoft, Amazon, and IBM [International Business Machines] own and maintain the world's largest cloud infrastructure. So, if you own a website or you are a business owner or you are making a phone call, at some point you are accessing one of their clouds. You know--we have competing, for the most part, we have competing operating systems for our mobile devices. For the most part, we still have competing email systems. And that's because, without a central authority dictating which of the companies was going to do which thing, they all sort of went on their own and built their own things. So, now we have tremendous wealth concentrated among just a few companies who own the lion's share of patents; who are funding most of the research. And, for the most part, Silicon Valley and Washington, D.C. have an antagonistic relationship. That is not the case in China. So, in China, when the big tech companies were being formed there, you don't do anything in China without also in some way creating that business in concert with Beijing--with the government. You've got to pull permits. You have to abide by various regulations and laws. People are checking in on you. So, while Baidu, Alibaba, and Tencent may be independent financial organizations, in practical terms they are very much working in lockstep with Beijing. Alibaba, for those of you not familiar with the company, is very similar to Amazon. So, it's a retail operation. Tencent is very similar to our social media: sort of Twitter meets gaming and chat. And Baidu is search--the sort of Google-esque company of the bunch. When the Chinese government decided that AI was going to be a central part of its future plans--and this was decided years ago--it also decided that Tencent was going to focus on health; that Alibaba was going to focus on cloud and various different data aspects; and that Baidu was going to focus on AI and transportation. So, it's not as though these companies came to these additional areas of research and work on their own. It was centrally coordinated. And that's a really, really important thing to keep in mind. If we've got a central, powerful government that has this long-term vision and is centrally coordinating, at a top level, the research and the movements of these companies, suddenly you have a streamlined system where you don't have arguments about regulation; you don't have the companies at each other's throats--like we've seen in the United States, with Apple suddenly calling for sweeping privacy regulations because, to be fair, they are already far ahead and it gives them a competitive advantage. You don't see all that infighting in China. So, we have some fundamental differences. And the real challenge is that while we're trying to sort all this out in the United States, you have a streamlined central authority with three very powerful companies who are all now collaborating in some way on the future.
In addition to a bunch of other top-level government initiatives to repatriate academics; to bring back top AI people; but also to do things like start educating kindergarteners about AI. There is a textbook that is going to roll out this year throughout China teaching kindergarteners the fundamentals of machine learning. I mean, you know--whereas in the United States, some of our government officials up until very recently denied AI's capabilities; and only yesterday--so this is February 11th--President Trump issued an Executive Order to, I guess--I mean, there's a handful of bullet points on what AI ought to be, but it wasn't a policy paper. There's no funding. There's no government structure set up. There's not--I mean--you see what I'm getting at?

25:07

Russ Roberts: Well, yeah--let me push back against that a little bit. You know, China is growing tremendously; as you point out, they are already in one of the greatest transformations in human history: from the countryside to the city, from a low standard of living to a much higher standard of living. And most of that's wonderful, and I'm happy about it. We don't know exactly what their ambitions will be or are outside of their own borders, and therefore what the repercussions are for us. As you suggest, they are doing a bunch of stuff. But the fact that they are top-down and planning and organized, and we are chaotic and disorganized--so, just to take an example, you know, there are n companies in America, more than 4; I don't know how many there are--working on various aspects of driverless, autonomous vehicles. There's Uber; there's Lyft; Apple; Google; there's Waymo. There's a lot going on here. And a lot of that won't turn out. That's the nature of creative destruction, and capitalism: some of those investments won't pan out; the gambles will fail, and people will lose all their money. And, in general, historically, that chaotic soup of competition serves the average person and the people who are innovators quite well. The fact that China has, say, Baidu focusing on that and no one else having to worry about it could be a bug, not a feature. I'm not convinced that China teaching kindergarteners machine learning is going to turn out well. Could be a mistake. Could be an enormous blunder. They are not allowing the kind of experimentation, trial and error, that in my view is central to innovation. So, I think it remains to be seen how successful their walled garden--with top-down gardening going on from the government's vision of what they want AI to serve--is going to work out. It might. It could. And the outcomes might be really bad, not just for the Chinese but for other people. But it might just kind of fail. And, I'm not even convinced that their growth path is going to continue the way it has in the past. A lot of people just assume that because they have grown dramatically over the last 25 years, they'll keep growing dramatically. There are a lot of ghost cities in China; there's a lot of overbuilding. I'm not so sure they have everything under control. So, I think you have to have that caveat as a footnote to those concerns.

Amy Webb: I completely agree with you. I would say that, for years, especially in the United States, we've been indoctrinated into thinking that China is a copy-paste culture rather than a culture that understands how to innovate; and to some extent I think that that is the result of that heavy-fisted, top-down approach to business. What I'm concerned about is not whether China succeeds financially. Here's what I'm concerned about. The challenge with artificial intelligence is that it's already here. There's no event horizon. There's no single thing that happens. It's already here. And it's been here for a while. And, in fact, it powers--you know, artificial [?] intelligence now powers our email; it powers the anti-lock brakes in our cars. You know. And essentially, this is the new Third Era of computing that we are in: if we assume that the First Era was tabulation--so that would have been Ada Lovelace in the mid-1800s--and the Second Era was programmable systems, which would have been those early IBM mainframes on up to the, you know, desktop computers that we use today, this next Era is AI. And AI, while we've seen it anthropomorphized in movies like Her and on shows like Westworld, at its heart, AI is simply systems that make decisions on our behalf. And they do that using tools to optimize. So, the challenge is that, right now, systems are capable of making fairly narrow decisions. And the structures of those systems, and which data they were trained on, and how they make decisions and under what circumstances--those decisions were made by a relatively small number of people working at the BAT [Baidu, Alibaba, Tencent] in China and at the G-Mafia here in the United States. And the problem is that these systems aren't static. They continue to learn. And they--you know--they join, literally, millions and millions of other algorithms that are all working in service of optimizing things on our behalf. Which is why I agree with you that if we are talking about a self-driving future, it's good to have competition--for all the usual reasons. Right? We get better form factors[?]; we get better vehicles; we get better price points. But we are talking about systems that are continuing to evolve, that grow more and more powerful the more data they have access to and the more compute they are given--more computing power. And as we move into the more technical aspects, there are things like Generative Adversarial Networks, which are specifically designed to play tricks on one another, to help systems learn more quickly. We are talking about slowly but surely ceding control over to systems to make these decisions on our behalf. And, that is what concerns me. What concerns me is that we do not have a singular set of guardrails that are global in nature. We don't have norms and standards. I'm not in favor of regulation. On the other hand, we don't have any kind of agreed-upon ideas for who and what to optimize for, under what circumstances. Or even what data sets to use. And China has a vastly different approach than we do in the United States, in part because China has a completely different viewpoint on what details of people's private lives should be mined, refined, and productized. And here in the United States, a lot of these companies have obfuscated when and how they are using our data. And, the challenge is that we all have to live with the repercussions.
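[Editor's note: to make the adversarial idea concrete, here is a minimal sketch of a Generative Adversarial Network--a generator and a discriminator trained against each other, so that each model's "tricks" force the other to improve. This is an illustration only, assuming Python with PyTorch; the tiny networks and the toy one-dimensional "real" distribution are invented for the example and come from neither Webb nor the book.]

    import torch
    import torch.nn as nn

    # "Real" data: samples from a 1-D Gaussian the generator must learn to mimic.
    def real_data(n):
        return torch.randn(n, 1) * 1.5 + 4.0

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(3000):
        # 1. Teach the discriminator to score real samples high and fakes low.
        real = real_data(64)
        fake = G(torch.randn(64, 8)).detach()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()
        # 2. Teach the generator to fool the discriminator -- the adversarial "trick."
        loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    # The generator's output mean should drift toward the real mean of ~4.0.
    print(G(torch.randn(1000, 8)).mean().item())

[Neither network is ever shown the answer directly; each learns only from the other's failures, which is the sense in which the "tricks" speed up learning.]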

32:10

Russ Roberts: Yeah, I'd agree with that. Up to a point. I want to give you a chance to talk about some scary examples. I'll just say, up front, that for me, underlying this whole problem--there are many different proximate causes and concerns. But there is, it seems to me, a very significant lack of competition. We can talk about how much competition there is in the United States relative to China. But certainly--the concern for me here in the United States is that the Big Six[Big Nine?] here in the United States will stay the Big Six[Big Nine?]. Which will give them leverage to do a bunch of things that you or I might not like. I do want to add that whatever we do to regulate or constrain them, via culture or whatever, should allow for the possibility that they don't stay the Big Six[Big Nine?]. And I think one of the challenges of any way to deal with these problems is that, if you're not careful, you are going to end up creating a cartel--it's de facto right now, but that can change; if you make it de jure, you're going to end up with much worse outcomes than I think we're going to have. But, to concede your point about concern: I do think the Silicon Valley ethos is ask for forgiveness rather than permission--because right now there's no one you have to ask permission from, generally. Users are not paying much attention. There's very little regulation of how your private data is being used. Obviously something happened on January 1st, 2019, because I get a lot of annoying bars on my websites saying 'Will you accept cookies?' and I stupidly always click 'Yes,' like I'm sure most people do. And now they've complied with whatever required them to do that, and they're moving along. So, you know, I do think that there are some serious issues here. And you give some examples in the book of where these corporations--or China--have done things and not really paid a price for it. They just keep going. The Facebook/Cambridge Analytica problem. The example you give of China pressuring Marriott over the way their website was designed, in terms of territorial recognition of China's sovereignty over various places whose status is somewhat up in the air. Those are serious issues, I think. And, more importantly, they are just the tip of the iceberg. So, talk about a couple of those things that you are worried about, that I think are alarming. And, normally, the marketplace would punish these folks; but not much does.

Amy Webb: So, I love what you just said, which is that the market--so it's curious, right? Why has the marketplace not punished the Big Nine? Or at least the G-Mafia, right? Or at least Facebook?

Russ Roberts: They've been punished a little bit. I think their user numbers are down. I'm thinking about deleting my Facebook page. And I'm sure--and I've switched to DuckDuckGo for my searching. It's a really small step. But these are things that maybe people are starting to do in slightly bigger numbers.

Amy Webb: Maybe. But, again, like I don't have access to the whole world's data. Thank God. But--and you--let's just reveal our biases: like, you and I are digitally savvy people.

Russ Roberts: You're kind, Amy.

Amy Webb: Well, but you are. I think the fact that you even know what DuckDuckGo is, that you are somebody who is using it, I think is quite telling. But, for how long have we continued to hear--like, how many breaches of our trust have we heard about, right, over the past 12 months? And we continue to hear outcries, and people continue to be really upset. And we just don't see significant drops in numbers that would suggest the marketplace is punishing companies the way that they might in other circumstances. I think that's curious. And I think the reason is not that Google, Amazon, Apple, IBM, Facebook, and Microsoft make our lives a little bit better, but rather that our lives don't work without these companies. Now, it's possible--you could argue that Facebook could maybe quietly go away, and for some organizations and companies that run part of their businesses using that platform it would be pretty annoying. But life would go on. We don't function--modern society in America literally does not function without Amazon, Google, and Microsoft. Huge parts of the business world do not function without IBM. And, if you look at mobile phone and personal device usage, like, most Americans are using, in some way, Apple. So, the problem is: We can get all angry--like, we can get as angry as we want. But we don't have a choice, which is--

Russ Roberts: But is that true? I've got to challenge that. Just for a second--

Amy Webb: Yeh--

Russ Roberts: Sorry for interrupting. Let me give you an example. I just bought an Apple XR--I don't know how you pronounce it--10R--the phone. I love it. It's fantastic. When I bought it, I forgot that, actually, my earbuds that I like are not going to work with the new phone, because it doesn't have a jack. So, when I was at the store, I asked if they had an adaptor, and they did. To my relief, it was under $10. I was expecting--Apple, in the old days, it was the kind of thing they'd charge $32 for; and you'd go, 'I've got a habit; I'll just pay the $32.' I was kind of thrilled: I think it was $7.95. I was shocked at how reasonable it was. But, of course, the other view was, 'You're telling me they are going to force you to buy an adaptor? Because you can't use your old earbuds?' And the answer is, 'Yeah. They're going to do that.' And I was happy to pay the $7.95. In fact, you could argue that people who don't have earbuds, or are just going to use the ones that Apple provides with the phone, shouldn't have to pay the implicit $7.95. So, it's all okay. And most of us, most of the time, are happy with the deal. Right? We're happy with--we don't care. That's the problem for me. One of the problems, besides the competition. My problem with your claim is that most of us just say: 'It's fine. Okay, it's not great. [?] ask occasionally, we card data.' But most of us just live with it. Like you, I'm increasingly alarmed--

Amy Webb: and--

Russ Roberts: but I think it's hard for the average person listening: 'What's all the fuss about? I like Facebook. I love Google. I love--' These are companies that we don't just, like, 'Yeah, it's pleasant.' They make our lives sing. And most of the time, we are happy. So, what's the worry?

Amy Webb: I hear you. And so, honestly--this is not just about privacy. I would argue this is about future competition and choice. And that is one of the things that concerns me most. So, let me paint a picture for you. A couple of months ago Amazon had a big press announcement. They were talking about Alexa, and the developer kit--they were making a bunch of highly technical announcements. And at the very, very tail end of this press event, they, almost as a footnote, revealed a brand new product. And that was an Amazon Basics Microwave. Did you hear about this?

Russ Roberts: Only in your book. Keep going. I had not heard about it.

Amy Webb: Right. Well, because it didn't make news. And, the couple of places it did show up--like Gizmodo, and a couple of, like, super-tech blogs. And the big deal about the $60 Amazon Microwave was that it has Alexa--so that you can talk to it. And, for the most part, that elicited snark. Right?

Russ Roberts: Yeah. Who needs it?

Amy Webb: What--right. 'Typical Americans: we can't bring ourselves to push the buttons on our microwave to pop our popcorn. We are so lazy, we need to talk to it.' And again, this was one of those times when I said, 'But wait a minute. Why would they do that? Why go through the headache and the heartache?' I mean, it's hard to launch a product. It's hard to launch a product that exists already in the marketplace, with a fairly significant twist which is going to cause you to have to educate consumers. Like, 'Why bother?' Right? And here's where I arrived. One of Amazon.com's core functions at the moment is selling us stuff--like popcorn, right?--and we've noticed that lately you can subscribe to all different types of things. Why would Amazon do that? Because people tend to run out of things, and this helps them not run out. However, it also ensures, if I am subscribing to popcorn, that I'm not going to buy it at my local grocery store. So, now let's think this through. If I'm somebody who buys microwave popcorn, and I pop that popcorn in my Amazon, Alexa-powered microwave, one of the pieces of data that I'm revealing to Amazon is not just that I am a subscriber to popcorn but how much popcorn I've popped. So Amazon can track how many bags I've gone through. And rather than sending me a monthly box of popcorn, which may not be enough, or may be too much, depending on the month, this is a way for Amazon to mine and refine my data in order to optimize that popcorn delivery specifically for me. And how magical would it be if Amazon knew exactly the moment that I was about to run out of popcorn and sent me a replenishment? [A sketch of this replenishment arithmetic follows this exchange--Ed.] Now, again, this doesn't sound like a bad thing, on the face of it, right?

Russ Roberts: Sounds pretty good.

Amy Webb: Like, it would be pretty amazing, if Amazon knew when I was going to run out of all my stuff and it just showed up for me.

Russ Roberts: It's the end of suffering. We never have to go through that popcornless night at the movies on that big-screen TV at home.
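[Editor's note: the replenishment prediction Webb describes is, at bottom, simple arithmetic on a purchase history. The sketch below is hypothetical--it is not Amazon's actual system--and merely infers a days-per-bag consumption rate from an invented order log to estimate when the next box should ship.]

    from datetime import date, timedelta

    # Hypothetical purchase log: (date ordered, bags per box). Numbers invented.
    purchases = [(date(2019, 1, 2), 6), (date(2019, 1, 20), 6), (date(2019, 2, 8), 6)]

    # Assume each box is finished just as the next one is ordered; then the
    # average gap between orders, divided by bags per box, estimates days per bag.
    gaps = [(later - earlier).days
            for (earlier, _), (later, _) in zip(purchases, purchases[1:])]
    days_per_bag = (sum(gaps) / len(gaps)) / purchases[-1][1]

    last_date, bags_in_box = purchases[-1]
    predicted_runout = last_date + timedelta(days=round(days_per_bag * bags_in_box))
    print("Ship the next box around", predicted_runout)

[With microwave-level telemetry, the system would no longer have to assume the box is finished between orders--it would know the consumption rate directly, which is Webb's point about what the appliance reveals.]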

Amy Webb: So, now let's connect some other dots. Amazon has entered into a joint venture with JP Morgan and Berkshire Hathaway. And, it's no secret that Amazon, and Google, and Apple also, as well as IBM, are all looking at health care. They are all somehow involved in the health space. So, isn't it plausible that some day in the future, with all of my Amazon devices, Amazon has looked at my FitBit or whatever fitness device I've been wearing--has been monitoring my caloric intake, has seen that I haven't gotten on my, you know, fancy bicycle--

Russ Roberts: Amazon Basics Bicycle--

Amy Webb: That's right. And I put that bag of popcorn in the microwave; and guess what? The microwave won't pop it. Because it has determined that I don't get to eat that popcorn today. Again, that's the kind of thing where I really do think it's going to show up; it's going to sneak up on us. And, I don't think that Amazon is hell-bent on making sure that all Americans are thin and svelte. I don't think that's what this is about. I think, again, we've got small groups of people trying to optimize decisions on behalf of us all; and these are the kinds of things that don't get thought through in advance. They are the kinds of decisions that people make and then ask forgiveness for later on. And as long as we're on this topic: Currently, our voice-based systems, as well as some of these other AI systems, are not interoperable--to some extent because they use different programming languages, and to some extent because they are literally on different types of silicon and are parts of different ecosystems. So, if you are somebody who currently has a house full of Google Home-connected devices and you try to introduce an Amazon device, they don't necessarily talk to each other. Conversely, if you are an Amazon home with a bunch of Alexa devices--which, I now realize, if you are listening to this in your house, I've probably set off your devices 15 times in the past three minutes--

Russ Roberts: 'Alexa,' 'Alexa,' 'Alexa'--

Amy Webb: I apologize. But, like, think this through. Isn't it plausible that in our lifetimes, in the very near future, because we didn't have some kind of forethought, we're going to wind up in Amazon homes? or Google homes? or, you'll be an Apple household, where all your devices are with just one of those ecosystems and our data are tethered to them. I mean, think of how much of a pain in the neck it is to change mobile operating systems: If you've ever tried to go from Android to Apple or vice versa, it's hard. Now we're talking about all this other data--the ambient data that's part of your daily life. All of it. Plus, we didn't even talk about health and diagnostics and all of these other things that are all tied into these systems. And if those data sets become heritable, you know, we're talking about a future situation in which your family could be an Apple family, or an Amazon family, or a Google family. And your children may decide they want to marry into other Google families, or other Apple families, because it's too much of a pain in the neck to swap otherwise. I know that sounds like science fiction, but it's very much within the realm of plausibility.

Russ Roberts: So, I just have to add--digress for a second here. Having said 'Alexa' a few times, I'm just going to mention Marty Feldman and 'Blucher,' for people who are Young Frankenstein fans; and if you want to look that up, folks at home--we'll probably put a link up to it, I guess; we'll deal with that.

46:19

Russ Roberts: So, I want to take your example seriously. It sounds comical, but I don't think it is. And I think it's actually quite important. I'm going to give you a version of it that you refer to in the book, and see if you think it's of this nature. So, right now, I use Gmail--even though I use DuckDuckGo for search. I do use Gmail; and I use Google Calendar. And I have said this before--I love that when I make a plane reservation, it puts it on my calendar automatically. I'm a sucker for that. Like talking to the microwave. I'm embarrassed, but I do like it. I think it's cool. And it's convenient. And it saves a little bit of time. The other thing that happens with Gmail that I happen to really like is it started adding these possible responses: 'Thanks so much!' 'No, I don't think so.' 'Oh, great!' And, about 1 out of 5 times, I just click the box that automates the response to an email; and I think, 'Well, that's pleasant. That's exactly what I would have said.' Sometimes I click the box and then I add a few words, or I take away the exclamation point or add the exclamation point. And, you know, sometimes I think, 'Well, that's not exactly what I want to say, but I'm going to say it anyway. I'll just click the box.' And, I think this kind of--I would call it corporate nudging, which you reference in the book quite a bit--is the slippery slope. So, it starts off: 'You sure you want popcorn today? You've had 3 bags this week.' And you're still able to hit 'Yes' and override it. But, is it possible that there would be a day where, because of my health care payments--say I've got a bargain on my health insurance if I allow Google or Amazon to cut me off from popcorn, and I pay an extra fee otherwise--there's all kinds of things there that strike at the heart of how we live our lives. So I definitely agree with you. Where I think I'm a little more optimistic than you is that I imagine our culture is going to change. Now, of course, it's going to change in ways that--it's already changed an enormous amount. I think young people feel very differently about, say, privacy, than older people. They feel very differently about digital life, virtual life, relative to brick-and-mortar life, real life. So, it's already changing. A lot of these things that you and I might find alarming thinking about them, maybe people in the future will just go, 'Ehh. So they cut me off from my popcorn. It's for my own good.' Now, I look at that, and I think that's a diminution--reduction; I can't say the word, 'diminuition'?--diminution of human agency and life and choice. And I really don't want AI making my decisions about whom to date and what career to take and how I ought to spend my weekend, right? So, right now they might say, 'Here are some restaurants you might like,' or, 'Here's a movie you might enjoy,' or 'Here's a book.' And most of those I love, because I find out about books and movies I didn't know about. But are we really going down a path where it controls what I do? You could argue, I guess, it already does.
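[Editor's note: a toy version of the suggested-response feature Russ describes. Gmail's actual Smart Reply is a trained neural model; this sketch substitutes a simple keyword-trigger ranking, and every trigger list and canned reply in it is an invented assumption, shown only to make the "nudge" mechanics tangible.]

    # Hypothetical trigger words mapped to canned replies. All invented.
    TRIGGERS = {
        "Thanks so much!": {"attached", "sent", "done", "finished"},
        "Sounds great!": {"meet", "lunch", "schedule", "plan"},
        "No, I don't think so.": {"cancel", "decline", "refund", "extend"},
    }

    def suggest_replies(message, k=2):
        """Rank canned replies by how many of their trigger words the message hits."""
        words = set(message.lower().split())
        ranked = sorted(TRIGGERS, key=lambda reply: len(TRIGGERS[reply] & words),
                        reverse=True)
        return ranked[:k]

    print(suggest_replies("Can we schedule lunch on Friday to plan the launch?"))
    # -> ['Sounds great!', 'Thanks so much!']  (the second slot is an arbitrary tie)

[Even in this crude form, the design choice is visible: whoever writes the canned replies decides what it is easy for you to say.]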

Amy Webb: Well, so that's the--so again: How did we wind up at this point? See, I constantly ask these questions. Why would somebody have thought to make that? And you could argue that one of the things that the modern Internet brought us was the tyranny of choice. Right? And we have access--you know, I think of when I first moved to Japan in the '*ahem, ahem, ahem,*' mid-1990s, a long time ago. You know, there was no Internet where you could buy stuff. There was an Internet; but e-commerce was very, very early. And if I wanted Crest toothpaste, I had to fax my request to the foreign buyers' club and wait for a month. The fact that you can now order that on Amazon--you know, as well as, like, any other thing that you--

Russ Roberts: Express. You've got a lot of choices in some cities--

Amy Webb: Right. You could argue that using AI to make recommendations was simply an antidote to the tyranny of choice which we created for ourselves in the early days. And, one could certainly argue that that's not necessarily a bad thing. I mean, the big joke at Netflix now is that Netflix will literally green-light everything. Right? Which is why there's sooo much stuff on Netflix.

Russ Roberts: It looks that way.

Amy Webb: To the point now where--if you compare it to the Netflix of 3 or 4 years ago, it's hard to find, to surface[?], great content. So, that's one side of the coin. The other side of the coin is: Nobody asked me what I wanted. And somebody somewhere made a decision that this nudging is best for me. And, let me give you a concrete example of how that manifests in my life, in the real world. I have not been in a car accident. I think I'm a pretty safe driver. I don't tend to break the rules. The car that I drive--when I back into my driveway, the sound automatically turns itself down. So, I have a--

Russ Roberts: On the radio--

Amy Webb: On the radio. So, I have a parking pad. I don't live on a busy street. I have a garage that's tucked pretty far away. And I always back in. And, somebody decided that it was best for me, as the driver, to automatically turn that radio down, regardless of what I'm listening to, regardless of what kind of driver I am--any time that I've got my car in reverse. There's no law saying--there's no Federal mandate or law requiring that. There's no statistic--as far as I know, there's not enough data saying that an accident will be prevented or some huge number of accidents will be--

Russ Roberts: [?]

Amy Webb: You know what I mean? So, like, somebody just thought that would be a good feature. And I can't override it. That may not seem very important. To me, this is like a paper cut. And, the challenge with paper cuts is that you get one or two, and you don't sort of notice them. Maybe they are annoying for 5 seconds, and then you kind of just learn to live with them, right? And you don't notice them any more. What we are talking about with AI--these systems built by relatively small, homogeneous groups of people who are making decisions intended to govern us all, working at 6 companies in the United States and 3 in China--the problem is that we are going to start experiencing paper cuts at a fairly rapid clip. You have one or two cuts, not a big deal. Suddenly your entire body is covered in paper cuts, and your life is very different. You know, you may still be alive; but, I mean, stop and, like, visualize and think about what that would feel like. Suddenly life is nothing like it was before. You are miserable. And you don't have any way to override those paper cuts, because they just keep coming back, seemingly out of nowhere. That's the kind of future that I'm hoping to prevent.

53:27

Russ Roberts: So, the normal way--I have two thoughts on that. And I'm not sure they are right. But two thoughts. One is my thought about how culture changes. You know, if you put my grandfather, born in 1898, into the modern world, he'd find it very difficult. There would be a lot of things he wouldn't recognize. In just a hundred years--when he was a young man of 20 in 1918, roughly a hundred years ago--being 20 now is really different. It would be weird for him to watch people walking down the street looking at their phones all the time. He'd think they were probably mentally disturbed. Many of them would be talking while they are walking, with their earbuds. And it would be jarring. And, more than just jarring--you could explain some of it to him--just the things that gave him pleasure would be different. And maybe not available. Which is part of your point, right? The freedom to do all kinds of things. Some of them small, like listening to the music at the same volume as you back into your driveway. Some of them large, like you say: It's coming. There will be things coming. So, one thought is that, if they come, maybe people won't be as bothered by them as we are in thinking about them now. The second issue is what happens if you make a really bad decision--and a lot of your book echoes some of the concerns of Cathy O'Neil in her book, Weapons of Math Destruction; and she was a guest on EconTalk; we'll put a link up to that episode. As you say, it's a very homogeneous group--mostly white men--designing these things. But historically, in a capitalist system, if you don't design things well and take into account that people aren't like you, you don't do very well. Right? If you think everybody is like you and likes to sit and code all night in your room, you are going to be a lousy designer of products for people. What's scary to me--and the concern that I share with you--is that I'm not sure that the profit-and-loss motive is doing a really good job of constraining those choices. And I see it in lots of ways. Some of which you talk about in your book; some of which I see elsewhere. The freedom that Amazon and Google and Apple have to do things that are just kind of funky--I can't even describe them. Normally a company couldn't do that, because they'd lose so much money they'd go out of business. But there's an enormous cushion for these companies, in terms of their profitability. And so, let's turn--I'll just tell listeners that before we started this, Amy, I said, 'We'll spend the first half on what the problem is, and the second half on what to do about it.' And we are now, oh, 55 minutes in. And so if we can go a little over an hour, that would be great, talking about what to do about it. So, normally you wouldn't do anything about it. You'd say the profit motive and competition will constrain these kinds of ridiculous mistakes, and forms of arrogance, and tribal weirdness that this culture has produced out of Silicon Valley and Redmond and elsewhere. But, I don't see it happening. So, what I naturally look to is: How do I inject a little more competition into the system? How do I change the incentives that these folks face, to do a better job of taking account of what I want, not what they want?

Amy Webb: Yeah. So this is where things get a little complicated. And, you know, I just want to be very clear: I don't think Big Tech is the enemy. I don't think that the G-Mafia are the villains. In fact, I think they are our best hope for the future. You know. And, introducing competition at this point may not elicit the same type of responses that you might see in other market sectors, in other industries. And I think part of the reason for that is that the technology that these companies build and maintain is the invisible infrastructure powering everyday life. It's not a single widget, or even a series of widgets--

Russ Roberts: Fair enough--

Amy Webb: And I think the challenge is that if you try to, for example, introduce competition in the Cloud Space, which might be the, you know--or even try to break up Amazon, a la Baby Bells, from years ago--

Russ Roberts: right--

Amy Webb: And I've actually heard that suggested before--you know, the challenge is that the technology that Amazon Web--like, AWS [Amazon Web Services], the infrastructure and the technology that that entire system relies on--and therefore huge parts of the government and our largest businesses, which are its customers--the challenge is that that technology bleeds over into other aspects of Amazon's core functions. There aren't solid walls. And so, if it's the case that at this point competition is not possible, then what are some other ways forward? You know, this March--so, pretty soon from now--is the 30th anniversary of Tim Berners-Lee's seminal paper and suggestion to CERN [Conseil Européen pour la Recherche Nucléaire, European Organization for Nuclear Research] that sort of outlined the core premise of what became the World Wide Web. And everybody at the time who saw it thought it was kind of a boring but interesting idea. And the challenge is that nobody thought through what happens if the Internet becomes something beyond universities connecting to each other to share research--it becomes something else. And technology always becomes something else. Right? Then, how do we mitigate that? How do we protect against plausible risk? Right? And, one way, I think, that we could think about the future of AI is to treat it, you know, similar to a public good--the way that we might treat air. Right? And I know that's complicated, and I know it sends some shock waves into economists who would argue with me that I'm totally off base and you can't possibly apply that. But, the public-good concept I think works because, first of all, it tells us that we all have a stake: that we are not just going with the flow. And it also then helps us think about global guardrails. And that, then--you know, I know it sounds like I'm angling for regulation. I'm not. I'm angling for widespread collaboration, with some very specific, agreed-upon tenets[?]. So, you know, principles that go beyond the obvious, like 'make sure that AI is safe.' But that, you know, everybody on the planet would agree to things like: whenever an investor invests money in AI, for whatever reason, a part of that investment must be allocated to making safety a priority. Or, cleaning up one of the training databases. Things like that. And having some kind of global body--again, I'm not usually in favor of huge government and big bureaucracies, but I think in this particular case, we can't just assume that these companies, whose motivations I don't think are always in line with what's best for humanity, are going to take care of this stuff on their own. I'm sure your listeners know--like, a couple of weeks ago, Google had to reassure investors that its enormous spend on R&D was worthwhile. Like, people got spooked. You know, when we're talking about game-changing, huge technologies and research areas like AI--we have no Federal funding. We have no basic-research funding, or not anywhere near enough, in some of these areas, outside of military expenditures. Somebody has got to do it.
And the challenge is that investors expect, um, some kind of return on investment, or some kind of shiny new widget that gets revealed, you know, on a quarterly basis--as though you can schedule big R&D breakthroughs. You know, we have to--so, if there were some global agency that acted a little bit more like the IAEA [International Atomic Energy Agency]--with the caveat that I am not saying AI is a weapon--you know, then we would have some mechanism to think this through. We would need some kind of--you know, going back to those questions on tribalism and culture--I think we need to have some kind of global human culture or values atlas: one that is going to take time to build, that is not static, and that describes how we interpret things culturally and how we relate to each other. Because, ultimately, these systems don't just live within the geographic boundaries of our countries. They travel. So, um, yeah: I think that there are a lot of solutions that are, you know, top down. But we individuals have to take some responsibility as well. Which means we have to get smarter about what data we are shedding, and when and how and where and why. We have to demand transparency. And I think it's possible for the big tech companies to be more transparent without sacrificing IP [intellectual property]. You know? And our universities, I think, have to take more responsibility and shift their curricula to include difficult questions--not just in a single ethics class--so that they weave these questions and worldviews and, you know, other things into their core curricula. So there is no single fix here. The good news is that there is something for all of us to do; and collectively, if we can get it together to shift the developmental track of AI, I think the optimistic scenarios are possible. I really do. My concern is that everybody is going to say, 'I don't feel the pain all that much yet, so I'm cool waiting.'

1:04:04

Russ Roberts: Well, the first step is to pay attention. And I love your book for encouraging me--and anyone else who is listening--to pay attention. I think it does a great job of that. I think the challenge--this is where it gets complicated: I can't think of a single example where this kind of global collaboration works out well. To me, it's like the United Nations. It's a really great idea; it's a beautiful idea--you have a nice quote from Isaiah on the front, about beating swords into ploughshares. And the distance between the ideal and how it works in practice is so vast that my view is it's probably better not to have it at all. But I can understand that you can debate that. But, I'm not optimistic that a "global collaborative effort" would work in any way that would make you happy at the end of the day. I want to try to suggest--well, maybe I'm wrong. But I want to try to suggest a different approach and see if you think there's anything to it. So, you said it's like a public good. You're talking to an economist. I don't have any problem with that language. I think what certainly has public-good aspects to it is the role of digital stuff in our lives. You can't say, 'Well, I want my digital world to be like this,' and have yours be something else entirely. We kind of consume that one air that you are talking about; and I think that's very à propos of how to think about this. But is it possible, is it imaginable, that we could have a different way of interacting with each other digitally than the current way--one that would allow a little more of what we might call privatization? Or more choice? Or more options? So, right now, underlying all of this is this idea that some really bright people figured out some really clever ways to use knowledge about us to make money. And it's especially clever because it's free to us. On the surface. It's not literally free. It's not free in lots of ways, by the way. So, I used to say all the time, 'Well, Google's free. What's the big deal?' Well, it's not literally free in any sense. It's true I don't make a payment each time I do a search. But it turns out that of course Google uses the information from my searches, and access to me, in all kinds of ways--to charge people for access to me, instead of me getting to charge for access. And it's their pipe; so I kind of get it. So that's the way it's worked out. But we could imagine a different world--either through regulation--not my first choice either, obviously--or, I think, technologically. I want to come back to what Arnold Kling said in a blog post recently. He said, 'You don't like Facebook, how they handle privacy? Make a better one.' And you could say, 'Well, that's really hard to do. It's almost impossible. Everybody's already locked in.' And network effects. And blah, blah, blah. But we have a lot of really smart people. And one way to get around these kinds of scary, dystopian concerns is for people to say, 'I don't like the way the Internet is designed. I want a different one.' And, 'People smarter than me--I can't figure it out. People write occasionally that blockchain could be the basis for a different kind of Internet. I try to read the articles; they don't make sense to me. My fault.' But, I imagine that that could happen.
And it seems to me that's the right way to fix this problem: to build a different relationship between me and these companies that create services for me--and that, actually, are exploiting me.

Amy Webb: I think, if we are talking about the realm in which we as individuals have personal relationships with parts of the Big Nine, then yes. I think it is plausible--not impossible, but certainly challenging--for somebody to develop an alternative to Facebook, you know, one that promises initially to somehow get around a lot of the challenges that Facebook has had. At the end of the day, though, we are still humans. And the parts of the digital infrastructure that we seem to complain about will follow us. This is the same reason why I don't think that colonizing--like, everybody who wants to colonize Mars--it's like, 'That's terrific. It's a wonderful idea. It's not going to solve your problems.' The problems that you have on this planet are going to follow you to the next planet. Right? So, I think if we are talking about the realm of personal technology, sure: some of these issues can be solved. Somebody can certainly start another Twitter. I would welcome somebody starting another Twitter that has a different approach to speech. So, that's fine. I'm actually concerned about these systems that mine our data in a much broader sense. And not just our personal data, but our companies' data, our local traffic data--you know, all of these systems that are learning from us in real time. And, ultimately, these narrow artificial intelligence applications are beginning to gain some momentum. There is some terrifically interesting research out of a group called DeepMind, which is a subsidiary of Google. And, you know, I read one of their most recent papers. They've trained a system called AlphaGo--AlphaZero is the new version of the algorithm--which is now capable of going from zero knowledge to learning how to play several games. [A toy illustration of self-play appears after this answer--Ed.] And that may not seem all that thrilling to listeners. But what it portends--and what's really, truly remarkable about this research--is that, without humans working hard to train systems, these systems are now capable of training themselves. And also of creating child AIs to perform some of the tasks for them. And they are doing this in ways that defy our understanding. When we say 'artificial intelligence,' I think that that's actually a misnomer, because it assumes that the systems we are building, which are now propagating on their own, remotely resemble the way that we think. We don't actually understand enough about our own human brains. What's probably a better term is 'alien intelligence,' not 'artificial intelligence.' And semantics matter. 'Artificial intelligence' makes us feel as though we still have some agency. My concern is that, as these systems propagate, they become more and more alien to us in ways that we don't understand. And at some point they start making more important decisions, where the stakes are higher, on behalf of us all. And there is a God in that system; and that is the original group of people who created it, upon whose work the foundation was built and all the learning took place. So, if it's the case that we are in the midst of that transition at the moment, I'm hoping that enough people wake up: that we do not close our eyes just as the machines are gaining awareness. And that we ourselves wake up, and that we demand a change in the developmental track. And that doesn't mean that these companies can't make plenty of money. And it certainly doesn't mean that the companies are evil, or even that the people who work in these companies have some kind of nefarious plan.
I believe--you know, the Chinese government notwithstanding--I believe that the people who are in this are working on trying to solve humanity's grandest challenges. But they are doing so within ridiculous constraints that have to do with the market, and the whims of investors, and which direction the wind is blowing in Washington, D.C., and who has decided maybe this is the year for regulation. Those are my concerns. The personal relationship that we have with Facebook is of course a piece of this. But it's the bigger picture that ought to concern us all.
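[Editor's note: the "zero knowledge" self-play idea can be sketched in miniature. The code below is a toy--tabular self-play value learning on tic-tac-toe--and is not DeepMind's AlphaZero, which pairs deep networks with Monte Carlo tree search; it only shows how a system can generate its own training data by playing both sides of a game and backing up the results.]

    import random

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    values = {}  # board string -> estimated result for the player who just moved

    def self_play_game(epsilon=0.2, alpha=0.5):
        board, player, history = [" "] * 9, "X", []
        while winner(board) is None and " " in board:
            moves = [i for i, c in enumerate(board) if c == " "]
            def score(m):
                nxt = board[:]
                nxt[m] = player
                return values.get("".join(nxt), 0.0)
            # Epsilon-greedy: usually pick the best-known move, sometimes explore.
            move = random.choice(moves) if random.random() < epsilon else max(moves, key=score)
            board[move] = player
            history.append(("".join(board), player))
            player = "O" if player == "X" else "X"
        w = winner(board)
        # Back up the final result through every position visited, for both "selves."
        for state, p in history:
            target = 0.0 if w is None else (1.0 if w == p else -1.0)
            values[state] = values.get(state, 0.0) + alpha * (target - values.get(state, 0.0))

    for _ in range(20000):
        self_play_game()
    print(len(values), "positions evaluated purely from self-play")

[No human games, openings, or strategy hints appear anywhere above: the value table is built entirely from the program playing against itself, which is the property Webb is pointing at.]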

1:12:40

Russ Roberts: So, I think you and I--I assume--I know it's true for me; I assume it's true for you--we know a lot of these people. You know, I socialize with them occasionally when I'm out in California in the summer. And the people who work at the G-Mafia are wonderful people. But the incentives they face are what affect how they behave. And, in general, as listeners know, obviously listeners know--I like most of those incentives. But I do think, in this case, it seems to be a little bit different, potentially. I'll give you an example, and maybe we'll close on this; you can react to it. So, Facebook has had a bad run. They've had data breaches; they've had issues about whether they distorted what people saw in their timeline on political grounds--so, we haven't even talked about this: some of this is going to strike at democracy in all kinds of ways that we haven't even begun to think about or worry about. It's coming. I think it's going to get so much uglier than--we think it's ugly now; it's going to get immensely uglier. And I'm very, very concerned about that. And Mark Zuckerberg was dragged in front of Congress; and he did a few semi-mea culpas. But here's what he didn't do: What he didn't do was say, 'You know, a few years back, I was a really bright kid in a dorm room at a university with somebody; and we had an idea. And it turned into Frankenstein. It turned into something we certainly didn't plan, couldn't imagine; and now, we think we're steering it. It's kind of steering us.' The market, as you say, expects them to make a certain amount of money. He's got investors who came on when the stock was already high. They don't want to be told that it's not going to go so well in the future. But, what he could have said--he can't literally, but you could imagine a world where he would say, 'It's good enough. I like Facebook the way it is. It's a fabulous platform. And I'm not just going to run a bunch of ads'--like the ones they ran this summer that try to make us romanticize Facebook and feel nostalgic about its early days; those cute, really nice ads set to good music--'I'm just going to give it up. I'm going to turn it over to a foundation and let the people run it without concern for whether it makes a lot of money. I'm going to turn it into what we could call a utility. But not one that needs to make money. One that just serves people. And that foundation will be staffed by volunteers who love it and care about it, but who won't be driven to make money.' And this just sounds like the most heretical thing I've ever said on EconTalk. Because it sounds like I'm against making money; and everyone who knows me knows I'm not against making money. But, in a world where there's not a lot of competition, that desire to make money can be a nasty thing. And, I don't see any signs that--as I said earlier--that Facebook--I mean, Zuckerberg has paid a social price. I'm sure some of his friends are embarrassed. But, it's a weird thing: you can't get off that horse. It's already gone public. It's not yours any more. And, it seems to me we need to be thinking about ways to take knowledge, which is fundamentally what underlies these platforms--these brilliant, gorgeous, extraordinary ways that we interact with each other--and make them a little less about making money, a little more about doing something else. And the people who created them lose control of them. And then they're stuck. But my joke is: I love Evernote. Evernote is fantastic.
If it ever disappears, I'm going to be really lost. It's fine as it is! I don't need it to get better! I don't need it to get bigger! Keep it like it is. It's fine. It's great. Now, I understand it has to work with new platforms, so maybe it's not as trivial a problem as it sounds, just to keep it as a sort of static, historical artifact. But, this idea that you need to just keep mining more and more stuff out of my life to sell to other people I don't know about, as you point out, is a little bit disturbing. I'm off my soapbox now.

Amy Webb: No, no, no--how do you--listen, I was like virtually high-fiving you the whole time. But here's the--you know, how do we reconcile something that I think we all grok, like on some level, right? How do you reconcile what you just said with our market economy in the United States, where shareholders have been led to believe that Big Tech equals massive returns? I mean--

Russ Roberts: Well, you can't. You can't.

Amy Webb: Right. And the difference--so then what we're left with is China.

Russ Roberts: No! No! No. God forbid. No, no. There's a third way--

Amy Webb: But--

Russ Roberts: There is a third way. The third way is non-profit. It's a weird thing that we think that the opposite of government is business. The opposite of government is not-top-down. And not-top-down has two forms: business, and non-profits--that's foundations, philanthropy, voluntary organizations. A bunch of really smart people, if they wanted to, could create an alternative to Facebook, soon--but you'd have to have a reason for it to exist. It wouldn't be enough that it would be called 'Nosebook.' That's my dad's name for it--it's a little inside joke for my dad, because he can't remember the name of it; he's 88. He actually does. He just likes to call it 'Nosebook.' It makes him laugh. But, you can't just say, 'We're going to create a Facebook that isn't Facebook.' You say, 'Here's a Facebook that isn't going to filter your news, isn't going to allow hate speech'--whatever it is--and let people gravitate toward that. Now, one of the challenges of this "solution" is you don't want a whole new tribe of people who are all getting together--like, I don't want Conservative Twitter, Liberal Twitter, Libertarian Twitter, Nazi Twitter--even though Twitter is a really ugly place at times, at least I get to see who is being ugly. Sort of. It's not--of course, it can be anonymous.

Amy Webb: Right. That's the consumer--so again, that's the consumer implementation. Those are people using the products that are built. The challenge is that if you turn Nosebook into a nonprofit--if Facebook becomes a nonprofit; they've got plenty of money--I don't know exactly how that would work. But let's say that's what happens. Then, where does the enormous sum of money come from to push forward and do all of the magical things, for example, that Facebook R&D is working on? Some of which may not need to exist--like Facebook Portal--that's their--

Russ Roberts: their [?]--

Amy Webb: right?--

Russ Roberts: I'm not letting that[?] in my house. Are you?

Amy Webb: Yeah. Yeah, exactly. But, there are plenty of other things. I mean, Google has pushed pretty far ahead on some--again, like, we cannot think of these exponential technologies in a silo. We have to think about the relationship between AI and genomic editing and CRISPR [Clusters of Regularly Interspaced Short Palindromic Repeats], for example. Or the relationship between AI and collaborative robotics and smart cities. The challenge is that our Federal government--we don't have a giant pool of money sitting around to fund the kinds of basic research that are going to not just propel our own economy but, like, fulfill the promises a lot of us have been told about what our futures will look like once technology helps us out. This is the crux of the problem. And, again, why I keep coming back to: we have to allow these companies to keep making money. The only way forward--the only way forward--is if the G-Mafia can be the heroes to their shareholders, and if the shareholders can exercise some patience, and there is some courageous leadership somewhere in the investment community, where somebody is willing to stand up and say, 'We're going to let these companies keep their heads down and work really hard; and it's cool if we don't earn huge margins over the next, like, 16 quarters. We're going to be okay with that.'

Russ Roberts: Nyeeh: Won't happen.

Amy Webb: I know. But like, somebody, somewhere, is going to have to exercise some--

Russ Roberts: So, let me try a different story.

Amy Webb: Sure.

Russ Roberts: Let me try a different story. First of all, there's no free lunch. So, it would be great to have infinite innovation at no cost, always good for people. Doesn't happen. Technology always has these spillovers that are destructive. We cope with them, though. So, I think we have to have a little bit of faith in human adaptation. And I think part of your book is a warning that that's going to be a lot harder than you think--because it's going to be here before you know it, in a way that you can't do anything about. And I think that's a genuine concern. And I salute you for it. It's a real issue. I'm not suggesting that we are going to keep getting this innovation if we put these things in non-profit form. We won't. We're going to have to give up on some of those future miracles--that we are going to live to 140, or whatever the thing is. And the truth is, we human beings--we don't like giving up on that. We like making the world better in different ways. So that's not going to change, either. So, I'm left, at least for now, with the idea that culture changes. I don't think you are ever going to get a world where investors say, 'Eah, I don't care how much money I make.' What you could get, though, is a world where people are ashamed to do things that are destructive of human flourishing and human agency and freedom. And maybe that'll help stem some of this tide.

Amy Webb: I'm heartened, at least, that you're willing to have this conversation and that people are willing to listen to the conversation. Because, I mean, as our spirited discussion points out, right now there's no easy answer. So, I've come up with a handful of, I think, very pinpointed, practical ways forward. You know, you've got an interesting idea on a way forward. I think the key point here is: We need to think of a different way forward. Preserving the status quo gives China a strategic advantage that is going to become a problem for us the further along we go. And, you know, the G-Mafia working on their own, competitively rather than collaboratively, I think also causes us problems, and probably sets them up for a regulatory environment that will become problematic rather than helpful in any way--not just to them, but to everybody. So, we've got to stop fetishizing the future and talking as though AI were some distant, far-off thing; and get real about the challenges that we are facing. And, in the middle of all of this, I am calling upon the brilliant women and men and, you know, gender-non-conforming people who live and work around these companies to exercise some creative and courageous leadership to take us into the future. I don't know what more we can ask at this point.

Russ Roberts: My guest today has been Amy Webb. And what I didn't tell listeners is--I don't know, Amy, if you remember, but in the early, early days of EconTalk--I want to say 2006 or 2007--

Amy Webb: It was, yeah, a long time ago--

Russ Roberts: I brought you in to give me advice on how we could make EconTalk more successful. So, I'm giving you--well, let's say half the credit for our success--for giving your suggestions; and I want to thank you for that, and for a fascinating book and a great conversation.

Amy Webb: Thank you so much. It really was a--I learned a lot.