Explore audio highlights, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Stephen Karloff
Jul 29 2019 at 8:53am

I’m not sure she ever successfully answered Russ’s challenge to articulate the way people are being harmed (leaving aside potential use for political ends). The pushback from Russ about current marketing being very similar to old marketing seemed correct and wasn’t answered well by Shoshana. I am sympathetic to the case that all of this data could be abused to harm consumers, and was expecting to agree with Shoshana, but I think she failed to formulate a case against ‘human futures’ – a term which I think is not accurate and is being used to inflame outrage and shape the response of the people listening (ironically, considering this is the very claim made against those tech companies using behavioral data to shape human response).

Another great interview by Russ even with a more challenging guest.

David Zetland
Jul 31 2019 at 12:52pm

I’m replying to your comment, but I think that others have made the same point as you, i.e., “what’s wrong with the advertising revenue model?”

I personally think that ALL advertising is bad for consumers, so that’s where I’d begin, but I’d add that the internet allows LOTS of stealth advertising, as well as manipulation of the data we see, in ways that are much more deceptive than traditional advertising.

Note that I am directly disagreeing with the “rational consumers can filter good information” assumption that many free-market economists (like Russ) tend to take as realistic. I don’t.

Stephen Karloff
Jul 31 2019 at 2:14pm

I, too, do not believe people act rationally all the time (rationality is murky anyway, as rational people can come to many different, while equally logical, conclusions).

Curious to know what stealth advertising, and what deception, is taking place. I don’t notice myself being deceived or being acted upon without my knowledge in ways that are harming me. I know data is being collected that is used to target advertising at me, but so far, to the best of my knowledge, I don’t believe I’ve been harmed by it. I am completely open to believing I’ve been harmed. Are there examples you can share? I don’t ask to be snarky; I’m genuinely curious. I wonder if I am just unaware, or if we have a different definition of what being harmed looks like.

Steve

Floccina
Aug 6 2019 at 2:46pm

I personally think that ALL advertising is bad for consumers, so that’s where I’d begin,

What good is a product to you if you do not know that it exists?

Tom DeMeo
Aug 8 2019 at 4:05pm

Zuboff’s speaking style was painfully slow and she made long, slow, difficult arguments, but it isn’t hard to see a future where she is right.

Pre-internet, I think we all would be creeped out if someone stood outside our home on the sidewalk and watched our every move, following us wherever we went. We would find that threatening, even without any other gesture that conveyed malice. We are moving towards a world where we exhaust a cloud of data that explains every move we make in extreme detail.

The harm comes from the ability to apply that knowledge in ways we would find inappropriate, assuming we knew.

It is now possible, for a relatively small sum, for your employer, insurer, financial institution, government, politicians, or vendors (anyone who is inclined to buy the data) to track and analyze:

Your health related activities

Your location mapped over time

Who you interact with and how

Your consumption patterns and interests

Your sexual activities and interests

Your political opinions

Your legal transactions

Precise information on your financial status and any vulnerabilities

Your family and relationship issues

Everyone else you may negotiate with

Your psychological profile

Any recent tragedies or difficulties


I could go on and on. You might not get a job, or a loan, or an insurance policy, or you might get charged more for a product, or you might get exploited in a moment of tragedy. You might not ever know how.


Miss Kitty
Aug 14 2019 at 2:24pm

I’m not sure she ever successfully answered Russ’s challenge to articulate the way people are being harmed

I don’t think we should simply accept Russ’s framing of the issue as the correct one. Why should we assume that the veracity of Dr. Zuboff’s message depends upon whether or not consumers are being harmed at this precise moment? And what would even constitute harm – whose definition should we employ? I think Dr. Zuboff should have pushed back more explicitly against the terms Russ implicitly laid out, as that would have made for a clearer discussion.

I think a better question than “How are today’s consumers harmed in this specific moment by these complex and quickly-evolving processes that we don’t yet fully understand?” would be:

So you’re not concerned that opaque organizations with unprecedented data-collection and utilization abilities – organizations which also happen to possess and control the infrastructure that enables our 21st-century lives (our online social networks, information-distribution platforms, maps, research tools, etc.) – have, in less than two decades, already developed the ability to manipulate our behavior, without our consent or knowledge, in ways that extend far beyond anything achieved in the past two centuries of advertising (Russ’s implied analog to “this”)? Or that we seemingly have no recourse against this behavior, short of disconnecting from the internet and contemporary technology (a luxury unavailable to many) and hoping that enough of our fellow humans join us to prevent any potential society-wide implications from being realized?

Does this not represent a new economic paradigm rife with moral hazard? Russ seems to assume that consumers necessarily benefit from these multifaceted processes because they ostensibly “choose” to engage with them, but that assumption seems completely unfounded when we consider (as Dr. Zuboff picks up on) that information asymmetry is built into this paradigm. Consumers generally don’t know what they’re “consenting” to, or even that they’re consenting to anything at all. On the production side, technological capabilities are increasing so rapidly that firms still haven’t figured out how far their data could take them – one could argue that they don’t even fully understand what they’re “asking” consumers to consent to.

Finally, you add to Russ’s question “leaving aside potential use for political ends.” Why should we (and how could we even) leave aside the potential for political manipulation using the tools developed by and for surveillance capitalism? Where do you draw the line between politics and everything else on the internet? If producers within this paradigm are allowed to deceptively exploit consumers in other realms, why shouldn’t they be allowed to engage with the political? Dr. Zuboff’s book doesn’t seem to specifically leave out political implications of surveillance capitalism, so why should our discussion? That brings this comment full-circle – Is surveillance capitalism only concerning if we can identify specific harms to consumers in this moment? Dr. Zuboff doesn’t seem to think so, and I don’t think Russ made a good argument for the case that we should.

Pierre Trepagnier
Aug 14 2019 at 5:30pm

Zuboff’s basic point, although not terribly well articulated, seems to be that differences in degree become differences in kind. That is, the amount of metadata aggregated by tech giants permits much finer-grained knowledge of our wants and needs than previously possible. (Russ’s retort would presumably be that that is wonderful.)

However, my complaint is that she could never come up with a policy prescription other than more government regulation, or even understand that government regulation might be problematic, due to well-known issues such as regulatory capture.


David Gossett
Jul 29 2019 at 11:07am

In the future, I think all your data will be on a device inside your home — search history, purchase history, social profile, resume, health records, etc. You will install apps on this device allowing you to connect to peers and the rest of the internet. You control who sees what. Everything will be siloed. One manufacturer will not know what you bought from another. You control what you share at all times. Zero ads, unless you subscribe to a marketing channel. You will have an app that connects you to searchable content on the web. If you do decide to let your data be mined, it will be sent without personal identifiers. Maps will be fully stored on your mobile device. Your phone will connect to your device.

Technologists understand that Facebook, Google, Amazon.com, etc… are just growing pains. They are not built to last. Peer-to-peer is coming. It just needs time. These intermediaries grew fast and they will fall fast. We are just waiting for this generation’s Tim Berners-Lee.

Robert
Jul 29 2019 at 11:30am

I found this episode hard to follow. Most of the time was taken up by a narration that had little to do with the concerns the guest had. After listening to the episode, I’m still not clear what the concerns were, other than finding the enterprise a bit creepy.

Dusty Jamieson
Jul 29 2019 at 11:55am

Mr. Karloff – you are bang on: she doesn’t even acknowledge his repeated requests to elaborate on how these developments harm us more than they help us. I will concede that there may be, and likely are, some goblins hiding out there somewhere, and that the benefits are the more readily apprehended, but she doesn’t give us much to ponder. Mr. Gossett – I find myself leaning heavily toward your possible iterations of this ecosystem, as it certainly aligns more with my agency-oriented perspective – vs. the victim narrative on broad display throughout Ms. Zuboff’s version of our future. The more I listened, the more it became clear to me that this was yet another poorly disguised incarnation of a fundamental criticism of capitalism itself. Her entire premise is predicated on the assumption that citizens have no power over their own behaviour, and that the answer lies in some more heavy-handed intervention to “save us from ourselves”. She very carefully avoids saying this in so many words, but her overuse of collective attributions such as “psychologists say…” (which psychologists?), “economists say…” (which economists?), and “people act like xxxx…” (which people? I don’t do that) betrays her deeper belief that humans are intrinsically incapable of looking after themselves in a dynamic environment – because that is how she herself feels. It’s too bad that this book is getting any traction, because there are legitimate concerns that we absolutely should be aware of, such as what would happen if the government were to acquire this data. That is far more frightening, yet she seems oblivious, or rather dismissive of the idea completely, as I suspect she disagrees with me entirely.
At any rate, Russ was a gentleman throughout, as usual, but she missed a great opportunity to actually discuss these issues from both sides with someone who would have provided her a useful foil for many of her points, allowing her to refine and expand her knowledge and perspective – had she allowed him to contribute a little more.

see Evgeny Morozov’s excellent critique of the book: https://thebaffler.com/latest/capitalisms-new-clothes-morozov

Michael Joukowsky
Jul 29 2019 at 5:51pm

Dear Dusty

Thank you for the link.  Wow, that was the longest review I have ever read.  I actually became confused and found myself falling asleep!  But in listening to Ms. Zuboff and Mr. Roberts, I felt the same sensation.  So let me get this straight: people have data on us.  They had data before, but according to Ms. Zuboff that was not data like this data.  With this data they can produce an outcome, and that produced outcome is valuable.  How do they produce the outcome?  Through subliminal interactions where the reader/observer/listener/customer is coerced into taking an action.  I am 55; I have seen this argument used about film, advertising, product placement, games, video games, etc.  I am not saying it’s not true that this has an effect on the individual, but even in 1984, George Orwell finds the individual who wants to be different and is willing to die for it.  So, Ms. Zuboff, thank you for your interview, but I do not get it.  I am not convinced.  You have missed the subtleties of culture, race, belief systems, moral systems, and freedom, and how none of these yields a predictable outcome.

Joseph Kozsuch
Jul 31 2019 at 10:09pm

Yes, she was unpersuasive, but you are misinterpreting her argument that we need protection. The tech companies have the ability to be psychologically clandestine. They are not robber barons or snake-oil salesmen or basic advertisers, who all naturally and inevitably attracted some distrust; given their unprecedented omniscience, omnipresence, and utter lack of respect for autonomy, they are perhaps something else entirely. With their methods, human rationality is removed from consideration. Unfortunately, the average person using these services hasn’t contemplated their negative effects: the manipulation of thought, behavior, and ultimately action to serve economic purposes.

I agree that power should not be transferred to the government, but perhaps these companies should be split up so alternative models can bloom. These will never do so unless these behemoths lose their strangulating grasp on the Tech markets.

Ed Kless
Aug 1 2019 at 12:12pm

I agree that power should not be transferred to the government, but perhaps these companies should be split up so alternative models can bloom.

Sorry, Joseph, but this is a contradiction. Splitting up these companies is tantamount to transferring power to the government.

No, the better solution is for us as individuals to stop using these services. As is mentioned in several other posts and during the episode, one can start by using duckduckgo.com for search. I would also suggest switching to the Brave.com browser. It is based on the same code base as Chrome, but without the overlords watching.

Dylan
Aug 8 2019 at 7:54pm

Ed,

I’d like to agree with you about opting out and not using the services if you have a problem with the privacy implications, but this becomes more and more difficult every day.  I’ve been concerned about privacy pretty much from my first day on the internet.  When my library started allowing you to check out books and have them mailed to you via their BBS in the early 90s, I created a fake identity to use so that the library wouldn’t have a record of what I was checking out.  I did the same thing when I created an account with Amazon in 1995.  I’ve been using extensions to delete cookies automatically, VPNs to hide my browsing history from my ISP, and TOR for browsing sites that don’t require a login. I’ve never had an account with any social media company.  Until very recently, I hacked the smartphones I owned to remove all of the Google hooks from the OS, and only used FOSS.  The list goes on and on… and do you know how much difference it has likely made to the amount of information these companies have on me?  No way to be sure, of course, but I’m pretty sure the impact has been minimal, because they have so many ways of collecting information and associating it with you.

A few months back I had to get a new phone, and for the first time I just couldn’t be bothered to remove 90% of the functionality of the phone for some uncertain and likely minimal privacy gain.  Yesterday, shortly after getting home from a bike ride into the city where I listened to this episode, I noticed that there was a software update for my phone, which I downloaded and installed before bed last night.  This morning I woke up and tried to make a normal phone call, only to be told that the phone application wouldn’t work unless I gave it access to my camera, my location, and a whole host of other things that I had denied when I first set up my phone and that should in no way be needed to make a phone call.

Increasingly it seems that opting out in any meaningful way means completely removing yourself from society.  Because even when you’re not a customer of a company, they’re still collecting data on you.  How do I opt out of Equifax losing all of my data again?

I don’t know what the answer is.  I’m pretty skeptical of all of the government-imposed solutions.  I have trouble seeing how breaking up the companies gets us anywhere, since the natural incentives would just push them back together again.  Regulation along the lines of the GDPR just ends up being an annoyance for customers and favors the incumbents.  Having an ownership claim over your PII is interesting, but as Shoshana mentions, that doesn’t (and can’t) get us ownership over the predictions – which are the information we want – without having to give the companies the data they need to make the predictions.

I was disappointed in this episode, since this is an area that is of great concern to me, and I wanted to hear both a better synopsis of the problem, and hopefully some more creative solutions.

Matt V
Jul 29 2019 at 12:41pm

This was an interesting episode and I am ultimately not sure where I come down on this tension between privacy and the proliferation of the (many wonderful) services that use our data.

But I have to agree with some of the other commenters that I don’t think Shoshana successfully countered Russ’ basic question: why should we be more worried than hopeful? In fact, I found it hard to follow much of what she was saying, because it was all couched in terminology that seems unique to her book. Ultimately, I think I come down on this one where Russ is, and can’t say that this interview did much to offer any new ideas.

Angelo Romasanta
Jul 30 2019 at 3:33am

I share the frustration of all the other commenters. The guest tended to use a lot of buzzwords instead of just answering directly. I am sympathetic to her viewpoint, but she could have worded things much more simply. For instance, why call things “computational products” when everyone uses a more familiar word?

Mads Lindstrøm
Jul 29 2019 at 12:43pm

Having the state protect us against privacy violations seems a lot like having the wolf guard the henhouse. I could, of course, be happily surprised, but so far governments have seemed all too happy to use the privately collected data.

Andy M
Jul 29 2019 at 1:39pm

I wish there was more analysis in this episode and fewer conspiracy theories.
Russ tried to steer the conversation to constructive ends, but there was a lack of depth to Ms. Zuboff’s analysis. It’s all smoke and mirrors, with many extreme claims and few logical arguments.
The topic is worthy of an EconTalk episode, but I’m afraid I did not find this episode intellectually stimulating.
(Still my favorite podcast!)

Ed Kless
Jul 29 2019 at 1:51pm

I am 51 minutes into this episode and decided to come here to see if I should make the investment. I keep thinking: “And your point is…” It seems Zuboff never does get there. This might be the first episode in three-plus years that I do not finish. Russ is charming as always and has done his best, but alas, his guest keeps dropping the ball.

David
Jul 30 2019 at 4:36am

Ed-

Couldn’t agree more. I kept hearing plenty of appeals to authority and leading statements (“You’ll agree that…”, “You know…”), but not much substance to actually prove harm or really make me see this as a potential harm. Full disclosure: I work in digital advertising, but I came here to check my biases and make sure that I wasn’t the only one frustrated by this interview.

Jim Howe
Jul 30 2019 at 3:48pm

I just finished the episode and I also kept saying “and your point is…” or “and your solution is…” or “and the real problem is…”. I never really heard any answers.

Jeffrey Stiles
Aug 1 2019 at 10:18pm

I did the same thing, only at minute 48…

I’m in cyber security so thought this episode would be another interesting collision of my two worlds–similar to the Bruce Schneier episode. She definitely provided some interesting background, but I was really looking forward to a specific list of harms from the current system.

Roger
Jul 29 2019 at 4:14pm

Despite Ms. Zuboff’s inability to fully articulate her concerns, I would love to hear someone else’s take on this subject, including Russ’s. For all of the doom-and-gloom predictions surrounding the use of our data by these companies, I have yet to come across any reason to be bothered. I was sincerely hoping to gain some insight into the minds of these advocates, but sadly this episode came up short.

Dave
Jul 29 2019 at 4:51pm

I’d recommend that these conversations about data privacy and ownership start with a simpler question about the definition of data.  Data are measurements, and the measurements are clearly being made by the owners of these services (things that occur on their servers), even if they are measurements triggered by user behavior.  Even data on a user’s name is simply a collection of what the user enters (or otherwise shares), which may be misspelled or otherwise incorrect (and sometimes purposefully so).

If I run an experiment to measure the effect of water and shade on plant growth, and I measure the heights of the plants each day, do the plants own the data?  Do planets and stars own their astronomy data?  Even academic studies on people give ownership of the data to the researchers, with the subjects’ consent.  But that consent does not usually restrict the researchers to a subset of analyses they can apply to the data collected.  In fact, academic researchers often purposefully mislead subjects regarding the true purpose of an experiment, in an effort to observe more natural behavior.

My point is not that privacy is unimportant or that people should freely give up information about themselves without concern.  But the presumption that users own the data measured by service providers doesn’t follow logically, and so users and service providers should be able to agree to whatever terms they feel are worth whatever trade-offs are involved.

It could be the case that more regulations are needed about the clarity of disclosures of such terms, or against the collection of data outside the scope of the services that users were never notified about.  Or perhaps there should be more severe punishments when companies allow data on users to go to unintended hands (through sale or data breaches).

Other than that, this sounded like a prime example of why I claim many academics lean left: academics consider themselves to be experts; and the explanation is that people are too naive/distracted/etc to choose what’s best for themselves and their individual preferences.  Therefore we need regulations that rely on experts (Harvard is full of experts, right?) to make decisions for us.  Who needs freedom when we have experts to think for us?

Tom Murin
Jul 31 2019 at 12:18pm

Excellent points. I’m as afraid of the “experts” like her as I am of the big tech companies. She makes some good points, but there was so much that I disagreed with and plenty that I think is wrong. I am amazed by how much individuals believe they can control outcomes. This is the kind of thinking of the current Communists – it will be different this time because they will implement it better than it was in the past – yikes!

Peter
Jul 29 2019 at 6:38pm

I think there are important ideas here, but Zuboff struggles to communicate them persuasively.

For a clearer, concise exploration of similar themes, see Stand out of Our Light by James Williams (former Google AdWords pioneer, now a researcher at the Oxford Internet Institute).
A couple of quotes from Williams below, which I take to be in a similar spirit to Zuboff’s fundamental concern:

For too long, we’ve minimized the threats of this intelligent, adversarial persuasion as mere “distraction,” or minor annoyance. […] In the longer term, however, they can make it harder for us to live the lives we want to live, or, even worse, undermine fundamental capacities such as reflection and self-regulation, making it harder, in the words of philosopher Harry Frankfurt, to “want what we want to want.” Seen in this light, these new attentional adversaries threaten not only the success but even the integrity of the human will, at both individual and collective levels.
James Williams, Stand out of our Light: Freedom and Resistance in the Attention Economy, pg. xi, loc. 121-125

I’m not arguing here that the main problem is that we’re being “manipulated” by design. Manipulation is standardly understood as “controlling the content and supply of information” in such a way that the person is not aware of the influence. This seems to me simply another way of describing what most design is. Neither does my argument require for its moral claims the presence of addiction. It’s enough to simply say that when you put people in different environments, they behave differently. There are many ways in which technology can be unethical, and can even deprive us of our freedom, without being “addictive.”

James Williams, Stand out of our Light: Freedom and Resistance in the Attention Economy, pg. 99, loc. 2303-2309

[Comment edited per commenter request—Econlib Ed.]

Sabo Tosh
Jul 29 2019 at 7:00pm

A bit of an esoteric podcast. Luckily, as a surveillance-capitalism practitioner who has thought hard about this topic, let me try to answer Russ’ question.

What is the problem, what is the harm?

Informed Consent – nobody is asking for data, they just take it. We are not being specifically informed about what data is being collected, how data is being collected, what companies the data is being sold to, and how the data is being used.

Choice – behavioral and personalization technology is neutral: it can be used to provide great value to consumers, and it can also be used anti-socially to influence consumers and voters beneath the level of awareness. Hidden persuasion is not new to advertising, but the scale and efficiency at which this data is collected, analyzed, combined, and sold has never been so great. We need better choice mechanisms to prevent our personal data from being sold to potential bad actors. We also have no rights and no recourse to review, edit, delete, or, if we want, revoke the rights to use our personal data at a future time.

Lastly, and more big picture: to be more lovely, we would do better with an economy based on Human Intention, not Human Attention. This idea is best articulated in this tweet by @naval: “The modern struggle is lone individuals summoning inhuman willpower, fasting, meditating, and exercising, up against armies of scientists & statisticians weaponizing abundant food, screens, & medicine into junk food, clickbait news, infinite porn, endless games & addictive drugs”.

Of course attention works on our ape brains, but we need the option, the choice, to mute attention based signals and boost the intentional. The Slow Data movement.

If these ideas sound reasonable for a civil society, know that I stole most of them from the book below – highly recommended, highly concise, and information-dense, focusing on the problem and harms of personalized advertising.
Stand out of our Light: Freedom and Resistance in the Attention Economy
Williams, James

Danny Kao
Jul 29 2019 at 9:31pm

I’m sympathetic to her concerns but also found her largely unpersuasive. Exceedingly few people would use Android or Chrome if they were free but lousy (e.g. Bing, Yahoo, and AOL are also free). And not all wildly profitable companies are evil, just as not all non-profits are righteous.

Sabo Tosh
Jul 29 2019 at 9:51pm

Apologies for the double post.

My friend @Dave has hit the nail on the head, and this point is so important that I took it for granted and did not mention it. This may be where all privacy problems start.

“But the presumption that users own the data measured by service providers doesn’t follow logically, and so users and service providers should be able to agree to whatever terms they feel are worth whatever trade-offs are involved.”

Many very smart people at these companies hold this same opinion. I find it not only wrong both logically and morally, but dangerous to liberty and democracy itself. Personal user data is not the same as other data, people have rights.

If data is or can be attached to a user identifier id, it is personal data.

Personal data has to belong exclusively to the person providing the data.

No matter what value is created, a company should not be able to own your personal data. They can collect and use your personal data with informed consent, transparency, and the ability for the data owner to audit, edit, delete, and revoke use privileges – but your data ultimately belongs to you, no matter what legalese is hidden in a 10-page TOS agreement.

That right to private citizen data privacy and data ownership needs to be codified in law.

Dave
Jul 30 2019 at 2:12pm

“Personal data has to belong exclusively to the person providing the data.”

This again shows a basic misunderstanding of what data are.

The data are events that occur on the servers rented or owned by the service provider.

If I give (or even sell) you my autograph, do I own it forever?  It’s connected to me by personally identifiable information (my name).  And it is (analog) data.  Maybe it’s even enough data for you to simulate my handwriting and forge documents.  But no, I would not own it forever just because it’s connected to my identity; I transferred it to you willingly.  If you did something illegal with it, or violated some condition or a contract we made when I gave or sold it to you, then you would face consequences.

Some data collected by service providers about users are sensitive.  This may lead such providers to provide more credible assurances about the data or invest more in security to gain trust.  And if providers with data that users consider sensitive start abusing that trust, the users can leave the services.  It’s also important to remember that different users will consider different kinds of data to be more or less sensitive, such that broad hurdles may suppress useful services where many users did not require protection in the first place.

That’s not to say there is no room for regulation, but I think it’s important to think carefully about what the data are and how they are collected before making broad reforms that could hurt consumers of many services.

Sabo Tosh
Aug 6 2019 at 4:05pm

“This again shows a basic misunderstanding of what data are.
The data are events that occur on the servers rented or owned by the service provider.”….

All of these examples – plant-growth data, an autograph, event data – are, I think, pretty much just category errors. A plant is not a person; there is no digital-privacy argument here for the plant. An autograph is not the same as, for example, my shopping-profile data. Server event data? Who generated the event – and was my personal data used to seed derived data?

TL;DR: There is no personal data without a person. A person created the data; a person owns their own personal data, not a company. I am not saying personalization is bad – far from it; personalization has the potential to unlock vast amounts of human potential. But to have a world we want to live in, people have to come first; people are not products to be bought, sold, and traded. I don’t trust companies to do the right thing with my or your data.
There are very smart, well-meaning people at all these companies, but we both know priority must always go to revenue for companies to succeed today. When people are the product, there are bad incentives to always put privacy initiatives way below the line.

JFA
Jul 29 2019 at 10:21pm

Diane Coyle’s review of the book: http://www.enlightenmenteconomics.com/blog/index.php/2019/02/an-unpopular-confession/

My experience with this episode was much like Coyle’s experience with the book: I didn’t finish.

I smell obfuscation when your answer to “where’s the harm?” is to ramble on about how you need to set up some esoteric (most likely Marxist) framework rather than answer in plain English.

DB
Jul 29 2019 at 11:21pm

Boy, this was a rough one to listen to, and it's too bad given that the topic is interesting. I was thinking from the very beginning, "But what are the damages tech surveillance causes?", and Russ quickly zeroed in on the fact that this was an unclear point.
My own intuition is that there probably isn't much harm; it's just new technology that we somewhat fear, because that's how humans respond to new technology. But really trying to be open to the idea, my gut is that any potential harm may lie in exposure to additional risk (better ad targeting is fine, but aggregating all of that data makes it easier for bad actors to somehow access it later to cause tangible harm). Or perhaps it is embedded in the system dynamics somehow (when we lose a sense of privacy, we behave differently in a way that externalizes harm to others; maybe we take fewer risks because failure is more public, and we all lose out on the payoffs of each other's optimal risk-taking).

I was also disappointed at how the guest was sometimes rude towards Russ, who was by all means a very polite and open interlocutor. I don’t think I would listen to the episode if she returned as a repeat guest.

Schepp
Jul 30 2019 at 12:15am

Thank you, commenters, for pointing out that Dr. Zuboff seemed dead set against any real exchange of ideas.  I put the podcast on 1.5x speed after she interrupted Dr. Roberts and said he was not giving her a chance to finish, followed by her soliloquy with a few "OK, Russ"es, as if she would ever let Dr. Roberts actually contribute any meaningful discussion points.

I am sure there are some subliminal messages that act upon us.  I also know that when apps are trying to sell me the same item I purchased two weeks ago, it means big data is still pretty weak.  I certainly know that Waze and the Maps app on my iPhone are trying to sell me something, just as Dr. Zuboff was trying to sell us something on the podcast.  Sales pitches are a necessary part of markets, and some acceptance and some rejection are a necessary part as well.

As the commenters above point out, some data practices seem to claim rights to our data.  I would like more choices in how I pay, so that I could choose to pay for a service in money rather than in privacy.  I remain more afraid that government would want to control most data and that government insiders would use that data against me.

Gerald de Haan
Jul 30 2019 at 1:22am

A great episode.  I think Shoshana did a great job given the complexity of the topic.  Getting people out in front of McDonald's feeling hungry, all by remote, subliminal control, is truly concerning.  Only a small step further and the SCs will be impacting voting decisions (or have they already tried that?).  It is all very well to suggest that people can choose not to have location services on (for example).  But they don't, just like they don't decide to stop watching bad TV or eating food that is bad for them.  People are trading agency for convenience and, sadly, the latter wins most of the time.  This is an interesting space.  No one has all the answers, but the fact that the conversation has started is great.  Thanks, Russ, for giving this topic some "air-time".  I used the Apple Podcasts app to listen to it, so I'm sure the fact that I have listened to it will not be used for/against me 🙂

Gabriel
Jul 30 2019 at 3:27am

I swear that at some point after Russ asked her again, "What's the harm in all that?", she was just going to say, "Because they are making money!" Honestly, that was all it seemed to me after trying to process her rather obfuscated language.

I think it’s also an interesting case of Kling’s 3 languages of politics: her answer to “What’s the harm” was descriptive instead of argumentative, probably because in her circles it’s good enough to say that a company is big and very profitable to mean that there’s something wrong in that.

Just as Caplan says, these tech giants have given us so much, and yet we hear that the world has never been worse. Laypeople have the best intuitions here: they just compare what they have access to now with what they had when they were kids, and they are amazed that they have all these things, for free.

[Note: See also Arnold Kling’s EconTalk episode, Arnold Kling on the Three Languages of Politics. –Econlib Ed.]

David Walker
Aug 7 2019 at 1:09am

I’d like to think it was all much more sophisticated than that, and I’d welcome an explanation I could relate to. But Gabriel, I think your diagnosis is pretty much right: her underlying belief is that if big business does it, it’s probably bad.

My own experience as part of the surveillance capital machine is that these technologies need to be scaled because they’re surprisingly ineffective.

 

Scott
Jul 30 2019 at 8:39am

For those of you that don’t see the harmful effects of these developments, please ask yourself this question:  Are you willing to have your behavior modified for the financial and political gain of  someone else without your approval or even knowledge?

If the answer is “Sure! as long as I’m getting awesome food delivered more quickly and having more fun on my phone, who cares?”,  then pick the moral question that means the most to you, and answer the question again if you knew with certainty that the entity benefiting from the control of your behavior holds the directly opposing view and is willing to minimize your ability to communicate about it, take action on it or even undertake the activity itself.   While your agency as a free economic actor in markets that exist to satisfy your utility may not be as valuable to you as whatever moral question you fit into the foregoing question, our freedom to choose how we allocate our personal resources is arguably one of the most critical factors in our existence.

But if you're still willing to let your behavior be modified, where do you draw the line?  Under what circumstances are you willing to say, "No!  I won't allow a third party to know that about me, or to control my behavior"?

The idea of extreme transparency was explored in the book The Circle by Dave Eggers.  Note that the movie and book end very differently.  The book isn't a great work of literature, but it is a worthy attempt at the logical-extreme thought experiment.

The incentive for a small subset of humans to control the masses is not a conspiracy theory, and calling it thus only shows your ignorance of the human condition through history until the founding of this country.  Why was the Fourth Amendment to the U.S. Constitution put into place? The evidence is clear that technological authoritarianism is real.  See present-day China.  I agree with Ms. Zuboff’s conclusions, but the solution appears as bad as the problem.

DavidG
Jul 30 2019 at 9:50am

Russ was devastating. With the very few words he spoke throughout the podcast, Russ demonstrated the vacuity of Zuboff’s argument. It is not always necessary to spell out your point in detail. Instead, by zeroing in on the most important question, and politely asking “why is it a bad thing that the data is being used in this way?” Russ made his point loud and clear.

It was a lot of work to follow Zuboff’s dense style, but it was crystal clear that she had no answer to Russ’s question.

Andy M
Jul 30 2019 at 12:57pm

Unfortunately, that doesn’t mean that there is no answer to Russ’ questions. I imagine most of us are listening for an informed debate, and there was little information or debate present. I’m not interested in anyone getting “owned”, as that usually means they didn’t bring anything to the table.

I personally think that most of the concerns are overblown, but I would love to hear reasoned arguments to the contrary. We are merrily building some great tools for future (or present) police states — that worries me a lot more than Google and Facebook finding alternative revenue streams.

DavidG
Jul 30 2019 at 3:04pm

I agree that Zuboff's lack of response doesn't mean that there is no possible response to Russ's question. However, it does suggest that there is probably no easy response. And like you, I had been hoping for an informed debate.

Harry Robinson
Jul 30 2019 at 10:34am

These systems don’t stop consumers from creating consumer groups or even creating their own mechanisms for doing similar things, just with greater privacy.

I think her repeated claim of threats to democracy is very enlightening. I guess she doesn’t realize why all democracies in history have failed and that we are by empirical evidence an oligarchy.

I can also easily make the argument that we are a fascist oligarchy, at least from an economic standpoint, if you consider overall tax rates, the total number of wealth-redistribution programs at the various levels of our government, and the massive military-industrial complex and police state, which were the primary economic components of those historical regimes considered to be fascist.

She’s worried about Facebook, Google and YouTube and their lack of openly sharing their data with society and she wants our fascist government to regulate them? Hell, the oligarchs are going to most likely buy the data from them if they are not the ones that initially financed them.

AD
Jul 30 2019 at 11:51am

As Russ said many times, it is possible to live without these services. At some point, can you keep saying that “no one” knows about all this data mining? By all means, keep making it known.

I think the one substantive concern she raised was about facial recognition on the street. Everything else was just jargon, "digital architecture" and "human futures" (I'm still not sure what that means).

Rob
Jul 30 2019 at 1:30pm

The ‘human futures’ phrasing is a clever way to make it sound like they’re selling rights to own/control people themselves which will kick in in the future — rather than merely forecasts of what they’ll want to buy in the future.

Jeremy
Jul 30 2019 at 12:30pm

The intrusion of the private sector into our private lives gives most people a “creepy feeling”. Like Russ, the thought of businesses mining data from inside my home makes me uneasy. My presupposition is that the continuous deterioration of privacy is bad. I was looking forward to a discussion about some concrete/ specific consequences to solidify my vaguely negative feelings. Russ did his best to get her to address this but she seemed unwilling.

I was at the bookstore today and saw Ms. Zuboff’s book….. it’s thick. I would assume a chapter or two focus on these questions but I guess she doesn’t want to give away anything for free. That’s a bit ironic considering her not so capitalist feelings about services provided by Facebook and Google.

Blackthorne
Jul 30 2019 at 1:09pm

This was an interesting episode, and I think both Shoshana and Russ made some good points. I agree with the other commenters that Shoshana didn't do a great job of outlining what the specific threat of surveillance capitalism is. At the same time, I think most people feel somewhat uncomfortable with the amount of data companies collect, and there is a reason companies collect information discreetly/covertly. I'm surprised there wasn't much discussion of the political influence of large tech/news companies, since this to me seems to be one of the obvious consequences of providing companies with data. Ultimately, it's a "tragedy of the commons" type of problem: the return I get outweighs the harm of providing my individual information, but with enough users a company can potentially harm our social environment. Though I also think firms tend to overstate the capabilities of their algorithms.

Robert Wiblin
Jul 30 2019 at 1:26pm

I see other commenters have had the same reaction as me. The problem with this one is similar to, though less extreme than, the problems with the episode 'Matt Stoller on Modern Monopolies' from two years ago, one of the all-time most frustrating episodes of the show.

I feel like my view on this topic is the same as Russ’. I’m nervous about the commercial implications, but not sure if there’s actually that much to worry about.

But on the political side, the idea of a few organisations having a very strong and invisible ability to shape the opinions of hundreds of millions, and potentially get them to support very bad ideas, seems more obviously risky.

If she chose to, I think Zuboff could draw out a vision of the future where this technology of monitoring, and opinion & behaviour shaping, becomes gradually much more powerful, and is ultimately used for harmful purposes.

It could, for example, be used to get people to vote for terrible political candidates, using messaging targeted at whatever topic each person thinks most irrationally about, focussing different and contradictory messages on each of us. We see a bit of this now, but what if it increased in scale 10-, 100-, or 1000-fold?

But for some reason she isn’t inclined to join the dots for listeners. None of the things she described is that bad, at least as yet; using traffic data to coordinate a smart city sounds good to me and I love Spotify’s recommendations every week.

So I'm forced to try to figure out for myself what plausibly can and can't go wrong, and I presumably can't do as good a job as someone who researches this professionally.

Earl Rodd
Aug 3 2019 at 6:04pm

Well said. I would only add that every harm I could see is due to monopoly. The "surveillance capital" itself was not the root. And yet the guest preferred only a regulatory solution; I was seeing a world in which EVERY web page made me click "I agree".

Josh Hardison
Jul 30 2019 at 1:44pm

The topic is interesting, and these issues need attention, but I found her unpersuasive.  There were some points that I thought Russ could have pushed on more.

I don’t know what “a new economic logic” means.
This “unprecedented market in human futures”… has she never heard of life insurance, or really any form of insurance?  Existing futures markets all point to some human activity eventually.  What’s the difference?
“Clickthrough Rate” = “Prediction” is backwards.  Clickthrough measures human action.  You might make predictions based on that measurement, but the measure itself is not a prediction.  If clickthrough makes predictions, it was never explained how, or why it was significant.
Russ touched on this, but haven’t advertisers tracked human responses forever?  Granted, they have much more data now, but there’s nothing fundamentally different.  I’m guessing that Russ just thought it would be cruel to push on this point any further.  You’re a better man than I, Professor!
I can only hear “incredibly” and “unprecedented” so many times without getting more skeptical.

JFA
Jul 30 2019 at 2:30pm

If you ever hear someone use the form “______ logic”, it is usually some sort of social theory jargon. As far as the click through, her conversation on this topic (like everything else she talked about) was incredibly unclear. I think what she meant is that Google could use their data to predict click-through rates, allowing them to design more targeted/well-placed ads. This is what allowed them to dominate the internet ad market.

DB
Jul 31 2019 at 11:44am

“Clickthrough Rate” = “Prediction” is backwards.  Clickthrough measures human action.  You might make predictions based on that measurement, but the measure itself is not a prediction.

I thought this was odd at first as well, but halfway through it clicked for me what I believe she meant. Even in the earliest days of web advertising you could measure CTR and adjust ads based on that performance, but you would have to pick where the ad would display yourself and bid on that.

One of Google’s innovations was to predict an ad’s CTR before it displayed and track/refine its prediction of an ad’s performance over time. This meant that it could offer performance based ad inventory, where you would set a max bid and let Google choose where it would display based on its prediction of CTR. Google will even sometimes choose ads that have a higher predicted CTR but lower bid than others, with the idea being those will generate more cash with click volume and be more relevant to users.
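The ranking logic described above can be sketched in a few lines. This is a toy illustration with hypothetical numbers, not Google's actual auction: score each ad by expected revenue per impression (max bid times predicted CTR) rather than by bid alone.

```python
# Toy sketch of performance-based ad selection (hypothetical numbers):
# rank ads by expected revenue per impression = max_bid * predicted_ctr,
# instead of by the bid alone.

def pick_ad(ads):
    """Return the ad with the highest bid * predicted-CTR product."""
    return max(ads, key=lambda ad: ad["max_bid"] * ad["predicted_ctr"])

ads = [
    {"name": "high-bid", "max_bid": 2.00, "predicted_ctr": 0.01},  # 0.020 expected
    {"name": "relevant", "max_bid": 0.50, "predicted_ctr": 0.05},  # 0.025 expected
]

winner = pick_ad(ads)
print(winner["name"])  # relevant
```

Here the lower-bidding ad wins because its higher predicted clickthrough yields more expected revenue per impression, which is exactly the "higher predicted CTR but lower bid" behavior described above.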

One interesting thing is that this transition to "performance marketing" was driven not by monopolistic tech giants but by advertisers pushing back against pay-per-impression ads, preferring to pay only for actual clicks.

I think the guest could have more accurately explained that distinction.

Rod Cain
Jul 30 2019 at 1:49pm

I found this podcast personally interesting, as I left the publishing industry in 1999 during the monopolization period preceding the transition to the controlled circulation business model (free distribution in exchange for rights to your personal information for third party monetization) and found myself re-living my business experiences with Amazon.

At the turn of the century, the publishing industry was struggling with how to use the world-wide-web to evolve our information and entertainment products (as very few publishing businesses were selling information as a service at this point). Publishers with 100% paid circulation also believed our wholesalers were not effectively reaching all possible retail market-share opportunities, so we were seeking alternative options (competition) to serve this market. Mixed circulation-and-advertising publishers also believed there was additional market reach within the new media channel, but they faced the economic challenge of how to extend reach without bearing legal responsibility for royalties in a market with NO revenue streams from that reach (as none of our third-party business customers were willing to invest incremental budget in it). These centuries-old publishing models eventually partnered with Amazon and Google, which eventually replaced these publishers, ironically, with their own business model.

For example, in the late '90s, while at a major book publishing firm, we partnered with Amazon in an attempt to increase retail sales volumes. In our desperation to reach a new market first, we made the mistake of first matching and then surpassing our wholesale/retail pricing contracts for Amazon, only to learn we had just cannibalized our existing profitable retail market share at a lower margin. Then we started to understand that our distributor (Amazon) was also beginning to leverage our "lifestyle" information-and-entertainment business model, expanding its product and service offerings based on our analytical process.

Around the time of the Y2K crisis and the dot-com bubble burst, we also learned that the marketplace winners had no intention of replacing their conquered markets by producing any durable goods, finished goods, or services. This sparked the rebirth of the controlled circulation model, which, instead of employing people to create and service the content, used digital footprints as a self-selection mechanism defining the controlled circulation via the world-wide-web.

The concerns raised by Shoshana Zuboff in this podcast, reminded me of how I used to focus upon developing new products and services to improve or enrich people’s lives and left me wondering who is going to be developing and producing the goods and services for the digital distributors. . . . .

Scott G
Jul 30 2019 at 4:19pm

What’s the privacy concern?  For most people the biggest threat is still government, and not just one.  There are probably a few to be concerned about.

Some tech companies are a concern too, but it seems that the market is responding well to privacy concerns, and the good old-fashioned alternatives still exist.  For example…

DuckDuckGo.  A search engine that works well and doesn’t track you.   https://duckduckgo.com/spread
Apple.  They’ve gained my trust via statements like this one: “Every Apple product is designed from the ground up to protect [your] information. And to empower you to choose what you share and with whom.”  As a result I switched to an iPhone. I also like the facial recognition feature on my phone as an alternative to using passwords at each website I log into.  https://www.apple.com/privacy/
Facebook lost my business for a number of reasons, privacy being a big one.  I now use email, iMessage, the phone, and blogs to keep in touch with people.  I do miss reading Bryan Caplan’s Facebook posts and wish Russ would have posted more on Facebook, but I can live without.
In addition to my iPhone, I have a Vonage phone, which isn’t that different from a good old landline, and has better audio quality than a cell phone.
Gave up LinkedIn.  Doing fine without it.  Kinda weird that people post their full resume on the web for the world to see?
I plan to give up Gmail and switch to iCloud email when I have time.  https://support.apple.com/en-us/HT202303
Flip phones.  I used one until a month ago.  The battery comes out.  Some have a full keypad for texting.  Inexpensive.  Drug dealers use them for privacy reasons.
It used to be that a “large” number of Google employees could read the Gmail of anyone they wanted.  Google changed that.  Not sure how many or if any Google employees have access now.

Michael
Jul 30 2019 at 5:26pm

Many have voiced the concerns I have about this episode. I will not repeat them, however I have 2 things to say.

1. I commend Russ on his restraint dealing with this guest.

2. When she talks of outlawing markets that trade in “human futures”, my small business with 5 employees  utilizes these “human futures markets” to make sure we can sell our products. We work in a creative endeavor and without the targeting data that we utilize our company might not be viable.

Also, I think the idea that we don't know what is done with our information is, as she said when challenged, "silly."

I much prefer this power in the hands of companies that have a clear profit motive, as opposed to the government. If you give this capability to the government, then it could quickly become like China, where ubiquitous surveillance is used to suppress certain thought and political action.

I would go so far to say it is the duty of Google, Facebook, and Amazon to fight against government attempts to gain control of these capabilities regardless of the ways the government tries to do so.

 

Bee
Jul 30 2019 at 5:31pm

I found the interview unsatisfying for two major reasons: one, Zuboff did not clearly articulate her key points; two, Russ's question, while simple, is too broad to support a simple response.

Having read Zuboff's book, I summarize her arguments along two dimensions: seizing the property rights of others, and using the information in ways that manipulate users.  On the first front, because consumers are not protected in these exchanges, they enter contracts that they do not fully understand.  More information is disclosed or provided than consumers realize. In banking and lending, laws are in place to protect consumers in asymmetric situations.  What looks free is not; the opportunity cost is not seen, given how it's disclosed.  On the second front, Zuboff argues that the information is used in ways that may harm a consumer because its use pushes behavior.  I see both points as valid.

 

My issue with Russ's questioning is simple: he doesn't engage deeply with the issues raised.  He says he's sympathetic but doesn't engage the two key questions that would warrant a moral discussion.  He has spent the past episodes talking about larger issues, but here he misses the larger issue. Unfortunate.

 

 

Dave Smith
Jul 30 2019 at 5:45pm

This is the first EconTalk that taught me nothing.  Google knows you.  It sells what it learns when you interact with it.  It is sneaky about it.

Greg
Jul 31 2019 at 1:12am

As others have pointed out, despite repeated efforts by Russ to get the guest to get to the point, she does not. One would expect better of a Harvard professor.

The concern is this: when Google knows I travelled to foreign country X, that can be used against me for political ends. Google has a blatant left-wing political bias.

Say you travelled to Taiwan, which proves you don't support mainland China. Or you travelled to Nepal. Or you visited a Japanese cemetery for persons considered war criminals by China…  And what if you travelled somewhere the extremists at Google don't approve of?  When will they use that against you? What if you bought a book that goes against Google's politics?

The dangers of using a consumers travel history or purchases against them, using social pressure, is obvious and dangerous.

Why can't a Harvard professor articulate this danger?  Social credit scores determined by political extremists, even if one agrees with them on some issues, should scare the pants off any rational person.

Everything you do and everything you buy will be logged forever and used against you years or decades later.

Yonathan Randolph
Jul 31 2019 at 4:50am

I think Dr. Zuboff could have made some good points if she had given specific examples of harms instead of inventing alarming but vague catchphrases such as "surveillance capitalism," "trade in human futures," "behavioral surplus," and "economies of action." And contrary to what she claimed, I don't think any of her complaints are unique to the 21st-century tech companies (as compared to 20th-century companies doing marketing, credit rating, transaction processing, health insurance, etc.). Therefore, I think the solutions to these problems come not from creating new theories for each of her vague terms, but from making analogies to existing solutions such as regulated utilities, PCI and HIPAA, and antitrust enforcement.

An example of a specific argument that could be discussed: One problem is that large companies give you take-it-or-leave-it Terms of Service that allow your click data to be used and shared for targeted advertising. Solution: just as credit card companies must offer a choice of preventing sharing, large companies must offer the option (possibly for a reasonable fee) of preventing your click data from being used for ads or combined with other data sources.

Wesley
Jul 31 2019 at 8:59am

A rambling episode that suffered from too much exposition.  Nothing can kill a show like too much exposition.

The guest, I think, got closest to her point towards the end.  As I understand it, the concern is that American (or Western democratic) society assumes a robust and dynamic democratic system that is able to translate individual intent into the governmental and bureaucratic functions that exercise this intent.  Leaving aside whether this assumption is valid, which Russ addressed (very briefly), the guest's point is that by allowing this "surveillance capitalism" to have increasing influence on how our society functions, we are sacrificing our democratic control for limited benefit to ourselves.  We are trading a government that provides more benefit to citizens than it accrues to itself for a private company that necessarily does the opposite. (Again, not sure if this is true.)

I’d have appreciated more discussion here.

Adam S.
Jul 31 2019 at 11:36am

When Shoshana started building towards the effect on democracy point, this (maybe?) got more interesting, but ultimately it didn’t go much of anywhere. Let’s say we have mostly autonomous cars, choosing the route we take, and businesses pay to use the route that passes them, rather than the quickest route. Or that routes are sorted based on the passenger’s behavioral history. I’m not sure that’s a bad thing, or that influencing driving patterns is any different conceptually than building or buying storefront in a shopping mall, but it’s at least worth the discussion. That said, what I would have liked to hear her talk about is why this is any different than, or worse than, government nudges? Those aren’t “democratic” either. Or other government decisions that herd people, like where to locate a highway. I’d like her to address why a city based on predictive behavior is any less democratic than one guided on, say, zoning commissions.

I also think Russ’s point about the cookies pop-up being annoying is important. If we think that, after being informed, most consumers are going to press accept, shouldn’t the default be the option that creates the least, not the most, transaction costs? Isn’t it better for the one out of ten thousand people who don’t want their location tracked to spend the time fiddling with their settings, rather than increase the required clicks of the other 9,999 people?
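That transaction-cost argument can be put in rough numbers. All figures here are hypothetical (one click per user to dismiss a consent banner, an assumed five clicks to opt out via settings), just to make the asymmetry concrete:

```python
# Back-of-envelope for the default-setting argument above (numbers hypothetical).
users = 10_000
privacy_minded = 1       # the one user in ten thousand who wants tracking off
banner_clicks = 1        # cost per user of dismissing a consent pop-up
settings_clicks = 5      # assumed cost of opting out through settings

# Default A: ask everyone up front -- every user pays the banner cost.
cost_ask_everyone = users * banner_clicks

# Default B: tracking on by default -- only the privacy-minded user pays.
cost_default_on = privacy_minded * settings_clicks

print(cost_ask_everyone, cost_default_on)  # 10000 5
```

Under these assumptions, society spends 10,000 clicks under the ask-everyone default versus 5 under the tracking-on default, which is the trade-off the comment is pointing at; the numbers only flip if the harm per tracked-but-unwilling user is very large.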

And if we move past informed consent and start talking about monetization of the data acquired by measuring people's personal information and behavior, of course that just increases costs. Among other reasons, Apple can charge more because they aren't selling your data. DuckDuckGo can't invest as much in the technology as Google. There are always trade-offs, and it's not clear why consumers can't make those decisions, or why companies can't monetize consumer preferences for increased privacy if that is truly what consumers want. The "issue" is that people genuinely don't care.

The ability to dramatically influence things like mood seems the most concerning, although that veers into the cultural and spiritual issues Russ has been focusing on lately, and I didn’t hear a convincing argument for why (or what type of) regulation is the proper fix for this.

Zhongzhong H
Jul 31 2019 at 11:52am

Ms. Zuboff's lack of compelling cases weakened her argument about why the future is concerning, so I want to provide an example here. Basically, she is worried that some information aggregators can become juggernauts. They become the only player in a certain domain distributing tailored information to customers or citizens, so people's choice sets will be limited. What's wrong with limiting people's choice sets? Think about the Great Firewall of China. This is a giant information aggregator that not only prevents most Chinese people from having access to outside ideas, but also censors different opinions within the country. If you take all the information this wall gives you as given, your life will be at ease. If you want a different opinion, you will have to spend a lot of time digging things out on your own, conditional on being educated enough in the first place. Apart from some additional information, you gain almost nothing by going through the hassle of climbing the wall. What you give up is your long-run ability to make choices again.  I think the economic power is indeed what makes the huge difference between a salesman and these juggernauts in influencing your life.  If you dislike a salesman, you can find another salesman with a completely different product. With these juggernauts, you only get things that belong to your quantile of the distribution.

John Bicknell
Jul 31 2019 at 3:48pm

I am sympathetic to the dissatisfied tone of most of the comments so far. Perhaps this general dissatisfaction is an interesting point to consider in and of itself? I think most listeners (myself included) were intrigued by the thesis and believed that there's potentially a "there there," but were disappointed when the guest was unable to surface solutions relevant to the everyman. Instead, governmental-level solutions were presented, which will take time or may never happen.

Lurking behind this conversation is the worrisome and growing information warfare threat. Most people think of information warfare as an activity among enemy combatants confined to traditional military campaigns in faraway theaters of operations. In truth, information warfare has been around for thousands of years. However, the information age has turned the entire planetary ecosystem (I'm including the domain of space as well) into a battle space where there are no rear areas. Quite literally, every handheld device is now an information-warfare attack-vector delivery vehicle. Information warfare turbo-charged with weaponized AI is, I believe, the unmentioned "there" which is most definitely "here" right now.

What is information warfare? Timothy Thomas, a thought leader on Russia’s and the former Soviet Union’s brand of information warfare infused with reflexive control, defines it as: “conveying to a partner or an opponent specially prepared information to incline him to voluntarily make the predetermined decision desired by the initiator of the action.” It is an inherently cognitive and highly analytical activity. And, as Dr Zuboff points out, it is insidious and goes undetected. It is also designed to control the party (person, society, military unit, corporation, critical infrastructure sector, etc) being attacked.

Corporations like Facebook, Google, and many others mentioned in the episode may be guilty of some unethical studies, R&D, and even questionable implementations which needed to be walked back or corrected. However, in my opinion, I don’t think they are the immediate worry. As mentioned in this discussion thread and in the episode, these organizations are providing technological services in response to the market. People are better off. It’s a glorious thing.

Rather, we should be concerned about state and non-state actors who are promulgating information warfare campaigns through the many social platforms owned by these organizations. These information warfare campaigns are carefully crafted messages (attack vectors) which are choreographed across numerous platforms in order to produce an expected outcome and maximize effect. “The mind has no firewall,” as Tim Thomas observes. These attacks go directly into the minds of global users–many of whom are unsuspecting.

Everyone is talking about Russia’s 2016 US election interference via targeted Facebook advertising. That’s a perfect example. It is also four years old. Advancements, as described in this podcast episode, may enable terrifying scenarios which have already been observed in places like India, where mobs of people have been mobilized based upon social media “marketing.” Campaigns don’t have to mobilize mobs to be effective, either. Stochastic or “lone wolf” initiations are possible as well, where social messaging prompts a single susceptible person to get up off their keyboard or device and go commit an atrocity with no advance notice. There is strong evidence that the anti-vaxx movement is at least partially stimulated by state-sponsored information warfare designed to weaken Western societies with diseases which haven’t been a significant challenge for decades. Information warfare may be used similarly to retard economic growth for entire critical infrastructure sectors by dissuading or tainting promising research efforts.

Anyway, I think that’s a more vivid elucidation of the problem. It is here. It is now. In response, cognitive security and analytical information warfare defense efforts are emerging to help educate and protect society and critical infrastructures.

Kevin
Jul 31 2019 at 3:48pm

I think in the guest’s attempt to formalize these issues for academic presentation she often developed jargon/buzzwords that tended to obfuscate, not illuminate. When she took a break from her prepared material she tended to use this jargon less and employed terms that were more usefully descriptive and engaging.

I am sympathetic to the concerns, but I also recognize that most people think the tradeoff is worth it. I quit Google search and most services (still trying to quit Gmail), but that’s my preference. Everyone is still effectively tracking me through other mechanisms.

I think many have made a distinction that they don’t mind what Google does in the market but worry about its political influence or meddling. How will we know when that occurs? When they start micro-targeting on behalf of (or against) specific politicians in their search results and through other mechanisms, will there be some red flag? Or will we be so slowly boiled that we will not be able to tell? In our modern world of crony capitalism the distinction between government and corporations is too blurry. Google and Facebook have both done pro-bono work for politicians and political organizations.

I have not read all the comments but Zhongzhong H I think has the gist of it – large dominating services will essentially create barriers to real information and choices.  Google and Facebook will think they are our saviors as they limit us to what is “acceptable knowledge”.

I also think some of these problems as discussed have been solved in the courts. Do I own my face? Not as stated: the courts have established that if I go outside into the public domain, I am already giving my face away to everyone. We will need new laws saying that no one is allowed to obtain a digital map of my face without my consent and use it for digital mining, because that really is a new problem.

Kevin
Jul 31 2019 at 3:53pm

Sorry for the second post- I forgot two items.

First, I thought the digital age would be an advertising dystopia, so I have not been surprised as she said we would all be. I found the “We deserve…” part absurd. My football coach had that right: “You don’t deserve…”

Second, the guest suggests that democracy is much more democratic than it is. Most of my profession’s interaction with the government is driven by bureaucrats who don’t listen to a word I say but can daily impact how I spend my time and resources, and how I might be paid. I cannot quit that and have no say over it – well, I have a pathetic, mostly meaningless say in it. In this way even the deceitful, lying Google doesn’t strike me as worse – it strikes me as more competent. If government gets as efficient as Google, my guess is we will find that very little of what we consider the democratic process will survive their attempts to drive things in their preferred direction.

Matt Hines
Aug 2 2019 at 9:46am

Contrast this interview with any Tyler Cowen has done promoting Big Business, more specifically when pushback on privacy and tech gets brought up. Tyler answers the questions head on using clear and simple language. I get his points very quickly. After this interview with Shoshana I still did not understand why I should be overly worried about Google and Facebook having my data. She answered Russ’s objections with very long responses, often using language that made it difficult for me to follow her points.

Andrew Beauchamp
Aug 2 2019 at 2:06pm

Does anyone else remember the “Subliminal Seduction” book from the 1970s?  Both the author of that book and Professor Zuboff are convinced that we are easily duped.

Putting the word “sex” in an ice cube or suggesting I go to McDonalds because it’s on my way home doesn’t rob me of free will.

Kevin Remillard
Aug 3 2019 at 6:32am

MyLife pays Google to list reputation scores about you from data they collect “making the Internet safer by allowing people to know the truth about the others they do business with, are friends with, or neighbors, or even date”.  FICO once charged for personal information they collected.

Earl Rodd
Aug 3 2019 at 6:13pm

As I was listening, I kept thinking that every harm talked about, e.g. private power in “smart cities”, was due to monopoly and was not inherent in surveillance capitalism at all. Monopoly seemed like the elephant in the room until the very end. And then Zuboff seemed to want only a regulatory solution. She said that having 10 little surveillance capitalists would be just as bad or worse! History does not bear this out. The reason competition does not arise is that, given the size and market share of Facebook et al., the barriers to entry are enormous.

When Zuboff proposed regulatory solutions, I had visions of virtually every web page now asking me “to agree” like the silly agree to cookies requirement.

I found myself constantly agreeing with Russ in pointing out how little harm there is in the current arrangement with regard to providing free service in exchange for commercial advantage.

However, I think there is another huge potential problem, again due to monopoly, and this is influence over what information is shown and found. The possible ways in which this monopoly power could be used to influence elections is rather scary. But the root is monopoly, not the underlying tools and technology – i.e. not “surveillance capitalism”.

I’m old enough to remember at least as bad of things being said about all sorts of advertising in the past. What was different was the lack of monopoly.

Gregg Tavares
Aug 5 2019 at 2:14am

Like others I got to around the hour point and then came here. Trying to decide if I should listen to the last 40 minutes.

The Pokemon Go example didn’t alarm me, maybe through a lack of imagination on my part. Companies have had promotions forever, and people go. Thousands of people show up to Red Bull’s various events, like their soapbox races or Flugtag. So was Red Bull manipulating the public in some evil way? I guess the point is supposed to be that for Pokemon Go’s events the sponsors were not announced? Was that really so nefarious? For example, there was a festival last weekend in my city. Hundreds of vendors rented booths and sold merchandise and/or food and drink. If you want to spin that, then Google are the festival organizers, the booths are their clients, and the rest of us got a free festival. Little did we know that really the organizers are evil, their clients the booth renters are evil, and we were all duped into attending a fun free festival, serving instead as sheeple for the booth owners. Is that a fair analogy?

I’m trying to think of some harms of surveillance capitalism. Like others I’m scared of it but find it hard to name actual harms.

There’s the dystopia shown in Black Mirror, Season 3, Episode 1, where your social score decides what you can and cannot participate in. At the start of the episode the main character wants to rent an upscale apartment, but the apartment complex tells her they have social-rating standards and only accept people rated 4 out of 5 or higher. At 3.8 she doesn’t qualify. China has apparently implemented that at a government level, whereas in the Black Mirror episode it’s presented as a natural consequence of the whole “like”, “+1”, “👍”, “👎” culture.

I can say for myself it somewhat creeps me out that YouTube or Netflix knows what I watch. Maybe I want to watch some sleazy 70s sexploitation movie because Quentin Tarantino said it was an inspiration for his work, but I don’t want the fact that I viewed it stored by Google or Amazon or Netflix, nor do I want it shared with their “business partners”.

Similarly, as a gamer there are certain risqué games I might like to at least give a 10-minute trial, but if they are only available on Steam, then Steam now keeps a record that I bought the game. In fact it’s considered a feature that friends and even strangers can look at the list of all games I’ve owned and how long I’ve played each one, and even if you opt out of sharing that data, Valve, the company that owns Steam, is still tracking all of it.

If the game is a VR game then Facebook which owns Oculus (a VR device company) records all activity on the device including the names of all games you play and possibly all videos you watch even if you didn’t purchase them through Facebook’s VR Oculus App store.

In the 80s a law was passed saying that video rental stores can’t share your video rental history. Today Amazon, Netflix, YouTube, Hulu, and Pornhub share all of this data.


Floccina
Aug 6 2019 at 2:52pm

I do not understand her position. I’ve been a computer programmer since before the internet, and maybe that is why. I have always understood that anything on a computer attached to the internet can possibly be revealed to the world. All should act with that in mind. I do not mind it; it’s like how you cannot do things out in the street and expect them to be secret. It’s hard in any case to keep things secret.

And BTW the real threat is not Amazon or Google but hackers.

Brian
Aug 7 2019 at 3:05am

Zuboff makes a weak case.

It was brilliant of Russ to ask her what she would have us do. Zuboff wants guys with guns, aka government bureaucrats (guided by academics such as herself, of course), to take and dictate: “awakening the sleeping giant of antitrust law. In that pointed at the tech sector”, socialize profits (“fund our public education system”), fix this “market failure”, etc.

depressingly familiar

Jose Almeida
Aug 8 2019 at 3:30pm

I also have to agree with others that Ms. Zuboff’s inability to focus directly on Russ’s questions and answer them directly made the entire podcast difficult to follow. It’s a shame, because there are real concerns regarding what I think Ms. Zuboff has correctly coined “Surveillance Capitalism”. Here’s something to consider. G Suite for Business basically gives Google eyes and ears on the goings-on of every business subscriber. Imagine the natural-language processor in Google Assistant reaches a point where it can pretty well pick up purchasing requests from managers to staff in emails, texts, and phone calls, and imagine Google then giving direct recommendations for purchase FROM A GOOGLE STORE. Google’s span is such that it could automatically look at the pricing of competitors, then source and price below all competitors before making the recommendation. The effect is that Google could put itself in a position where it drowns out its competition in almost race-condition fashion. So there are questions like: how do small businesses compete in an environment like that? Do they have to become Google affiliates to have their products recommended, with Google brokering 80% of sales through these technology linkages? It would be one thing if Google did not sell direct to the customer but instead provided companies a way of supplying inventory, pricing, and availability information so that Google’s recommendations give a list meeting basic customer criteria – it’s quite another for Google to have the inside track on company demands BEFORE order requests are made public, determine the lowest price, and sell directly.
The worry for me isn’t “prediction,” as Ms. Zuboff kept mentioning (none of the stats collected are truly predictive; they just yield rules of thumb – how many of these ads to this demographic makes the campaign profitable – and even that is not always reproducible). It’s Google becoming the path of least resistance for all business transactions before a proper open-market call to tender, and shutting out all competitors in so many areas.

Marilyne Tolle
Aug 20 2019 at 2:47pm

I wonder if the concern with Big Tech should be the risk that the emergent, bottom-up surveillance it’s instigated is co-opted by governments and subsumed into top-down surveillance, in what would be a soft version of China’s state surveillance.

At the moment, the bottom-up surveillance manifests as benign (though annoying) corporate nudging, where people’s decisions are being “optimised” for them.

In her EconTalk on AI, Amy Webb talked about how when she has her car in reverse, the radio gets automatically and systematically turned down, no matter what she’s listening to and what type of driver she is. There’s no Federal law mandating that.

This “optimisation” of people’s decisions could be (will be?) much more intrusive. She goes on to conjecture that given Amazon’s foray into healthcare, it could well be that if you have an Alexa-powered microwave, it will refuse to pop your popcorn because your FitBit shows your calorie intake has been too high recently.

That’s the personalisation angle of corporate nudging.

But there’s also an element of collectivist utilitarianism (maximisation of aggregate utility) in the design of Big Tech products. For instance, Google Maps doesn’t get you to your destination the fastest way possible; it trades off getting you there fast enough against overall traffic fluidity (the collective utility of the other drivers). This same principle lies at the heart of Google’s Smart Cities.

I would think it’s very tempting for governments to team up with Big Tech to harness these tools of social optimisation, and slowly but surely lay the ground for top-down surveillance, however soft and well-meaning it seems on the surface.

Comments are closed.


DELVE DEEPER

This week's guest:

This week's focus:

Additional ideas and people mentioned in this podcast episode:

A few more readings and background resources:

A few more EconTalk podcast episodes:


AUDIO HIGHLIGHTS
Time | Podcast Episode Highlights
0:33

Intro. [Recording date: June 14, 2019.]

Russ Roberts: My guest is... Shoshana Zuboff.... Her latest book and the subject of today's episode is The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.... What is surveillance capitalism?

Shoshana Zuboff: Well, let's begin with a definition of surveillance capitalism. And, for that I'm going to provide a little bit of description for our listeners. One of the things I write about is ways in which surveillance capitalism diverges from the history of capitalism and some of its well-known elements. But, there is a very important way in which surveillance capitalism emulates this history, and that's where I'd like to begin. So, those who study capitalism have long described a process by which capitalism evolves, claiming things that live outside the market dynamic, and bringing them into the market dynamic, creating commodities that can be sold and purchased. So, famously, industrial capitalism, broadly speaking, claims nature for the market dynamic, reborn as real estate, as land, that can be sold and purchased. Well, surveillance capitalism emulates this pattern, but with something that is an unexpected and even dark twist. And that is: Surveillance capitalism claims private human experience for the market dynamic. And that private human experience is reinterpreted as a free source of raw material for translation into behavioral data. So, this is how it works. These behavioral data, some of those flows, are fed back to improve services, products. But there are other data streams that are hived off. And these are selected for their rich, predictive signals. And these parallel data streams are what I call 'behavioral surplus.' Because these are behavioral data that are more than what is required to improve services and products. These data streams, behavioral surplus with their rich behavioral signals, are then flowing into supply chains--think of pipes, think of conveyor belts--that converge these data streams with the new means of production. The production facilities, in this case, are computational. They are what we refer to as 'machine intelligence,' 'artificial intelligence.'
So, now we have behavioral surplus with its rich predictive signals converging with advanced computational capabilities, production. And, of course, out of this comes: Products. What are these products? These are computational products. But the simplest way to describe them is that they are computational products that predict human behavior. I call them prediction products.

Russ Roberts: Give us an example of that.

Shoshana Zuboff: Well, the first and most famous, widely known, prediction product is what was called the click-through rate. The click-through rate, we think of it as confined to online targeted advertising. But, in fact, the click-through rate, when you just zoom out a little bit, it's easy to see that it is a computational fragment that is a prediction of human behavior, where people are likely to click in relation to certain kinds of ad content. So, the click-through rate was the first widely-successful prediction product. In a similar way, I draw an analogy to the invention of mass production a century ago. Where, the whole mass production logic of high volume and low unit cost--there were elements of this that, you know, existed in a variety of different organizational settings even in the late 19th century. But, the whole, the whole comprehensive puzzle finally came together at the Ford Motor Company, early in the 20th century. And, of course, the first great, most famous, most successful product in mass production was the Model T Ford. And of course, looking back on it, it's obvious that the mass production--mass production as an economic logic--was not limited to the Ford Motor Company or the fabrication of the Model T Ford. That was an economic logic that had legs. It could be applied to anything and over time it was applied to just about everything, from hospitals and schools to factory production. Okay. So, we have a similar situation now, where the click-through rate was the Model T of this era, if you will--the first wildly successful prediction product. But, surveillance capitalism is no more constrained to the online targeted advertising market than mass production was constrained to the Model T. So, that's an example of a prediction product. And, who buys these products? Well, these products are not fed back to the populations from which the raw material for their fabrication was initially derived.
These products are sold to business customers who have an interest in what we will do, not only now but soon and later. So these are business customers who have an interest in our future behavior. And, they constitute a new kind of marketplace that trades exclusively in human futures: prediction products. And again, the first well-established markets in this vein were the online targeted advertising markets, which as you know have grown substantially, have produced a great deal of revenue for, especially, the two pioneering surveillance capitalists who pretty much own those markets. Still.

Russ Roberts: Google and Facebook.

8:56

Russ Roberts: I want to--I agree with some of your concerns in this book. It's a book of concern, I would argue. It's a little bit frightening. And I think there are things to be frightened about. But, on the surface, I want to at least start by pushing back. I'll agree with you some, later, on this. But, what's wrong with that? So, for example, because of the data that I provide, often unintentionally, to Google or Facebook or Twitter or whoever, they know things about me. They know what I search for; they know what I buy, perhaps. If it's Amazon they know a lot about what I buy. And so they are able to tailor what I see based on my behavior. They can sell the right to get access to my clicking of [?] folks. And when that comes across my screen I can choose to buy it or not. It can be annoying. I bought a watch this year, so I did a lot of searches for watches when I was still using Google. Now I use DuckDuckGo, but in those days I used Google, and so I started getting ads for watches, most of which I wasn't interested in. And occasionally, maybe I looked at one of them--I can't remember--but after I bought a watch, I kept getting the ads because they didn't know I'd bought the watch--either because they weren't paying attention; the algorithm doesn't know; or, I think I bought it actually in a face-to-face store. I can't remember actually, but if I did, obviously that would make it harder for them to know about it. And that was annoying. But it wasn't scary. In fact, I could have been happy about it. Often when I travel Google knows where I'm going, because my Gmail has the receipt for the airfare, and it will suggest places to go to when I get there. I find that a little bit creepy. And, I don't--and kind of cool, that they anticipate my experiences. But, I can choose to use it or not. So, why does it--what is frightening about this? Why is it a source of concern?

Shoshana Zuboff: Yeah. Well, I like the word 'concern.' You know, 'frightening' is perhaps not the word I would use, in the sense of, you know--frightening is when there is a knock on your door at 3 o'clock in the morning and the soldiers in jackboots are there to drag you away to the Gulag or the concentration camp. That's frightening. This is not that, Russ. This is a different form of power. And, as you know, I go to some lengths to distinguish this power from, you know, the kind of power that our world came face to face with in the 20th century when we confronted the totalitarian nightmares, both in Germany and in the Soviet Union. You know, totalitarianism is a form of power that rules by terror. It rules by--

Russ Roberts: But you paint a very dark picture of this turn in our economic activity.

Shoshana Zuboff: No, I do indeed, but my point here is that this is a little bit more--it's--grasping what is at stake here and what is of concern, you know, requires a different sensibility from pure fright.

Russ Roberts: Fair enough.

Shoshana Zuboff: Because, in the confrontation with totalitarian power, fright was essentially the means of social control. And that's not what's going on in this new world. The means of social control here has to do with dependency and identification and the foreclosure of alternatives. There are other channels here for social control; and the erosion of human agency. So, let's talk about that a little bit. What are the real causes for concern? So, I've just described to you a sequence and an economic logic from the unilateral claiming a private human experience ultimately to be sold, quite profitably, in markets that trade in human futures. And, you point out that, you know, these--the first experience that all of us had of this new logic takes place pretty much in the online environment with targeted ads. And the funny thing about your description is that, if this--we're speaking right now in the year 2019. If we were having this conversation in the year 2005, even--2004, 2005--I would bet you that our conversation would be different. Because, back then, you know, folks experiencing what you just described felt really, really uncomfortable with it. And disturbed. You will recall, because I know you study this, that in 2004 when Google came out with Gmail, which was a massive step forward in email functionality, you know, and storage--because stuff was going to be stored on the cloud and not on your computer, and so forth--ability to search within your emails; all kinds of things that were breakthroughs for the whole email domain. And then, suddenly, people were seeing ads, like the ones you are talking about, Russ. But they were ads that were triggered by something that you wrote in an email.

Russ Roberts: Yup.

Shoshana Zuboff: And very quickly, the ball of yarn unraveled, and it became clear that Google--and we haven't gotten into our economic comparatives yet; and we'll talk about that in a minute--but, Google was now using email content as a new source of raw material, as I've described, from which it would be able to scrape predictive signals. And these now email messages--all the email content that we produce--was now feeding these supply chains of behavioral surplus. That's essentially what was happening. And of course all of this occurs--and this is where we compare it to the jackboots--all of this occurs remotely. All of this occurs through the medium of digital architectures. It's not, you know, somebody coming and threatening you with a gun. So, it's all happening in this remote, robotized medium. Nevertheless, the yarn unravels, and it quickly becomes clear that, 'These guys are scraping our emails for more behavioral surplus to target ads.' And people all over the world, as you will recall, Russ, were mobilized in outrage. Such was the outrage that an important California legislator immediately drew up legislation to outlaw this unilateral taking of private human experience. What I just wrote to my mom in an email. Okay. Now, what happened? The Google team mobilized. And it created a war room. And it was there in the heat of that Gmail contest they first developed the strategies and the sequence of tactical operations that they would use over and over again across the last 15 years, Russ. And that operation goes like this. The first thing is, what I call, Step One: Incursion. In other words, they take what they want. They take your email; when it came to Street View [trademark via Google Maps], they take your house. They can get WiFi data out of your house, they take that, too. They just take what they want until somebody tries to stop them. When somebody tries to stop them, they go into Phase Two, which is called: Habituation. 
Habituation is, they try to draw out the process of contest for as long as possible. During that time, they are trying to--they are using backstage operations to try and stop any real impediment. But, what they are doing publicly is they are explaining, they are apologizing; they are saying, 'Well, we'll do this. We'll do that.' And they, you know, they may be fielding court cases. They may be fielding hundreds of court cases. They let those court cases, you know, drag on for as long as possible. And suddenly the weeks and months and years start drifting by. If, at the end of that time, there is still any kind of protest, they will make some superficial adaptations. In many cases, by the end of this habituation period, people can no longer remember why they were so upset in the first place. Because, this is what happens. Habituation sets in. You know, things that were, that violated norms, that violated boundaries, that we greeted as outrageous, you know, they become fixed things in our life. It's like we're seeing that so much in the political domain now. You know, boundaries that could never be crossed. Norms that could never be violated. Now, when they are violated routinely, we grow numb. This is called psychic numbing. We grow numb to these outrages. And that is what allows habituation and normalization to set in.

20:40

Russ Roberts: Well, let me challenge that a little bit, or give a different, maybe, framework for it. Okay? And let you tell me why this is not representative. So, I am alarmed by some of this, particularly in the political realm; and I think, you know, the ability of Facebook, Twitter, etc. to control what I see and read is very concerning. And robots and other things that stir up anger and tribalism, I think, are not good for our country. And how we deal with that is a very tough question; and we are going to come to that, I hope, toward the end. But let me give you a different framework for this. So, my washing machine breaks. Now, a washing machine is an expensive thing to repair. I may need a new one. It may not be worth repairing. But let's say a guy--there's a new service available. And, what the guy says is, 'Here's the deal: I'm going to come to your house. I'm going to either fix your washing machine or give you a new one. No charge.' Wow! Fantastic. 'The only thing, though, is that when I'm in your house, I'm going to take some pictures of the inside of your house, and I'm going to learn about you. And I'm going to sell some of that information to other people to help send you targeted ads and other things.' So, they come into my house; they find other things that are broken. They may see that I love wine or that I like to read or that I'm a photographer. And so I start getting, in my mail, I start getting ads for cameras and ads for wine. I can get over the Internet and special book offers. And so on. And then they say, 'Oh, by the way, I'll also,' let's say, 'I'm going to give you your choice of free wine every week. But if you are going to, I'm going to have to--to get that service, you are going to have to let me see who you mail stuff to.' Okay? And they start doing that. And eventually, yah, I love inexpensive wine, or I love wine without charge; I love having my washing machine repaired with no charge or getting a new one. 
And these incursions into my privacy of taking a few pictures--you know--I don't leave everything out, obviously, when I know the washing machine guy is coming. But, they do take advantage of the fact that they wander through my house. And there is something deeply creepy about it, right? And I don't--and, by the way--I should have made it even clearer: They don't really tell me that's what they're going to do. They just do it. So, I just think--

Shoshana Zuboff: Oh, okay. That's a big difference.

Russ Roberts: Right. Of course. I get--so it's like I'm getting a free washing machine. And then I notice I'm getting a lot of ads for cameras. And I think, 'I guess that washing machine guy used the chance to be in my house to notice I have a lot of photographs up. He knows I like photography.' Etc., etc. And so, I have a choice, right now, to use these services. Now, I agree it's really hard to give them up. I am habituated to them. I love them. There are many of them. Not all of them. Many of them I love, though. I love Waze [owned by Google--Econlib Ed.]. It's just, it's really improved the quality of my life. I really enjoy it. There are many other things I like about Gmail. It is a fantastic service. And the deal--and here's part of the problem--was that it was never an explicit deal--the implicit deal was, 'Well, by the way, while you are using these "free services," they aren't really free, for lots of reasons. I'm going to use your data. I'm going to charge advertisers for the opportunity to get access to you.' Which means the price could be higher than it otherwise would be. But, I have a choice. And, if you ask me how I like it: Well, I don't like it that they take pictures of my house when I'm in here. It is kind of creepy. But I do like the services a lot. And here's the key point: Nothing really horrible has happened, so I put up with it. And I think it's basically a good deal for me. And I would argue that, for young Americans in particular listening to this program--of which there are many--they think a lot of this concern about privacy and data--some of them don't even get it. But, for me, it bothers me. I find it alarming, particularly in the political realm. Because it could change how people vote, and what they do. But, is it really as frightening--again, as concerning? Isn't it mainly successful not because they have power but because I've given them that power? Because I like the deal?

Shoshana Zuboff: Heh, heh, heh. All right. Well, let's talk about two things. If--because I need to lay out some terms of reference so I can really answer your question. The first thing I want to talk about is what puts the surveillance in surveillance capitalism. Why is it called 'surveillance capitalism'? And the second thing is: Let's talk about the economic imperatives here. Because, unless our listeners, and especially our young people who are listening--and if you are listening to this--Listen up close. Because this is about your future. And, uh, you are the person that I'm most concerned about in this conversation. So, I want to talk about the economic imperatives. And I think that will give us a framework, Russ, for then coming back to your example and looking at it through some other lenses. Is that okay with you?

Russ Roberts: Sure. Go ahead.

Shoshana Zuboff: Okay. So, the first thing I want you to know is: Why is it called 'surveillance capitalism'? So, this takes us back to the early invention of surveillance capitalism. At Google. I don't need to go into all the details of this origin story, Russ, if you don't need me to. Suffice to say that, in the heat of financial emergency, during the dot com meltdown in the early 21st century, the years 2000, 2001, Google, like most other startups in Silicon Valley, faced tremendous investor pressure. Startups were going bust left and right. Really smart people were losing their businesses. They could not monetize fast enough for the demands of the impatient capital represented by the venture capital firms that largely led investment in the Valley.

27:25

Russ Roberts: They weren't making any money. They [Google?--Econlib Ed.] had a search engine that was fabulous. They kept gaining market share. But they hadn't monetized anything. And it wasn't obvious that they could. And I think--carry on. Your story is exactly right.

Shoshana Zuboff: That's true. Don't forget that Amazon wasn't turning a profit for quite a while. But, you know--ultimately it became an extremely successful business.

Russ Roberts: [?] which helps. At this point I don't think they had much of any revenue.

Shoshana Zuboff: Well, they didn't. But they had very substantial plans on the table. There were different models being considered. There were, you know--there was an intense and creative discussion about how these problems were going to be tackled. So, it's not like they were just sitting back and twiddling their thumbs--you know, happy to be doing search and nothing else. There were a range of alternatives on the table. But they didn't have time to explore those alternatives. Because when the dot com bubble burst, the pressure rose. And, even though it was widely understood that Google had the best search engine and many people considered it to have, you know, the smartest founders, even under those conditions, its very sophisticated venture backers were threatening to pull out. So, here was a situation where the founders had publicly rejected advertising. They regarded advertising as something that would disfigure search, disfigure the Internet in general. But now, in the heat of financial emergency, they essentially declared a state of exception. And, again, without going into a ton of detail, people in the company knew that there were collateral streams of behavioral data that were being produced when people searched and browsed. Data that was not being used for the improvement of the search engine or the creation of new services like spell-checking or translation and so forth. These data were sitting around, unorganized, on servers. These data were referred to as digital exhaust or data exhaust. A few people had been experimenting with the data and began to understand that it had a lot of predictive value in it. Anyway. Long story short. The state of exception induced the founders to now turn to these abandoned data logs. To mine them for their predictive signals. To compute them with their advanced computational capabilities, which even back then they referred to as their 'AI'. 
And to come up with the first major prediction product, which was the click-through rate. And, that became the new model for the ad market. Until then the model was pretty continuous with the way advertising had been in the past, in the sense that advertisers were still picking where their ads went. And so, in that choice there was the continuity of still trying to align where your ad appears with the brand values of the company whose ad it is. Okay. So, now all that changes; and Google says, 'We've got a black box; we're not going to let you see inside the black box. But if you buy the product that comes out of the black box you're going to make more money, and so are we.' That was extremely successful. So successful, by the way, that between 2000 and 2004--with their IPO [Initial Public Offering] documents going public--for the first time we got to learn exactly what the impact of this new logic was. And the impact was a revenue increase of 3,590%, just during those years 2000-2004. All right. They quickly understood that these leftover data--this behavioral surplus--were available in all kinds of places hidden in the online environment. And, this was a very fertile time for patents coming out of Google. And, when you read some of those patents--and I discuss some of them in the book--it's very clear, you know, what the data scientists are saying, what they are celebrating. First of all, they are celebrating the fact that they can now hunt and capture behavioral surplus all over the Web. And they can use that surplus to learn things about "users" that folks did not intend to disclose. They can also use those surplus data to aggregate and create inferences that give them insights into people that people did not intend to disclose. So, that's Number 1. Number 2 was, they celebrated the fact that they could do this with methods that were undetectable. Undetectable. Because they understood that if users knew what they were doing, there would be resistance. 
And resistance means friction. And friction slows this whole thing down. So, from the start, there was an intentional, explicit strategy of making sure that these methods were backstage methods: were indecipherable and undetectable. In other words, designed to keep people in ignorance of what was really going on backstage. That's why the Google Gmail example is so interesting, because the backstage operation broke through quite quickly there. Whereas before that, they had managed to keep this stuff very, very hidden. Okay. So, right from the start, we are beginning here with the social relations of the one-way mirror: They can see us; we can't see them. They can see us; we can't see them seeing us. And so forth. All right. I'm putting this out there now because it's going to be very important to come back to this in a minute. So, I want to establish this. It's something that was baked into the cake--

Russ Roberts: Yeah, I agree--

Shoshana Zuboff: right from the beginning. Okay. So, now, we talked about the fact that these prediction products, beginning with the click-through rate, are sold into markets that trade in human futures. I'd like to talk about this for a moment. Because, you know, one way to look at what I've done in this book, especially in Parts I and II, is to reverse-engineer the competitive dynamics of these markets. If you've got businesses that are competing in selling the human future, what is the nature of that competition, and what kinds of imperatives emerge from those competitive dynamics? So, you've got to think about it this way: What are these businesses selling? They are selling certainty. They are not selling certainty about, you know, oil futures or pork bellies or whatever. They are selling certainty about future human behavior. They are trying to get as close as possible to being able to guarantee outcomes to their business customers.
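[The click-through-rate "prediction product" described above can be sketched as a simple expected-value ranking: the platform, not the advertiser, decides where an ad goes, by multiplying each advertiser's bid by a predicted probability of a click. This is a minimal illustration only, not Google's actual auction logic; the ad names, bids, and predicted rates are invented.--Econlib Ed.]

```python
# Minimal sketch: rank ads by expected revenue per impression,
# i.e., bid * predicted click-through rate (pCTR). All names and
# numbers are invented; real ad auctions are far more elaborate.

def rank_ads(ads):
    """Sort ads by bid * pCTR, highest expected revenue first."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["p_click"], reverse=True)

ads = [
    {"name": "camera_ad", "bid": 0.50, "p_click": 0.04},  # expected 0.020
    {"name": "wine_ad",   "bid": 1.20, "p_click": 0.01},  # expected 0.012
    {"name": "book_ad",   "bid": 0.30, "p_click": 0.08},  # expected 0.024
]

ranked = rank_ads(ads)
print([ad["name"] for ad in ranked])  # ['book_ad', 'camera_ad', 'wine_ad']
```

[Note that the highest bidder does not win: the prediction of behavior, made inside the platform's black box, determines the outcome.--Econlib Ed.]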

35:49

Russ Roberts: Isn't that--but my question is: On the surface that's good for me. So, there's--again, I'm sympathetic to your view. But, you could argue: This is no different than what's happened in advertising in the past. So, somebody had a genius idea to advertise beer, not, say, during daytime soap operas, but during sports events. That's because someone had the insight that people who watch sporting events are more likely to be beer drinkers than people who watch daytime soap operas. That's okay! There's nothing bad about that. In particular, there's something good about it. I'm watching the game and I see something I actually care about. And similarly, this idea--I think the confidence in Silicon Valley of their ability to predict human behavior is way overstated, but let's grant them the assumption that it's true. The fact that they know what I want--that should be, on the surface, a feature and not a bug. The fact that the advertisers who send me stuff are selling things I want to buy--it's better than sending me ads I'm not interested in. So, I don't see why there's anything alarming about that. Now, I think there are alarming things. In particular, I think your insight that the product is no longer what I'm consuming--I'm consuming the search engine or the Gmail. But the profit is elsewhere. The real consumer, for profit reasons, is the advertiser. And that creates a gap, obviously. And the normal feedback loops of markets aren't there. So, that's disturbing. And yet, no one's actually disturbed except professors right now, in the commercial part of it, as far as I can tell. They are concerned about the political part, big time. But the fact that my searches and my online activity are profitable for Google because they can sell someone else access to me doesn't seem to harm me. And that's what I'm pushing you on. Where's the harm?

Shoshana Zuboff: Well, you're pushing me, but you are not letting me finish my points. So, I want to lay out economic imperatives, because that gives me the framework for responding to this--your complaint that this is no big deal. And just as an aside, I would say: The very thought of comparing this to advertising earlier in the 20th century or advertising in the 19th century, you know--that's just silly. Because, yeah, of course. There is no argument here, Russ, that would say that, you know, persuasion is something new to the human experience. Persuasion is as old as humanity. There's nothing new about people wanting to persuade one another to do things they'd like them to do. Of course. Right? The situation here is utterly different, because the situation here revolves around companies that have amassed enormous amounts of capital. And this capital funds a digital architecture that has become increasingly ubiquitous and increasingly global. And this digital architecture is the basis for these huge asymmetries of data, information, knowledge, and ultimately the power that accrues to that knowledge. These are not just hidden persuaders. These are hidden persuaders with billions--with deep capital resources--who are funding and controlling a digital architecture and the data that flows through it. To an extent that it is not, now, an exaggeration to say that the Internet is owned and operated by surveillance capital. So, you know, I just don't think that the analogies to historical advertising work here at all. We need to be able to recognize discontinuity when it's real. And here, historically, materially, there is a profound discontinuity. And that's where the concern begins. And, by the way: this is not an argument about digital technology. My argument is not about digital technology. And I know you read my book and so I know you appreciate that. Because, my argument is just the opposite. 
We entered the digital era looking for the resources, the support, the information, the individualization in the economic world in our roles, certainly, as customers. And to a certain extent, you know, just in our everyday lives--just trying to live our everyday lives effectively. Which has become extremely difficult over the past decades. As employment has become more competitive, and wages have stagnated or diminished. And families are under great pressure. Students are under great pressure. So, you know, we went to the Internet looking for these resources that would be the counterpoint--in a sense the antidote--to the institutional pressures that we feel throughout our lives. And, we deserve that. We deserve digital technology. We deserve the individualization, the personalization. We deserve the data that allows us to live more effectively. We deserve the connection. We deserve the voice. All of that. And indeed, at a societal level, we deserve the "big data" that allows us to understand the patterns of disease and more quickly and effectively tackle chronic problems, like cancer and diabetes. Chronic social problems. Climate catastrophe. You know, the promise of the digital is something that we all deserve. My argument is that we deserve it without the strings that are attached, that begin with privacy, but go far beyond privacy. So, that's where the economic imperatives come in. And I'm going to try to do this quite quickly. Simply to say, where I was before: we look at the idea of markets in human futures, and the competitive dynamics of those markets actually produce a set of predictable and now deeply institutionalized economic imperatives. Okay. The first one is pretty obvious. We know that if we are going to feed machine intelligence with data in order to come up with predictions, we need a lot of data. 
So, the first imperative has to do with extracting behavioral surplus at scale: We need economies of scale. You may remember that just last year, there was a Facebook memo that was leaked. And in that Facebook memo we got a little bit of a window into the production process--the new means of production at Facebook. And that's their AI [Artificial Intelligence] Hub. And, among other things, in that memo--by the time I saw this document, my book was complete. I think at the time I was finishing revisions and just in the concluding chapter. So, all of the things I've said to you were already, you know, conceptualized, written about, and so forth. But, here in this document, they describe: What do we do in the AI Hub? And, what they say is: We produce--"predictions of user behavior." That's what they produce. And it talks about the trillions of data points that are ingested by the AI on a daily basis. And the AI's capacity to now produce 6 million predictions of behavior, per second. Six million predictions per second. So, when we are talking about economies of scale, this is a very serious business, to be able to, you know, have these flows on conveyor belts of trillions of data points and to be able to produce 6 million predictions per second. This is serious economies of scale. Okay. In the second phase of competition, the insight was: Scale was essential but it's not enough. We also need scope. We need varieties of data. In order to get varieties of data, we have to leave behind this online environment that we've been talking about, Russ. We need to get people out into the world. And we need to follow them into the world. We need to know where they are and where they are going. Who they are going with. What are they buying? What are they eating? What are they doing? Where are they driving? What are they doing in their car? 
We need to know as much as we can about the different environments in which they are operating--their homes, their automobiles, the city streets. So forth. So, this is now scope. Also, the more we can know about how they feel, the better we can predict their future behavior. So, 'We want their faces, because that gives us the ability to analyze the thousand muscles in the face that actually produce very accurate, affective predictions.' And we want to be able to see people--not just who is on the street, but how are they--what's the angle of their shoulders? Their posture? Their gait? All of these things become crucial sources of behavioral surplus. So, we give people little computers to carry in their pockets to take with them out into the world. The apps on those computers are constantly streaming--and again, hidden, behind the one-way mirror. You download a diabetes app, it grabs your contacts. Some of them grab your camera; some of them grab your microphone; some of them grab your location. All of them streaming those data to third parties, and most of those third parties are owned by Google and Facebook. But, on to third parties, and more third parties, and so forth. All right. So, now we have your little pocket computer. And that is combined with sensors and cameras that are increasingly saturating public spaces, saturating our homes, our cars. These are the economies of scope. But in a third phase of competitive dynamics, there's a new insight. And that insight is: The most predictive behavioral data, the most predictive signals, are achieved when we can actually intervene in the state of play. Actually begin to actively, now, tune and herd human behavior in the direction of those guaranteed outcomes that we seek. So, this is new. Because, anybody in the business world has heard of economies of scale. Heard of economies of scope. But this is something new. 
And it mirrors what the data scientists describe in the world of the internet of things as the shift from monitoring to actuation. So the idea becomes that the technological architecture isn't just producing data flows about what's going on, but it's actually enabling feedback loops. So that we can affect what's going on: We can not only monitor, but we can affect it. So that's a shift from monitoring to actuation. And that's what we're seeing here in these competitive dynamics. So, this is what I call economies of action.
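[The scale figures cited above--trillions of data points ingested daily, 6 million predictions per second--can be put side by side with a quick back-of-envelope conversion. The figure of exactly 1 trillion per day is a round illustrative assumption; the transcript says only "trillions."--Econlib Ed.]

```python
# Back-of-envelope arithmetic for the scale figures cited above.
# "Trillions of data points per day" is taken as a round 1 trillion
# for illustration; 6 million predictions/second is the cited figure.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

points_per_day = 1_000_000_000_000                 # illustrative: 1 trillion/day
points_per_second = points_per_day / SECONDS_PER_DAY
print(f"~{points_per_second:,.0f} data points ingested per second")

predictions_per_second = 6_000_000                 # cited in the conversation
predictions_per_day = predictions_per_second * SECONDS_PER_DAY
print(f"{predictions_per_day:,} predictions per day")  # 518,400,000,000
```

[On these assumptions, the two figures are of comparable magnitude: roughly 11.6 million points ingested per second against 6 million predictions emitted per second.--Econlib Ed.]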

49:39

Russ Roberts: But what's wrong with all this? I--those are all great descriptions of how the online world has become profit streams for different aspects of our lives. And I'm 64 years old. I find it somewhat either unnerving or disturbing. I can imagine consequences of this that are very negative. But, right now, they aren't. Most people, I would argue, think this is great. And if I don't like it, I can turn off 'Location' on my phone--which I often do, because I'm 64 and I don't like it. I don't have Alexa in my house--for that reason: I don't like that it's listening. I'm not sure why I should care, but I don't like it so I don't do it. If I wanted to, I could not use a lot of these things and get by without them. Now, I would argue, I think that most of the people, again, listening, love these things. Not only do they not find them concerning: They like them. So, I'd like you to try to argue why they shouldn't. And, in particular, if you don't like them and if you think they are bad, you're going to have to come up with another way to get those things you think we deserve. Because somebody's got to pay for them. They are not free. Now, you are arguing essentially we've made a deal with the devil. And you might be right. And I'm worried about it. But, it's not obvious, I think, to most listeners that that's the case. So, try to convince them.

Shoshana Zuboff: Well, as I've said--so, I'm in the middle of describing economies of action and how these are achieved. Because, economies of action are something new under the sun. This is, again, this is not the persuasion of old. This is persuasion, now, that has to be effected, actuated, through the medium of a digital architecture. So, this is a new kind of problem, a new kind of challenge. And this required some experimental work. And, I have to hypothesize that most of the experimental work has happened consistently hidden from the public. But, some of the experimental work comes through to public view. So, let's talk about that for a moment. One prominent domain of this experimental work was at Facebook; and we got wind of this in 2012 when Facebook published what it called its 'massive-scale contagion experiment.' So 'massive scale' means at the scale of populations--contagion experiment. And what they did in 2012 was to see if they could use subliminal cues on Facebook pages to affect real-world behavior--in this case, getting more folks to go vote in the mid-term elections. When the news broke about that, again, there was a wave of outrage around the world. And Facebook went into its usual process--

Russ Roberts: 'Mea culpa'--

Shoshana Zuboff: making apologies, and so on and so forth. As they were doing that, the ink was drying on a second massive-scale contagion experiment. This one was designed--again, using subliminal cues on their pages--to see if they could make people feel happier or sadder. All right. In both cases, these studies were published in very prestigious scholarly journals; and when they were published, the researchers celebrated two facts. And this is where I want our listeners to remember the patents from the early 21st century that I described before, and the one-way mirror. So, now we're in 2012, 2013, and what the researchers celebrated in both of these scholarly articles were two facts. Number 1: We now know that we can use subliminal cues in the online environment to effect, to actuate, real-world behavior and feeling. Emotion. That was Number 1. Number 2: We now know that we can do that in ways that completely bypass user awareness. Undetectable. Methods that are undetectable. Okay. So, these are experiments in economies of action that are happening, hiding in plain sight, at Facebook. Now, a few years later, we come to understand that Google--the other pioneer of surveillance capitalism--because, by the way, Google invented surveillance capitalism, but the first company that it migrated to was Facebook. It quickly became the default economic model for the tech sector, but by now is spreading across the normal economy. So, we're seeing this economic logic in insurance and retail and health and education, finance--across many sectors--coming full circle now back to production. We can talk about that later if we need to. All right. So, now we see--in Google--that there's also, then, several years of experimenting with how to achieve economies of action. Google chose to bring its experimentation to the world through an augmented reality game called Pokemon Go. Pokemon Go was incubated in Google for many years. 
It was developed by a man named John Hanke, who before that had been the boss of Street View; before that, Google Earth. He was the inventor of the satellite system Keyhole, which became the basis for Google Earth when Google bought it from the CIA [Central Intelligence Agency], and Hanke came with it to Google. He was someone who had a long history of rejecting the claims of folks in towns and cities who didn't like Street View cars coming through their neighborhoods and capturing their houses and their neighborhoods for Google Maps. Okay. So, this is John Hanke, who for many years had his laboratory inside Google, called Niantic Labs, developing this augmented reality game. And then, at the very last minute, it was spun off from Google and brought to market as if Niantic Labs was an independent company--of course, its primary investor remained Google. Most people know Pokemon Go as a huge worldwide success. A lot of folks went out and looked for Pokemon creatures. Families did it, you know, getting out through the city, in the parks, on the streets, and so forth. What we learned eventually about Pokemon Go was that Pokemon Go was an experiment in economies of action. In this case, Niantic Labs was growing its own futures markets. It had establishments--well-known institutions from McDonald's to Starbucks--but also Joe's Pizza and the tire place in town. These establishments were paying fees to Niantic Labs, analogous to online advertisers. But, in this case, they are not asking for click-through rates. Which would be the online equivalent. In the real world now, with real life, real bodies, they are asking for footfall. They want the actual bodies with the real feet on their floors, in their establishments. And, what the game did was it used the incentives intrinsic to the game to learn how to herd people through the city, to the establishments that were paying Niantic Labs for footfall--guaranteed outcomes. 
Now, none of this was known to the public when it was happening. It surfaced later, and came out in an FT [Financial Times?] article, initially, where Hanke was interviewed. And, even today, the vast majority of people have no idea that Pokemon Go was monetized in this way. Okay. So, here we have a basis now for this highly capitalized experimental work, learning how to shift, shape, modify, tune, herd, direct human behavior in ways that are designed to be undetectable. Bypassing awareness. 'Bypassing awareness'--what does that mean? Well, psychologists talk about the fact that awareness is essential for what they call self-determination. So, you can't be a self-determining individual, an autonomous individual, without having awareness of your situation. And of your own action. So, here we now have surveillance capital intervening in human behavior, at scale, learning how to shape human behavior at scale, in ways that are undetectable. Which are designed to bypass awareness. And, you know: Right to say that people don't realize this, and maybe at the moment do not care. But a lot of the 'do not care' phenomenon is explained by the fact that these operations are specifically designed to bypass our awareness. To keep us in ignorance. While entertaining us with the game. Heh, heh, heh. Entertaining us with the Pokemon creatures, and so forth. Entertaining us on our Facebook pages. But indeed, these are systematic, highly capitalized operations that are a direct response to economic imperatives. This is not James Bond, with some, you know, Evil Empire cooking up this stuff. These are people who are now committed to an economic logic that requires these kinds of practices. You mentioned Waze a moment ago, and that it enhances your quality of life. And I'm sure it does. 
But it's also important for folks to know that Waze is a Google application that is part of a larger--a larger vision of a smart city, what Google used to call a Google City. And how a smart city can function. And the Waze Application, right now, has also recently clarified that they, too, have established their own futures markets. So, they've got McDonald's and other establishments that are paying them to send folks their way. And these dynamics are not transparent. They are not disclosed to drivers.
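[The "footfall" market described above can be sketched as another expected-value choice, this time over physical venues rather than ads: a venue pays a fee per visit, and the platform favors placements with the highest expected fee. This is a purely hypothetical sketch; the venue names, fees, and visit probabilities are invented, and this is not Niantic's or Waze's actual code.--Econlib Ed.]

```python
# Hypothetical sketch of footfall pricing: venues pay per visit, and
# the game picks the lure placement with the highest expected fee
# (fee * estimated probability the player actually walks in). All
# names, fees, and probabilities are invented for illustration.

def best_lure(venues):
    """Return the venue with the highest expected footfall revenue."""
    return max(venues, key=lambda v: v["fee"] * v["p_visit"])

venues = [
    {"name": "JoesPizza",  "fee": 0.50, "p_visit": 0.30},  # expected 0.150
    {"name": "TirePlace",  "fee": 2.00, "p_visit": 0.05},  # expected 0.100
    {"name": "CoffeeShop", "fee": 0.75, "p_visit": 0.25},  # expected 0.1875
]

print(best_lure(venues)["name"])  # CoffeeShop
```

[The structure is identical to the click-through-rate ranking: a prediction of behavior, multiplied by a price, decides where people are steered.--Econlib Ed.]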

1:02:50

Russ Roberts: But, when I see the ad for McDonald's, I kind of figure. You know. I kind of get it.

Shoshana Zuboff: Waze is priding itself on its ability to gather data about drivers that go far beyond, you know, where you are on the highway right now in your commute.

Russ Roberts: Sure. They have ambitious goals, like you talk about. Many of them are going to be good for human beings, and some of them probably are going to be good for Waze and Google and not so good for human beings. You know, when we think about--

Shoshana Zuboff: Right. So, let's--

Russ Roberts: Well, I want to--

Shoshana Zuboff: So let's go back to your fundamental challenge to me, which is: Why is this a cause for concern? So, the causes for concern here: First of all, you see this progression from the subliminal cues, the remote herding, now, to the actual application in something like Waze. These are the mechanisms and methodologies of a company like Alphabet/Google/Sidewalk Labs. This is intrinsic to their vision of a smart city. The idea that now, these computational conclusions, these computational analyses replace the frictionful, messy, often-conflictful back-and-forth of municipal governance. And that, you know, we run cities in this more frictionless way by having these immense data flows, the trillions of data points, the millions of predictions per second. And we compute; and we can herd populations through the cities, through the towns. We can tune and shape and modify in ways that maximize the outcomes that we seek. And, of course, the problem is that this is being prosecuted under the aegis of private capital--specifically private surveillance capital. There is no democracy here. There is no self-determination here. There is no shared citizen solidarity, governance, democratic power here. This is a completely different kind of future, and a different kind of solution for our future, that is profoundly anti-democratic. So, let me come back to the concerns. 'What are your concerns, Shoshana? Why are you concerned?' When we put all the pieces together here, I've said to you earlier: This is not soldiers in jackboots coming to tear you from your bed in the dark of night. This is a different kind of power. And it's a power that is profoundly anti-democratic. And it erodes democracy from below and from above. It erodes democracy from below because its own economic imperatives produce a requirement for economies of action: Behavior modification of populations. At scale. All of it mediated by digital architecture. 
Which is now hijacked by this economic logic. And this digital architecture is the means through which power operates remotely. But this is a direct assault on human autonomy, on human agency. And it wasn't that long ago that within our own government, there was a clear understanding that these kinds of methods and mechanisms, by impinging on human autonomy, violate individual sovereignty and are a threat to freedom. And actually, all of our thinking about the fabric of a democratic society and what is required for democracy--all of our thinking turns on the idea that we have people who have fundamental agency. Who can be self-determining. But now we are looking at global architectures that are aimed at those human capabilities. Okay.

1:08:15

Russ Roberts: Well, let me--hang on--

Shoshana Zuboff: How does it--

Russ Roberts: Shoshana, hang on one sec.

Shoshana Zuboff: Sure.

Russ Roberts: I want to take this in a--we don't have a lot of time left, and I want to make sure we talk about something we haven't talked about. Which is: It's all interesting. Could be true. And, as I said, I'm somewhat worried about it. I will observe that the democratic process that currently runs our cities is not terribly successful. I think that's worth mentioning. But I certainly don't want to replace it with a corporate-run alternative, without competition. Certainly one of the challenges of the profit motive in the realm we are talking about is that competition is what usually protects consumers from the rapacious aspect of the profit motive. And without that competition we are vulnerable. And I think there is a serious concern there in these areas that's real. But, now the question, the harder question, is: Let's say you are right. Now what? What do you want to do about it? Do you have an idea--other than the fact that you don't like the profit motive--and I agree, it is consuming; it is out there; it is encouraging the monetization of all kinds of things; it's the way of the world in modernity. Should we stop it? Should we create a foundation that runs our search engine? Should we create a utility that is run by government that oversees these organizations? Should we break them up? Or should we just have people who write books and have podcasts who encourage people to look for alternatives that are less disturbing? Which would be another way that we've solved these problems historically. What are your thoughts?

Shoshana Zuboff: Well, look. The big picture here--and, the other angle through which to really understand the threats to democracy is that we're about to enter the 3rd decade of the 21st century. And, our expectation was that in this digital future we would be enjoying the democratization of knowledge, and all the emancipatory potential of the digital. In fact--and this relates to your theme of competition--in fact, we are entering this 3rd decade with a social pattern marked by asymmetries of knowledge and power that actually harken back to a pre-modern kind of societal pattern. A pre-Gutenberg societal pattern. Now, we have a very strange situation on our hands. Nearly all of the world's information has shifted from analog to digital. And yet we only have a handful of institutions. And, in that very short list, they are all privately owned surveillance capitalists. A handful of institutions who are even capable of computing the vast amounts of data that exist. So, when we talk about, you know, the trillions of data points per day, and the six million predictions per second, we are talking about asymmetries of knowledge that are intolerable for a democracy. Okay. So, are we going to fix this by breaking the companies up? Are we going to fix this by imposing privacy law? Are we going to fix this by creating government-run utilities? These are all amazingly important questions. My view is this: People say, 'Oh, gosh, you know, I learn about this and I feel so depressed and I feel helpless and I feel resigned.' And, you know, 'How are we ever going to fix this?' I feel differently. I feel extremely optimistic about our situation. And the reason is that we haven't even tried to fix it yet. Surveillance capitalism is 19 years old. During those 19 years it has essentially been unimpeded by law. It has had a free run. I have a section in Chapter 11 of my book where I ask the question: How do they get away with it? 
And I answer it with 16 reasons that are analyzed in depth in the book. The point is that, key among those reasons, is that these operations have been so unprecedented, so hidden--our ignorance has been so comprehensive--that we haven't created the kind of law or regulatory frameworks that would tame these operations. And temper the destructive aspects of these operations--you know, temper this capitalism to the real demands of a flourishing democratic society. So, if we were going to talk about law, what kind of law would we talk about? Well, privacy law is incredibly important here. And in privacy law, people begin with principles of data ownership. Data portability. Data extensibility. The problem here is that while we may get ownership of and access to the kinds of data that we give these platforms, we are not going to get access or ownership to the kinds of data that they produce within their production processes. We're not getting access to those trillions of data points. We're not getting access to those 6 million predictions per second. So, privacy law is a [?] that doesn't take us far enough. Antitrust law: There are many, many grounds on which surveillance capitalists are also ruthless capitalists. And there are serious problems of monopoly and anti-competitive behavior. And we need to get serious about these dynamics. And there is a lot more discussion today, as you know, Russ, about sort of awakening the sleeping giant of antitrust law and pointing it at the tech sector. My concern is that, to a certain extent, antitrust law is designed to respond to the harms that we encountered in the late 19th and the 20th centuries, and not to respond to the new and unprecedented mechanisms and methods that we've been discussing, associated with surveillance capitalism.

Russ Roberts: Yeah, I agree. I don't think it's designed for it, and I don't think it gets to the heart of the problem. Which is, to me it's the property rights problem. But there's obviously different ways of looking at it.

Shoshana Zuboff: Yeah. So, you know--and what we don't want to do is break up--for example, break up Facebook, break up Google--and end up with 4 or 5 smaller surveillance capitalists, which will simply create more competitive opportunity in the field. So, increasing competition through more surveillance capitalism isn't going to solve the problems that we've been talking about. So, here--

1:16:48

Russ Roberts: Well, it might, if--it's possible--which is, to take an observation from Arnold Kling: You have a lot of smart friends. You probably have more than I do. I have a couple. But, you have a bunch. You've been at Berkman[?]. You have a lot of talented people there, in Silicon Valley, who are uneasy about the state of things. Why won't they start a Google or a Facebook that doesn't have these characteristics from the beginning? As, like, DuckDuckGo at least claims to do. And collect all the data for good reasons; make it public; don't hide behind the black box. So, get donations rather than profits. Or get users to pay fees rather than monetizing their behavioral surplus. Wouldn't that work?

Shoshana Zuboff: Well, this is what I'm saying, Russ. That, if we break up companies without challenging, with law and alternative regulatory frameworks, the fundamental mechanisms and methods I've been describing, then we leave the field open for a more intensified competition among surveillance capitalists, and new surveillance capitalist entrants. Because we haven't confronted these mechanisms and methods. So, the next step in my reasoning is that beginning to think freshly for the 21st century, for these unprecedented conditions, about what law and regulation might look like--that then opens the space for the competitive solutions that we desperately need. So, let me give you an example of what I'm talking about. I describe the sequence that begins with claiming private human experience as free raw material and ends with the dynamics of human futures markets. I think there are opportunities to intervene at the front end and at the back end that would make a substantial difference, and open up the field for the kinds of competitors who want to, sort of, redirect our trajectory toward the digital future. In a way that produces the good outcomes that we see without the costs that we've been discussing. So, for example, if we got real about saying, 'You are not allowed to take my experience'--for example, right, you know, you may know that a couple of years ago there was a multistakeholder process that was hosted by the Commerce Department. You had the NGOs [Non-Governmental Organizations], you had the companies, you had the government, trying to agree on facial recognition. And the talks broke down, because the companies insist that they should be allowed to have cameras and sensors on the streets that can take what they want, translate it into their facial recognition software, and have our faces. They insist that they have that right. And the government did not fight them on that. 
So, I want to say that is fundamentally incorrect. That the companies have no right to my face. And that I have a right to walk on the street without my face being taken without my knowledge--certainly without my permission--and used in whatever way they choose. So, this is right at the beginning of this process, that we say, 'No, you can't simply take people's experience, and you can't do it in a way that is hidden and deprives them of decision rights.' Okay. At the back end, we can say that we outlaw markets that trade exclusively in human futures. Why not? Because everything that I have described to you arises from the competitive dynamics of these markets. So, just as we say we do not allow markets that trade in slavery, and we say we outlaw markets that trade in human organs--why not say we outlaw markets that trade in human futures? Because these markets--and you said something really important before, Russ, and we didn't really come back to it--but, these markets, these are not the markets that Schumpeter--when Schumpeter talked about creative destruction, which, as you know, has become a sort of [?] for all of this activity--he talked about creative destruction as a small and tragic consequence of the [?] creative process. Creative destruction was the unfortunate consequence of what he called the creative response. And the creative response was supposed to be an economic mutation that really moved the dial of economic history. And his standards for that were very clear. His standards for that were that you can tell an economic mutation from just another innovation because it's such a profound breakthrough that it really benefits all of society. It lifts all boats. It raises the standard of living for the great majority of people. This is not how surveillance capitalism operates. Its profits circulate in a very narrow domain of the companies, their shareholders, and their business customers who operate in and gain value from these futures markets. 
But, these are not profits that circulate back into the economy. These are not profits that [?] the middle class or that help us fund our public education system, or anything else. These are markets where the revenues are essentially parasitic, because they are based on taking raw material from us without asking.

1:23:46

Russ Roberts: Well, we do get something in return--

Shoshana Zuboff: We do. We do get something in return--

Russ Roberts: And that return goes to every single, almost every single person--rich and poor. They all watch YouTube. They all are using Waze. They are all on email. They are all using the Internet in their pocket. Everybody's got a smart phone, rich and poor: almost everyone. It's kind of extraordinary. So it's a complicated thing. And I agree with you that we need some new ways of thinking about it, and--

Shoshana Zuboff: But this is--what you are saying, Russ, is by design. That's the whole point. It's by design. The whole idea of free was by design, in order to establish invariable, dependable supply chains--

Russ Roberts: yep--

Shoshana Zuboff: of behavioral surplus. So, you know, for example, Android. When they developed Android at Google there was a bunch of people at Google who said, 'Great. Now we finally have something, we can sell it with a hefty margin and we can finally compete with Apple.' But other minds prevailed. And those minds said, 'No, no, no. Just the opposite. If we can give this away, let's give it away. Because this is going to be our most powerful supply chain interface. This is going to be the way,' you know, 'we'll claim that it's the mobility revolution, and this is going to be the way that we stream data from all over the place. This is going to free us from the desktop.'

Russ Roberts: Yeah. But if we didn't like all that streaming, we wouldn't use the free phone. That's the only point I want to make--

Shoshana Zuboff: But that's not true, Russ--

Russ Roberts: No?

Shoshana Zuboff: because we don't know about it. This is the fundamental issue. You know, I read about this research that was published in the American Journal of Medicine where they investigated a bunch of health-related applications--specifically, in this case, diabetes applications. That are approved by the FDA [Food and Drug Administration], because now the FDA actually approves certain applications. And they discovered--and this requires forensic analysis--people don't know this because it's designed so that they can't know it--every single diabetes application that they reviewed was, first of all, streaming data to third parties that had nothing to do with the health domain. And again, many of those domains--the majority of those domains--are owned by Google and Facebook. But they are also doing other things the second you just download an application, a diabetes application. They are doing things like taking your contact list; in some cases they then use the contact list to contact your contacts, and they take those contact lists. Many of them commandeer the microphone, the camera; learn about other applications on your phone--your messages, your email. This is happening through these innocent--so-called innocent--diabetes apps. No one knows that these things are going on.

Russ Roberts: Well, now they do. And I guess the question is, as you point out--the question you point out is this: Maybe we should change the regulatory environment so that we have to give them permission to share that information? And, for better or for worse, I think many people will choose to give that information away. And I think a lot of what you are talking about--which, again, I'm sympathetic to--is our culture, is part of this challenge. And we have to decide whether we are going to restrict the choices of individuals to give that data away, through regulatory restrictions--which might be a good idea. Or whether we are going to rely on people to choose to do it voluntarily or not. You know, there was a big stink at the beginning of this year that you had to give permission to use cookies. So now I get these annoying ads--annoying information bars--saying, 'We use cookies on this site. Is that okay?' And, of course, I click 'Yes.' Maybe I shouldn't, but I do. I knew they were using cookies before; but now they are required by law. I think that's a mindless and unhelpful regulation. But, you know, it gets at this fundamental price[?]--you know, it's like saying, 'You've got to disclose.' So we have a disclosure statement that says, you know, 'You have to check this box.' Nobody reads it. They don't want to. So I think that--

Shoshana Zuboff: Right. Let me just--I'm going to have to go. So, but I would like to just end on this one note: that, I appreciate what you're saying, but I don't think the research bears you out. Because, when you look at the research on the users going back to, really, you know, as early as 2003, 2004, 2005, survey research, other kinds of participant research, when people learn about these backstage operations, historically, they are appalled.

Russ Roberts: Yep.

Shoshana Zuboff: They are outraged. And they don't want anything to do with them.

Russ Roberts: Yeah. And Facebook lost a lot of users this year. And maybe they'll lose more. I don't use it. I don't recommend that people use it. I encourage people to do other things. It's a good point. I think we need more of that, probably.

Shoshana Zuboff: So, yeah. But then, you know, folks, despite those reactions, keep using.

Russ Roberts: Yeah. They like it.

Shoshana Zuboff: And so, you know, the companies have pointed to this over the years and said, 'See,' kind of like what you're saying, 'people really like this.' And, 'There's nothing wrong with what we're doing, because people continue to participate.' Again, this is my, part of my 16 Reasons about how they get away with it. But, so this contrast between how people feel and then how they behave has been referred to as the Privacy Paradox. But, in fact, my argument is it's not a paradox. And it's not a paradox because we know what we want. And we know what we reject. But we are living in a world where the alternatives have been systematically foreclosed. So that, I'm in a situation now where I want to get my kids' grades from the school; I want to get my health results from my doctor's office; I want to organize dinner with my family, friends, at a restaurant--just for these basic operations of daily effectiveness, I am required to march through the same channels that are surveillance capitalism's supply chains--

Russ Roberts: Yep--

Shoshana Zuboff: where hidden operations are working on my action. And are scraping my experience for predictive signals. And this is ubiquitous. So, we are increasingly, you know, in this world of no exit. And, from an economic point of view, from a business point of view, from a competitive point of view, you know, it's hard not to see this as some kind of giant market failure. Because, in fact, the disconnect between supply and demand, to me, is a better[?] explanation than to call it a Privacy Paradox. It's not a paradox. It's a disconnect. Because what people want is not aligned with what's on offer. And so, my view is that if we actually got serious about these regulations that were right[?] on surveillance capitalism, that opens up the space for a new kind of competitor to come into the space, form alliances, create a new ecosystem, that really takes us on a different path to the future. And that really--you know, sort of gets us back to the kind of thing that Schumpeter talked about, which is: What is entailed for a healthy, flourishing capitalism, such that we can have the concept of a market democracy, and it can make some kind of sense?

 

