Benkler on Net Neutrality, Competition, and the Future of the Internet
Apr 5 2010

Yochai Benkler of Harvard University talks to EconTalk host Russ Roberts about net neutrality, access to the internet, and innovation. Benkler argues in favor of net neutrality and government support of broadband access. He is skeptical of the virtues of new technology (such as the iPad), fearing that it will lead to less innovation. The conversation closes with a discussion of commons-based peer production--open source software and Wikipedia.

Explore audio highlights, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.


Tom Vest
Apr 5 2010 at 9:14am

Re: Everything is relative to the baseline… including history

Hi Russ,

Great podcast — very interesting to hear you directly address the industry that provides the primary context for much of my own economic thinking. My first question is prompted by your skeptical counterpoint @34:00 about country/case comparisons being fundamentally inconclusive, if not incommensurable (i.e., divergent growth trajectories in different countries can *always* be explained by something other than the explanatory theory under discussion). It seems to me that that kind of skeptical stance begs the question of how one could ever derive the sort of “first principles” that one might use as you suggest to rebut empirical claims. If Adam Smith could not legitimately argue for the superiority of (more) open international markets and competition over mercantilism based on actual observations of different countries, then doesn’t that implicitly but absolutely limit the legitimate scope of the economic discipline to a sub-branch of personal ethics, i.e., literally “moral philosophy”? It seems to me that that line of thinking could have many broad and not-entirely constructive implications… e.g., it skirts uncomfortably close to criticisms leveled in some circles that economics has become a kind of “religious thinking,” and as such should not be accorded the level of consideration and deference that it currently enjoys in public policy-making…

[P.S. For the record, the existence of consistent, global-scale, automatically/technically collected (daily or even higher frequency) time series data on Internet “production” — data that can be aggregated to the enterprise and/or national level as needed, and has now been accumulating for the last 12+ years — provides a kind of “God’s eye view” of this industry that really does make some elements of the sector more amenable to certain kinds of empirically-grounded claims]

Seth Young
Apr 5 2010 at 10:07am

Thank you for linking to the October 2009 draft of _Next Generation Connectivity_. For those who are interested, the February 2010 final (plus data) can be downloaded at

Mads Lindstrøm
Apr 5 2010 at 12:38pm


From the podcast (Russ):

“Most of the time–all the time in technology markets in my [Russ’s] lifetime the great monopolist who threatened to ruin the world turned out not to be much of a monopolist after all. It was IBM in the 1960s-1970s who was going to destroy the world; then it was Microsoft; now it’s Google; yet somehow competition always comes from an unexpected source.”

Even if the monopolist changes every 20-30 years, you still have a very inefficient market. Yes, you get competition in transition periods, and a new and better technology is used by the new monopolist. But you still have 20-30 year intervals with little innovation (compared to a competitive marketplace).

That said, I do not think the changing-monopolist story is accurate. What happened was not that Microsoft replaced IBM, but that Microsoft (the PC, if you will) developed and conquered a new market. Remember, IBM's mainframes are still going strong, and most financial institutions still run their backend systems on IBM mainframes. Of course, there is overlap between potential PC customers and potential mainframe customers, so the situation is better than I describe it.

I think a more accurate story is one of many smaller monopolists: monopolists within their section of the market. SAS for statistical software, Microsoft for the desktop, mainframes in financial institutions, …

However, government regulation is not very good either. A better strategy might be for the government to insist on open-standards-based software in its own purchases. Removing software patents, one tool used to limit competition, would also be helpful.

BoscoH
Apr 5 2010 at 1:07pm

My take-away from this podcast was the mindset of those who want fairly strict regulation at the lower levels of “the stack”. First, I don’t know if Russ was playing coy for the sake of the conversation or his audience, but at first, it didn’t sound like he understood “the stack”. And most people don’t. The arguments and proposals sound arcane and disconnected. Open access pertains to that last-mile copper or fiber loop. Net neutrality pertains to the actual bits and services once you have an Internet connection established over that loop (or over wireless or whatever).

But beyond all of the technical detail, Benkler sounds like a dogmatic libertarian on the social aspects. Some business models will die and some will thrive, and people's working arrangements will be freer and more diverse. We're all working for the same ends. I think what the people so focused on open access and net neutrality need to see is how difficult vertical integration has proven. Wouldn't Time Warner AOL Turner have figured it out a decade ago if it were actually doable? Firms are just far more profitable in the layered engineering model by focusing on a few things they do really, really well than by trying to own the whole communications spectrum.

Apr 5 2010 at 1:44pm

Very interesting Podcast. Your questions and challenges to your guest were perceptive and helpful.

re: the high fixed cost of paying people to dig trenches. I believe part of the National Broadband Plan proposes federal subsidies for this to bring about “universal access.”

But it surprises me that there seem to be limited market forces at work on this problem of unserved areas. There seems to be one fixed price for broadband connectivity ($20 a month for Verizon's DSL, for example). If it cannot be delivered for that price, it is just unavailable.

Since in a rural area population density is too low to justify the infrastructure expansion cost, DSL is not available at all. Why isn't the price variable, dependent, for example, on population density? That would allow the user, the one who benefits, to pay for all or part of the infrastructure expansion.

Tom Vest
Apr 6 2010 at 8:43am

Re: Mads & BoscoH comments about monopolies

Mads' historical observations about many small, serial monopolies sound pretty realistic. However, the existence of such "niche" monopolies in completely new industries would not be likely to create the problems discussed in this podcast, because those monopolies would not, by definition, be imposing any new barriers to commerce or market entry on "downstream" firms or industries. In general, one might say that most monopolies rise to the level of a public policy problem (only) at the point when they represent a bottleneck to growth and innovation in/by other economic sectors.

BoscoH's observations about the AOL+Time Warner merger difficulties ring true (at least to one direct participant), but that particular level of integration is neither necessary nor particularly relevant to the kind of "vertical integration" problems that are widely discussed in this context. While Internet protocol-based communications (aka packet switching) has been revealed to be more efficient than the connection-oriented service delivery paradigm that dominated the pre-Internet era, the Internet's true economic significance derives from its capacity to support decentralized technical management and participation. Whereas the old system was broadly characterized by *one* monolithic physical communications facilities owner per territory, who was also the exclusive manager, decision maker, and value-added service provider across the whole of that physical platform, the core Internet protocols make it possible to decouple management decision-making capability from ownership/control of the physical platform — and potentially to decentralize and distribute that capability on a segment-by-segment basis to multiple independent decision makers. Looking across the increasingly diverse patterns of regional and national-level Internet development over the past two decades, Benkler's recent study reaffirms the common observation that Internet growth and innovation have been most pronounced in times/places where that potential for decentralized management has actually been fulfilled (and thus created the possibility of the accelerated/decentralized experimentation that Benkler described). By contrast, in times and places where direct ownership of physical communications facilities segments (esp. segments that are extremely difficult if not impossible to duplicate) continues to be a prerequisite for meaningful commercial participation in the Internet, growth and innovation tend to stagnate.

Zachary Ward
Apr 6 2010 at 8:58am

Wow, the second half of this podcast was extremely interesting. So many dynamics in play; I loved the back-and-forth.

Future podcast request: I'd like you to interview someone and discuss why some countries peg their currency to the US dollar, why some people have problems with an 'undervalued' yuan, and what its implications are for both China and the world. It's hard to get well-informed information on these issues beyond the basic 'the weaker the currency, the more competitive exports are.' I know there are many secondary effects, e.g., of carrying too-large or too-small foreign currency reserves. That would be great! Thanks,

John Ives
Apr 6 2010 at 11:51am

Hi Russ,

The portion of the discussion regarding the Kindle was interesting.

I am curious what licensing issues remain to be worked out between Amazon and Apple regarding the Kindle app on the iPad that was referenced by Benkler? I have been using the Kindle app on my iPhone for nearly a year without a licensing hitch.

Justin P
Apr 6 2010 at 2:55pm

Interesting podcast to say the least.

When Benkler talks about the US vs. Europe in this statement:

"If the position that open access doesn't work were correct, then the United States was poised in the best possible position in 2001. We were the ones that had cable in more than 50% of the market, had a regulatory environment that allowed every company to control its own infrastructure; we had existing infrastructure; we were the leaders and we had the regulatory environment that built on existing investments and didn't require access."

Is he taking into account the enormous amount of public money in the EU that distorts the market? He makes a coy reference to it, but then tries to play the whole thing off as a "natural experiment."
How can it be a natural experiment when you have the US, with essentially no public money subsidizing its model, compared to a European model that heavily subsidizes its market? It's not clear, from my listening, that he ever makes that distinction. (He could have; I was working in the yard while listening, so I could have missed it.)

Re: Net Neutrality, seems like the court has told the FCC to shove it on this one.

Do you think we are going to see Congress attempt to pass laws trying to regulate the Internet because the FCC has no authority? Governments would love to regulate and tax the Internet, but they haven't been able to pass anything. Is this a way for the camel to get its nose under the tent?

Gavin Andresen
Apr 6 2010 at 4:45pm

Good (and timely!) podcast, although Professor Benkler sure likes to use Really Big Words when little ones would suffice.

The discussion of the high capital costs required to enable competition for the “last mile” to the home reminded me of a (bad) experience I had with a satellite TV provider. It took two guys several man-hours to get one of those mini-satellite-dishes installed on my roof, and they ended up putting one of the jacks smack-dab in the middle of a hallway. I was unsatisfied with the service (it didn’t play nicely with my Tivo and so was unacceptable), canceled, and paid nothing beyond a little wasted time.

I bet the same people who argue that private companies can’t POSSIBLY do last-mile fiber would’ve argued twenty years ago that private companies couldn’t POSSIBLY do satellite TV because the capital costs are so high (private companies creating fleets of SATELLITES just for TV?!? And think of the hours of work and insurance liability and equipment costs to put a satellite dish on top of a house!)

Open up the telephone poles and utility tunnels (and sewers) to competition and I think in twenty years most of us would have the option of two or three fiber line providers in our houses.

Apr 6 2010 at 6:13pm

Looks like cable companies aren't the only ones who can dig trenches. It is almost as though profits signal entry…gasp!

Looks like competition remains not a gentle flower, but a stubborn weed.

Apr 6 2010 at 10:45pm

I generally love the podcast, but Russ, I wish you’d done a little more background research before this one. Understanding the basic terms, such as “net neutrality” and “open access,” would have really helped the discussion. And the contention that there’s any real dispute about the falling position the US holds in terms of broadband access compared to other nations really doesn’t hold water, especially when you’re arguing against an expert who has just completed an extensive study on the subject. Sadly, I was reminded of climate change deniers when listening to this part. Though the causes and solutions may be in dispute, broadband speed and penetration are clearly advancing much faster in almost every industrialized nation.

Russ Roberts
Apr 6 2010 at 11:46pm


Alas, even experts who have completed extensive studies are human. There have been extensive critiques of the Berkman Center study that Benkler led. Some of these critiques are also from experts. Here is one:

Google around and you’ll find further critiques and a defense by Benkler. I’ll post some of these.

In general, it is very difficult to do reliable international cross-section time-series analysis. It requires lots of assumptions about comparability and lots of opportunities to reject and exclude data. I am highly skeptical of the reliability of such studies without details of how the analysis was done and how decisions were made about what to include and exclude.

Keith Wiggans
Apr 7 2010 at 6:49am

The podcast reminds me of a discussion I had with a friend in the past.
He insisted that the way to improve consumer welfare was competition, forced or otherwise. The closer an industry is to a perfectly competitive model, the better for the consumer.
I countered with the proposal that competition is not the important idea, but instead free entry and exit into markets. Just the threat of competitors can discipline an incumbent. The fear in this case is that the fixed costs are so high for entry into provision of high-speed internet, that incumbents will be able to behave more like a true government monopoly.
But why is that the case? I believe the entire reason this open-access argument is occurring today is previous government regulation curbing the way communications would otherwise have been delivered to the home.
Without local charters, there could have been hundreds of cable or phone providers supplying consumers. Not only that, but there is no reason in my mind that every part of the network needs to be owned by the same company. Entrepreneurs may have bought land, laid cables, and rented the broadband to the providers. The arrangements could have occurred in an infinite number of unpredictable ways. I agree that wireless provision of broadband can't match cable wires, but the FCC has stifled development of technology utilizing the EM spectrum since the 1920s. What kind of advances would have been made if all parts of the spectrum were subject to the innovation machine that is private property?
I believe Russ is justified in his skepticism regarding the government's ability to force a competitive and efficient market. If government regulations placed us in this position to begin with (analogous to the healthcare debate), in what universe could they possibly fix the problem?

Apr 7 2010 at 7:03am

Benkler's worry that a cloud service can lock customers in has a grain of truth in it. These services differ from regular hosting services in that they offer program interfaces to customers: customers write code to a specific API, hence code usable on one cloud service is not usable anywhere else. However, these services are usually a mix of closed- and open-source solutions. So yes, even though moving the proprietary part to an open system would be hard to do, it is not impossible. Google's cloud service, for instance, offers Python, web, and Bigtable database services, two of which are completely open. Bigtable is proprietary behind an interface, and someone could offer their own implementation of that interface, and customers could start running their code on their own servers instead of Google's. I am not worried.
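The lock-in worry above comes down to application code being written directly against a provider-specific API. One common mitigation, sketched below in Python with entirely hypothetical class and function names, is to code against a thin storage interface so that a proprietary backend (Bigtable in the comment above) could later be swapped for an open or self-hosted implementation without touching application logic:

```python
from abc import ABC, abstractmethod
from typing import Optional


class KeyValueStore(ABC):
    """Minimal storage interface the application codes against."""

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...


class InMemoryStore(KeyValueStore):
    """Open, self-hosted backend; a Bigtable-backed class could implement
    the same interface without changing any application code."""

    def __init__(self) -> None:
        self._data: dict = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)


def save_user_email(store: KeyValueStore, user: str, email: str) -> None:
    # Application logic depends only on the interface, not on the vendor.
    store.put(f"user:{user}:email", email)


store = InMemoryStore()
save_user_email(store, "alice", "alice@example.com")
print(store.get("user:alice:email"))  # → alice@example.com
```

The point is not that this makes migration free, only that the cost of leaving a provider is concentrated in one replaceable class rather than spread through the whole codebase.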

Tom Vest
Apr 7 2010 at 11:42am

Re: Dr. Roberts' doubts about cross-national data, Justin P's assertion of US exceptionalism (aka incomparability), & Keith Wiggans's fatalistic response to the challenges of inheriting a less than (market) perfect world:


To the extent that skepticism about the empirical foundations of cross-country comparisons is rooted in doubts about the consistency, "objectivity," and/or completeness of government-collected statistics, Dr. Roberts and others might be interested in the University of Oregon's Route Views Archive Project:

The archive contains what are, in effect, automatically collected daily (+) aggregate snapshots of multiple views of *the entire universe* of Internet protocol number resources that are concurrently used by public Internet networks to interconnect and exchange traffic with each other. Appx. three-quarters or more of that universe (at least in the most recent views) is composed of IP number resources that were distributed in specific quantities by technical authorities based on criteria that are isomorphic to the Quantity Theory (note: to date, the velocity term in this context ~0). No central authority of any kind, public or private, has filtered or otherwise influenced the form or substance of this IP address usage-related data in any way. The global scope data archive starts in November 1997, at a point in time when appx. 60-70 countries still had no independent operational presence in the global routing system, so at least for some parts of the world it really does represent the closest thing to a “God’s eye view” of a single industrial sector that we are ever likely to see.

Note: parsing and interpreting the Route Views data in way(s) that are consistent with actual IP addressing usage practices requires a fair amount of technical expertise and operational familiarity with how the Internet sector “works” in different market and regulatory environments, but for those who can overcome those hurdles the data really is worth it.
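For readers curious what the simplest version of that parsing looks like, here is a toy Python sketch that extracts the origin AS (the last ASN on each AS path) from "show ip bgp"-style table lines. The sample lines are invented for illustration; real Route Views snapshots are vastly larger, and serious analysis requires a proper MRT/RIB parser plus the operational judgment the note above describes:

```python
# Toy sketch: collect unique origin ASes from BGP table lines in the
# "show ip bgp" text style. Sample lines are invented, not real routes.

sample_lines = [
    "*> 10.0.0.0/8      192.0.2.1       0 3356 701 i",
    "*> 198.51.100.0/24 192.0.2.2       0 2914 174 64500 i",
    "*  10.0.0.0/8      192.0.2.3       0 1239 701 i",
]


def origin_asn(line):
    """Origin AS = last ASN on the AS path, i.e. the token just before
    the trailing origin code ('i', 'e', or '?')."""
    tokens = line.split()
    if tokens and tokens[-1] in ("i", "e", "?"):
        tokens = tokens[:-1]  # drop the origin code
    return int(tokens[-1]) if tokens and tokens[-1].isdigit() else None


origins = {asn for line in sample_lines if (asn := origin_asn(line)) is not None}
print(sorted(origins))  # → [701, 64500]
```

Even this trivial pass hints at the interpretive issues mentioned above: multiple views of the same prefix, private-range ASNs, and path prepending all require judgment calls before counts like these mean anything.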


Re Justin P’s assertion that the US and European countries are incomparable because of government subsidies in Europe

This is an inverse variant of the genetic fallacy: no two things that emerged under different conditions can ever be deemed similar or comparable thereafter, regardless of any other characteristics that they may share. The factual grounds for such a claim in this context are highly dubious (as Keith Wiggans's comment makes clear), but even if the European telcos were all Soviet-style industrial collectives and the US telecom sector emerged straight out of the mind of Ayn Rand, the rebuttal would still be fallacious. One could mount a slightly more legitimate critique by weighing the undeniably superior elements of Internet access in some European markets against the fully loaded cost of those features in terms of broad but marginal/incremental additions to the EU tax burden, narrow "takings" impacts on incumbent US monopolists, etc. — but that's a much tougher argument to make.


Re Keith Wiggans's dissatisfaction with US history

I heartily agree with your insight about free entry/exit, and am happy to stipulate for the sake of argument that post-1934, entry into the telecom sector was made incrementally more difficult because of FCC policies. But exactly how "open" do you think opportunities to enter the unregulated, privately monopolized market were before that date, and what would you say was the FCC's marginal (negative) contribution to that secular level of (non)openness? I imagine that those who established the FCC felt the same kind of frustration that you express here; history to that point had presented them with a menu of bad choices, from which they felt compelled to choose what they believed was the least bad combination, i.e., honor property rights in the main, but recognize that market power is fungible and in some cases self-reinforcing, and that telecom is too important to be allowed to go very badly wrong, and solve for all of the above with limited, sector-specific regulation.

I also sympathize with Dr. Roberts’ skepticism, but to his frequent observations that (roughly) “some people always seem to be too eager to just do something, anything,” I would ask: at what point does “doing something” that involves making tradeoffs, e.g., based on best guesses about least-worst alternatives, become the appropriate response?

IMO anyone who can answer that question with “never” has arrived at that extreme end of belief where faith shades into fatalism.

Thanks again for the great dialogue…

Dave Peterson
Apr 7 2010 at 3:01pm

I can’t help but be a bit skeptical of someone who holds up 100 years of telephone regulation as an example of good policy.

Grant
Apr 7 2010 at 4:54pm


You seem to be implying that the FCC and its regulations were born out of a frustration with existing markets. Yet in the case of net neutrality, regulation is being proposed out of fear of a future (and seemingly reversible) market failure.

How often is politics really about policy and economic outcomes? Does anyone know enough about the history of the FCC to say if its regulations were caused by market failure?

Tom Vest
Apr 8 2010 at 6:33am

Re: Dave Peterson’s remark

I'm with you — now all we need to do is find someone who actually "holds up 100 years of telephone regulation as an example of good policy" (cf. the fallacy of composition).

But since you and I and all the other EconTalk fans seem to be able to communicate with each other quite effectively despite (or more likely *because* of) the fact that these exchanges are being mediated by no less than 4-5 completely independent, competing network services providers, perhaps you share my appreciation for the little things that the FCC did "just do" early on which made it possible for gifted, dedicated technologists (and considerably later, far-sighted entrepreneurs) in the US to successfully deploy an internet. We certainly didn't have a monopoly on gifted technologists, then or now, nor was the US the only place where internet protocols were being developed. What we *did* have back then, however, was a telecom regulatory environment in which the status quo ante was not fetishized as natural or perfect, that was not reflexively assumed to be an industry-level representation of the best of all possible worlds — and in that narrow sense the US might actually have been "exceptional" at that particular moment in history. Without that orientation, and the legal/regulatory artifacts that it inspired (e.g., FCC 60_2d [1976], which stripped Ma Bell of some prerogatives to refuse to sell "private lines" when they might be used as inputs for some commercial service offered by a third party), the internet would likely never have existed in anything like the diverse, open, dynamic, and vibrantly experimental form that it ultimately took. More likely some elements (e.g., packet-switching) would have become just another incremental productivity enhancement under the hood of the monolithic (public or private) national telco service platforms, and that would have been the end of the story.

Note that that’s exactly how “the Internet” did look in many countries for a decade or more after the net first took root in the US, until (very recently in some cases) pro-competitive regulatory reforms similar to (and sometimes directly inspired by) those pioneered in the US were more widely embraced.

Re: Grant’s questions

There are no non-controversial answers to either of your questions. Under the hood, different IP networks don't all look or work the same (apart from generally working well with each other), and even among people who are very familiar with those variations, views about what distinguishes reasonable vs. unreasonable network management practices vary considerably. Some would not willingly implement the kind of network management practices that have been highlighted in recent lawsuits, but others would have few if any misgivings about implementing even more aggressive (or if you prefer "intrusive and restrictive," or alternately "reliable and secure") mechanisms. I have generally found the communities of network operators and technologists that I'm closely acquainted with to be composed of people of good conscience with a strong ethical sense about how their work impacts the wider world — but they could not be said to constitute anything like a representative sample of the decision makers who will ultimately determine how "neutral" the future inter-provider operating environment will or will not be… and the line is quite fuzzy (as well as being virtually opaque to third parties).

Your second question is a variant of the one I posed to Russ in my last message. Judgments about the legitimacy of Regulatory Decision X vary wildly for almost all X. For some people, if X can be characterized as "regulation" then no argument for legitimacy is possible. For others, the existence of a "real monopoly" might warrant some kind of regulation, but they reject the notion that any (non-government-imposed) monopoly has ever actually existed in human history. For yet others, perceptions of legitimacy will be shaped by more situational considerations, e.g., the narrow/proximate "facts" of relevance to specific cases.

Since you seem to be comfortable with phrasing this as an (honest) question, it sounds like you might be a candidate for any of these (or other) overarching perspectives. You’ll probably have to read at least a couple of books (from different authors/different publishers) on the history of the FCC to find out for yourself…

Apr 8 2010 at 2:36pm

Re: Fiber to the home.

Interview with Ivan Seidenberg, Chairman and CEO, Verizon Communications.

INTERVIEWER: You’re investing in wireless at the same time you’re putting fiber optic cables into people’s homes. Do you have to do both, or is one going to end up as the preferred method for —

SEIDENBERG: … we have three businesses … one is … a wireless business, which you all understand. Then we have an Internet backbone business — which is global, we’re in 128 cities; … And then we have a third business, which is this local-access business, which is FiOS [fiber] to the home.

So, contrary to what Benkler said, fiber to the home makes good business sense to at least one company.

Seidenberg also had some strong comments about US vs. the rest of the world in broadband …

INTERVIEWER: … Penetration rates in other countries were higher — are higher than they are in the United States. And so there was a sense that we needed a government policy to assure that we could compete with the rest of the world.

SEIDENBERG: That’s not right. That’s not true. The facts in here don’t support that.

Tom Vest
Apr 9 2010 at 4:35am

Re: Seidenberg comments

Granted, whenever a CEO flatly declares that something is just "not true" without even bothering to back up that declaration in any way, that assertion pretty much trumps any possible counter-argument — ipso facto, it must be so… end of argument!

Sadly, among US policymakers and the general public there seems to be a recurring pattern of failure to heed the wisdom embodied in such claims. To give just one example, back in 1977-1978 the CEO of AT&T (at that time still the monolithic national telco monopoly) argued vigorously that the "resale rule" (the pivotal FCC ruling previously cited as FCC 60_2d) should be rescinded, because permitting multiple decision-making commercial entities not only to coexist in the same territory, but actually to cohabit the same physical network facilities platform, would be grossly, catastrophically at odds with US national interests, not to mention the interests of AT&T shareholders. Those arguments were conveniently packaged in a draft legislative proposal that AT&T dubbed "the Consumer Communications Reform Act," but which ultimately came to be known simply as the "Bell Bill."

If only we had listened to that CEO way back then, life would be so much simpler and less contentious today. No endless arguments about net neutrality, "bandwidth hogs," throttling, or any other contentious Internet issues… in fact, no Internet at all. Maybe now, with the advantage of decades of accumulated evidence and experience to go by (plus 200 +/- concurrent "natural experiments" against which to compare that record), we'll finally learn our lesson and give such claims the full measure of deference and respect that they so clearly deserve?

See the 1977 Washington Post article below for a bit of related historical color.

“To AT&T: ‘Show Me'”
The Washington Post
March 27, 1977, p C6.

THE NEW SENATOR from Missouri, John C. Danforth, got it just right the other day when he told the chairman of the telephone company, “Show me.” The chairman, John D. deButts, was claiming that residential telephone rates are now subsidized by long distance service. But the Federal Communications Commission says it has seen no evidence of such a subsidy, and AT&T hasn’t kept its books in the past in a way that reveals one. So Sen. Danforth, quite properly, told Mr. deButts that the “burden of proof” is on the telephone company to justify its claim.

The subsidy is central to AT&T’s persistent campaign for legislation cutting off competition in the telecommunications field. It has built a massive lobbying effort around the idea that more competition will mean poorer service and higher rates for residential customers. The FCC and the Department of Justice, which have been promoting competition, insist that neither result is inevitable. And no one appears to have the economic data to back up the assertions.

That lack of data has not deterred AT&T’s efforts on Capitol Hill, however. Mr. deButts asked last week for an immediate prohibition against new competition for his company while Congress studies the whole telecommunications field. And the “Bell Bill,” as a piece of legislation entitled “the Consumer Communications Reform Act” has become known, is still being pushed vigorously from all over the country. The force of that effort should not be underestimated. The Bell system reaches into 48 states and employs a million people. It should have been no surprise that its pet legislative proposal picked up more than a third of the members of Congress as sponsors when it was first introduced last year.

That bill, which would grant AT&T an effective monopoly in its industry, does raise many complex issues. The telecommunications field is changing rapidly and its future is not yet defined. As computers become tied to communications circuits, a whole new kind of daily life begins to unfold. It is not clear, at least to us, what form this exploding industry should take or what role government should have in it or in its regulation. But it is clear that no one company ought to have a stranglehold on it and on the introduction of new equipment and new techniques. That is why Congress should proceed with all deliberate speed in dealing with the questions opened up by the Bell Bill and why Sen. Danforth has the right approach.

Tom Vest
Apr 9 2010 at 3:49pm

Anyone left who’s not already sick to death of this discussion would be well-served by listening to the excellent Nov 2008 EconTalk interview with Thomas Hazlett, which covers many of the same issues from an approximation of the opposite perspective to the one that animated Benkler’s remarks. The interview also provides implicit answers to many of the questions raised in the course of this post-Benkler exchange.

IMO Hazlett provides a terrific description of the subtleties of competition in industrial sectors made up of independent firms that are unavoidably bound to each other in decentralized production and distribution chains. One could argue that that modular + dynamically adjusting + open organizational form *is* the defining and essential feature of the Internet; it’s what makes it especially important (i.e., more than “just” another very central and important intermediation mechanism that helps to bind the global economy together). The architecture of the TCP/IP protocols creates the technical potential to sustain this unprecedented scope and scale of voluntary, decentralized, uncoordinated, but amazingly fruitful endeavor (with some 30k+ independent entities collectively conveying an est. avg. 45-50 terabits of data every second to hundreds of millions of individual devices and software processes scattered around the world). However, from an empirical/historical perspective, the time/place where that potential was first widely realized on the ground (i.e., outside of campus CS labs) is very suspiciously correlated with certain pro-competitive (US) regulatory innovations (i.e., they imposed some new, narrow limits on the free agency of an incumbent monopoly in order to make the basic exercise of free agency possible for a whole new class of commercial entities — a class that now encompasses tens of thousands of independently competing and evolving network-operating institutions, including many that are both highly productive and incredibly innovative).

Lots of other relevant+interesting insights there, including a few claims that would seem to be somewhat at odds with history and/or not borne out by subsequent developments. One point that I personally found especially interesting and relevant was Hazlett’s claim(s) to the effect that commercial Internet service providers tend to be highly specialized, and to become progressively less successful whenever they try to range afield from their ‘first, true, home market.’ The comment seems to have been inspired by critical reflections on the now widely disparaged AOL-Time Warner merger (or, more accurately, the AOL acquisition of Time Warner).

The first/empirical part of this claim is basically false, or at least the real-world phenomena which provide some limited foundation for the claim are completely orthogonal to the larger point that Hazlett was seeking to illustrate. Many, perhaps the great majority (?) of small and medium-sized commercial IP networks are indeed highly distinctive — in part because of diverse market conditions in the various localities that they serve, and in part because 100% of the meaningful/useful technical information about how the commercial features of commercial ISPs actually “work” is invariably protected as trade secrets. In addition, there is no easily accessible or authoritative public knowledge base of ISP “best commercial practices” to which aspiring new entrant ISPs might easily refer. Thus, being unable to learn all that much from each other (except through informal, imperfect mechanisms like staff migration), and unable to refer to anything like a basic model or template for the ISP business, in general every new ISP has to (re)discover much about how to run a commercial ISP all over again, all by itself — and that (re)discovery process no doubt contributes to a (perpetually) rich variety of solutions.

No doubt that dynamic has some quite virtuous side-effects — however my purpose in describing it here is not to praise or criticize, but rather to contrast the diversity of business models that it helps to sustain among smallish, newish ISPs, with the increasingly standardized, homogeneous form that much older and larger ISPs gradually come to possess. There are lots of perfectly rational market incentives that encourage large commercial ISPs to gradually become more diversified as they get larger, the details of which are irrelevant for this discussion. The important insight is that as each individual ISP gets larger, and pursues its own private incentives to diversify, the net effect is that all very large ISPs tend to gradually take on an increasingly similar mix of business interests. So, going back to Hazlett’s original observation, the commercial logic of the Internet industry is what *causes* successful ISPs to become less specialized over time. Of course, it might also be true that as ISPs get closer to the “ultimate fulfillment” of this evolutionary process, their failure rate increases dramatically — but that’s an empirical question that to date (AFAIK) no one has seriously addressed. It could be that the only thing that actually changes is the level of superficial drama associated with “high profile” commercial ISP failures.

If that sounds like the kind of industrial dynamic that one might observe in other sectors (e.g., banking and finance), it is not a coincidence — and that brings me to another interesting facet of Hazlett’s original comment. Large albeit highly questionable M&A transactions — many involving cross-market or cross-sectoral investments — seem to be a perennial, ubiquitous feature in the history of every sector of the economy, in almost every economy for which we have records. If memory serves, the balance of (at least recent) academic research on M&A seems to suggest that the majority of all such transactions have always been — and perhaps always *will* be — judged by posterity as failures, gross errors in judgment, etc. Given that observation, is it possible (and likely) that a nontrivial share of the most successful business leaders in every era of recorded human history were (are) actually so stupid, so imprudent, and/or so irrational that they are unable to accurately judge whether a high-stakes M&A opportunity is a good idea or not? If so, how could so many have (continually) risen to such lofty positions of authority in the first place? Alternately, is it perhaps more plausible to assume that the rational decision-making calculus that dictates which M&A opportunities are pursued vs. passed over includes some important variables that generally don’t make it into the historical record? If market power — e.g., even when limited to one narrow product or service sector — is in fact fungible, as I’ve suggested here and elsewhere, then it would be perfectly rational for corporate executives to believe that it might be employed with a reasonable chance of success to break into completely unrelated markets. Moreover, just because an M&A deal is ultimately judged to be bad, e.g., for shareholders et al., that doesn’t necessarily imply (much less entail) that it was equally bad for everybody.

P.S. A final, personal request to Russ: please, no more especially great/irresistibly compelling interviews, at least for a couple of weeks!

Apr 10 2010 at 3:56am

Comparison with Japan doesn’t make sense; Japan has much higher population density, the price to lay the last mile per consumer is much lower.

That indeed does not make much sense to me. As Russ mentioned, the incentive for the last mile depends on price *and* profitability. You would expect high profitability, and therefore high competition, in cities. I am not American; do you have that? If not, why not? There is no reason you shouldn’t!

In the countryside, the last mile might not be worth building; however, what if the customer himself decided to OWN the last mile? Wouldn’t that make more sense? He would decide on the profitability; he could reap the capital benefits for future sale of the house. Why doesn’t *that* work? Regulation?

I don’t know about other countries, but here in the Czech Republic the history was pretty funny; first, there were 10 years of monopoly, so that the one monopolistic company could build the infrastructure (they were required to build it everywhere). So we paid 10 years of inflated, monopolistic prices. Once the infrastructure was built, the laws requiring ‘open access’ were adopted.

In the meantime, prices for calls in the US were far lower; meanwhile *some* competition was allowed in mobile phones, with the result that nobody in the countryside wants a fixed line anymore.

To get back to the US: we were basically obliged (through monopoly prices) to subsidize the building of the infrastructure. Now it is built and we are happy to have reasonably fast internet. However, it seems wrong to conclude that the US should therefore have adopted a similar model. Remember ‘what is seen and what is not seen’: we could have done many other things with the money we would not have had to pay over those 10 years…

Apr 10 2010 at 4:24pm


I am normally just a lurker enjoying the podcast, but I just had to intervene and comment since the topic is in my professional area. Two myths need to be corrected here:

Myth #1: The most prohibitive cost factor for “open access,” i.e. getting the last mile of connectivity to new customers, is the cost of digging, etc. NOT SO. Securing permits and right-of-way access is the biggest problem. It is partly a property-rights issue and partly an issue of government interference.

I’ll give a personal example. I live in a relatively rural area across the river from an affluent county that has plenty of broadband access to the Internet. I have been pleading for years for the local cable provider, Cox Communications, to drop some fiber in the river to give our small community broadband access. Cox has repeatedly told me that dropping fiber in the river isn’t the issue. They could easily recoup those costs in terms of profits from folks like me. What stops Cox is that it would take several years and $$$ to secure all of the environmental and archaeological impact studies and permits. Now, maybe all of those expensive permits are indeed important and necessary. But the cost of government interference is significant. As the story turns out, Verizon used its existing right of way permits — secured years ago in the AT&T regulatory years — to provide me broadband over wireless just a year or so ago. It isn’t as good as cable or fiber to the home, but much better than dial-up. But, now I can hear Russ Roberts!

Myth #2: “Net Neutrality” is essential to promoting competition. NOT SO. The whole spat over Net Neutrality arose because BitTorrent was designed to take advantage of some deficiencies in the original TCP/IP protocols. Essentially, BitTorrent was getting favored access over older Internet protocols by “cheating” on the Internet standards. So Comcast, at first, was just trying to level the playing field by restricting BitTorrent. Now, perhaps Comcast should not have *completely* blocked BitTorrent — though I’m not convinced that they were really doing that. BitTorrent, as it turns out, is technically difficult AND expensive to block. As a result, Comcast was trying to accommodate customers with applications that were playing by the Internet “rules”. The problem is that the “rules” of the Internet are enforced by common consensus, NOT by government intervention. Perhaps the “rules” of the Internet will change, but should it be the Internet community that decides or the FCC?
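The “cheating” point above rests on a property of classic TCP congestion control: a bottleneck link is shared roughly equally *per flow*, not per user, so a client that opens many parallel connections (as BitTorrent does) captures an outsized share. A back-of-the-envelope sketch (my own simplified illustration with made-up flow counts; real TCP sharing also depends on round-trip times and loss patterns):

```python
def share_of_link(my_flows: int, other_flows: int) -> float:
    """Approximate a user's share of a bottleneck link, assuming
    TCP divides capacity roughly equally among active flows."""
    return my_flows / (my_flows + other_flows)

# One single-flow user competing with 9 other single-flow users
# gets about a tenth of the link...
print(share_of_link(1, 9))    # 0.1

# ...but if that user opens 30 parallel connections (a plausible
# BitTorrent swarm size, chosen here for illustration), the same
# per-flow sharing hands them roughly three quarters of the link.
print(share_of_link(30, 9))
```

This is why per-flow fairness alone cannot arbitrate between a browser with one connection and a swarm with dozens, and why ISPs reached for heavier-handed tools.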

Howard Jacobson
Apr 10 2010 at 6:29pm

I have been a technologist for most of the last 20 years and a user of broadband services from the earliest availability of Road Runner to mobile broadband to multiple OC-3s at work. I listened intently to the discussion about Open Access and Net Neutrality and offer the thought that the discussion’s focus on Open Access emphasized the wrong issue.

Open Access Is an Academic Concern

  • The definition of Open Access has been clarified well in earlier comments. Those clarifications make clear that Open Access concerns the right of any and all bandwidth providers to deliver their service over the last mile.
  • Concerns over Open Access could go away entirely if we changed our national paradigm for construction of the last mile. Today, real estate developers and municipalities push that cost on the utilities / cable companies because the latter are accustomed to installing the last mile and enjoy being monopolists. Suppose we just require the real estate developer or home builder to pay for the last mile installation and convey ownership to the homeowner and / or homeowner’s association and / or municipality? Eminent domain could address ownership of existing infrastructure, and I would argue that no compensation is required because the value of the last mile is in the monopoly power it conveys.
  • Concerns over Open Access presumably are really concerns over the cost and speed of bandwidth to the consumer. I pay $45 per month for the highest level of service from Road Runner. How much less could I really expect to pay if more providers had access to the last mile, and is a savings of $10 or $20 per month really worth all the cost of addressing Open Access issues?
  • I do not know a single residential consumer who believes that the bandwidth available to him / her is inadequate. What would we do with a gigabit of bandwidth to the home? My 10 Mb connection already allows me to watch HD video in real-time and download entire DVDs of content in a reasonable time.
  • The bandwidth constraint is on the upstream side of the Net, i.e., at the content provider / hosting company end of the connection. Upload speeds are much slower than download speeds over business cable and DSL connections, and completely full duplex connections at high speed (e.g., an OC-3) are enormously expensive. Hosting companies also throttle bandwidth to ensure equal access for all their hosting customers.

Net Neutrality Is a Genuine Concern for Consumers and Content Providers

Net Neutrality is a vastly more serious concern for me. I am far more concerned about having access to any and all content even if just at the existing available bandwidth speeds. The temptation to bandwidth providers to throttle download speeds for certain content types (e.g., video) or certain content (e.g., a competitor’s web site) poses far greater threats to free speech, the free exchange of ideas, and innovation by content providers such as Google, Facebook, Twitter, and many others.

Our regulatory focus should be on ensuring that the last mile monopolies do not use those monopolies to filter content. Content filtration should not be allowed directly (i.e., by blocking certain sites or IP addresses) or indirectly (i.e., by imposing fee premiums on certain content).

More on Open Source

Lastly, I hope that you will devote an entire podcast to a discussion of the economic and policy implications of Open Source software development. Oh, and one correction to a comment I think I heard on the podcast — Open Source software does not mean free software. Open Source refers only to the model by which the software is developed and the licensing regime for the source code.

Apr 13 2010 at 2:56pm

Howard Jacobson wrote:

Net Neutrality is a vastly more serious concern for me. I am far more concerned about having access to any and all content even if just at the existing available bandwidth speeds. The temptation to bandwidth providers to throttle download speeds for certain content types (e.g., video) or certain content (e.g., a competitor’s web site) poses far greater threats to free speech, the free exchange of ideas, and innovation by content providers such as Google, Facebook, Twitter, and many others.

If there is a monopoly, the incentive is to price discriminate: if people who watch videos value the bandwidth more than those who don’t, you sell an extra “package” to them.
(I’m not saying this would be a good thing, just what basic econ predicts.)
Anyway, if it hasn’t happened already, the “temptation” can’t be that strong.
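The price-discrimination incentive is easy to see with a toy calculation (all numbers are hypothetical, invented purely for illustration): a monopolist facing two customer types earns more with a basic plan plus a video add-on than with any single uniform price.

```python
# Assumed monthly willingness to pay (hypothetical figures):
# 60 light users value basic access at $30; 40 heavy video
# users value access at $70.
casual_wtp, video_wtp = 30, 70
n_casual, n_video = 60, 40

# Uniform pricing option A: price low and serve everyone.
profit_low = casual_wtp * (n_casual + n_video)        # 30 * 100 = 3000

# Uniform pricing option B: price high, serve only video users.
profit_high = video_wtp * n_video                     # 70 * 40 = 2800

# Discrimination: $30 basic plan for all, plus a $40 video
# "package" that only the heavy users buy.
profit_discrim = (casual_wtp * (n_casual + n_video)
                  + (video_wtp - casual_wtp) * n_video)  # 3000 + 1600 = 4600

print(profit_low, profit_high, profit_discrim)
```

Under these assumptions the discriminating monopolist clears $4600 versus $3000 or $2800 with a single price, which is exactly why the commenter expects segmented “packages” rather than outright blocking.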

I personally would like to see ISPs and portals censoring nudity and provocative speech. Why? Sooner or later people will demand that such filth be censored, and the alternative is for some FCC-like regulator to be created. You like listening to Howard Stern or some other trash? Too bad; you can’t go to a website or choose an ISP that won’t censor it; the law is the same for everybody. The Internet will be the next radio.

The podcast was pretty good, but there were a couple of econ errors, e.g. both guest and host seemed to suggest at several points that any policy, whatever it is, is a good thing so long as it creates more competition. That’s insane: is a law that says you can only serve 5 houses, or only employ 3 people, a good thing? It would create competition for sure.

If there are natural monopolies in Internet provision, then the appropriate thing is to either regulate them, or to have the city create (or directly contract out) the infrastructure. [1] You don’t want to create policies that increase the average cost, but that reduce the profit-maximizing price. Even if there is such a natural monopoly phenomenon, as the host points out this will be an open invitation to rent-seeking, so I hope at the very least this would be done at the municipal level. Not only is it politically sensible for that reason, but it is more economically sensible as well, since municipalities have more information about local average costs.

Apr 15 2010 at 4:12pm

> * I do not know a single residential consumer
> who believes that the bandwidth available to
> him / her is inadequate. What would we do with
> a gigabit of bandwidth to the home? My 10 Mb
> connection already allows me to watch HD video
> in real-time and download entire DVDs of
> content in a reasonable time.

I am a consumer who feels that residential service is vastly inadequate in both speed and price. Among those I know and work with, none consider any cable/telecom provider to be remotely satisfactory. It doesn’t matter who provides your service — Cox, Time Warner, Comcast, or any of the other big media conglomerates — they all behave in the same anti-consumer manner. There is no such thing as enough bandwidth; anyone who says otherwise works for a telecom or cable company. We can’t always imagine what innovations increased bandwidth will allow, but that does not mean we should stop investing in more capacity.

Apr 19 2010 at 6:18pm

(First, I should stress that I don’t mean this as a comment on Mr. Benkler personally at all, just general observations from the discussion.)

I was a little disappointed in the interviewee; his argument was generally not grounded in principles or a deep understanding of growth. While he was obviously quite knowledgeable about particular historical and contemporary facts, I think some of the deeper economic issues Russ brought up (albeit perhaps too timidly!) were entirely lost on him. He was using the right words… but the arguments were simply incoherent. Perhaps I should be easier on him as he is not a trained economist, but then perhaps he should not be suggesting policy recommendations. It makes sense that he was “baffled by the metaphor” about health insurance policy intervention.

His explanations of “net neutrality” and “open access” were almost evasive.

The claim that the iPad, an innovation itself, can hinder future innovation requires one to think he can anticipate the optimal trajectory of technological growth and implement it…

Also, I’m thankful Russ pushed a bit on the somewhat careless inferences from the international market comparison.

Apr 22 2010 at 6:08am

Tom Vest
Re: Seidenberg comments
(Posted April 9, 2010 4:35 AM):

You said, “Granted whenever a CEO flatly declares that something is just “not true” without even bothering to back up that declaration in any way, that assertion pretty much trumps any possible counter-argument — ipso facto, it must be so… end of argument!”

I put a link in my comment so that people could read the whole interview with Seidenberg. In it he said …

“So this FCC decided that speed of the network was the most important issue. So that’s all they measured.

“So they will say, if you go to Korea or you go to France, you can get a faster Internet connection. Okay? That could be true in some companies — in some countries. The facts are that, in the U.S., there is greater household penetration of access to the Internet than any country in Europe.

“In Japan, where everybody looks at Japan as being so far ahead, they may have faster speeds, but we have higher utilization of people using the Internet. So our view is, whenever you look at these issues, you have to be very careful to look at what the market wants, not what government says is the most important issue.”

Apr 22 2010 at 6:21am

Much of Benkler’s argument rests on the high cost of adding fiber to the home (FIOS) in order to return competition to the market. New technologies are coming on-line that could make telephone copper competitive with cable.

“Alcatel-Lucent has found a way to move data at 300Mbps over two copper lines, the company said on Wednesday. However, so far it is only in a lab environment — real products and services won’t show up until next year.”

Yes, this is in the future and predicting the future is hard and all that. However this is coming from a big player that is putting serious money behind it in order to make their product competitive.

Benkler said that in earlier competition, “A lot of their upgrade had to do with electronics, which was much cheaper.” This technology also has to do with electronics.

Tom Vest
Apr 22 2010 at 10:52am

Re: Speed’s comments about Seidenberg (and Benkler)

As a former long-time Tokyo resident, Japanese speaker, and international network builder/operator with direct, first-hand knowledge of the events leading up to and following the miraculous transformation that occurred in Japan between 2001-2002 (and broad comparable experience building IP networks in several other countries in Asia, Europe, and South America), I can assure you that I don’t take my cues (or even know exactly) “what government says is the most important issue.” Actually, I’d be quite interested to know what (the US, or any other) government says is most important, and how one comes about that knowledge…?

Your claims about European household penetration sound very iffy, and the comment about “utilization” in Japan seems almost intentionally ambiguous. Are you talking about absolute numbers or national shares, about utilization in terms of casual “users,” residential users (e.g., households x family size), subscribers, bits delivered, or something else altogether? Does the “higher utilization of people using the Internet” include utilization at work as well as at home, mobile as well as fixed access technologies? Granted, a higher percentage of Americans do not have passports and have never traveled abroad, but I suspect that a substantially larger-than-average share of EconTalk fans have had some firsthand experience using the Internet while abroad. For those of you who haven’t, the experience can be (e.g., in an ever-increasing number of places throughout Europe and Asia) more than a little disheartening, especially given the huge quantitative *and* qualitative lead that the US had in Internet access only a decade ago. To say the very least, that contrast no longer reflects very favorably on the state of US Internet access — despite the endless and predictable but quite understandable attempts by interested parties to muddle that fact.

Your comment about “what the market wants” is also very disheartening. Since “the market” has no way to clearly and forcefully express what it wants in the way of Internet access (short of moving abroad), I’m also very curious to know how one arrives at that insight? Given the structure of the Internet access market in the US, the suggestion that what we have now is even an approximate reflection of “what the market wants” is indefensible — it’s kind of like saying that a diet of rice-flavored water and bark is actually what the North Korean “market wants.”

I’ll hold my comments about the Benkler critique for a separate posting…

Apr 22 2010 at 12:47pm

Tom Vest
Re: Seidenberg comments
(Posted April 22, 2010 10:52 AM):

Those are not my claims. They are Seidenberg’s, quoted from the linked interview.

Interview with Ivan Seidenberg, Chairman and CEO, Verizon Communications.

Seidenberg’s claims should carry at least as much weight as Benkler’s and possibly more since he (Seidenberg) is spending real stockholder money.

I can’t match your rich experience in building and using overseas networks but I can add one datapoint with respect to bandwidth. My choices for internet connectivity are three: DSL from the phone company, cable from the “legacy” cable provider and cable from a second provider that started wiring the city two years ago. That second provider decided that demand for fiber speeds was not great enough to install fiber. Rather they are installing the same coax and offering the same bandwidth as the “legacy” provider.

Tom Vest
Apr 22 2010 at 3:54pm

Hi Speed,

I’m setting aside the long rebuttal I wrote to your earlier points contra Benkler, in part because I don’t want to belabor the adversarial dimensions of this matter any further, but mostly because a couple of your final points really get at the heart of the challenge — the choice — that we face.

The effort you put into drawing out some of the other points in the Seidenberg interview is appreciated; no doubt it will help other readers to form their own judgments about what, if anything, is wrong with the state of broadband in the US. I find it interesting that you feel that the credibility of Seidenberg’s assertions is actually enhanced by the fact that he has an overwhelming material interest in how this issue is perceived, in particular among another group of materially interested parties (i.e., Verizon stockholders). In my experience, when an assertion is made by a party with a known, undeniable interest in one particular outcome, and that assertion is predictable and consistent with the preferred outcome, most people tend to discount the materiality, if not the credibility, of the claim….

However, rather than pursuing that curiosity any further, or attempting to build a preemptive case for the (greater) disinterestedness of Benkler, let’s just stipulate that Seidenberg sent the appropriate message given his narrow fiduciary responsibility to Verizon shareholders. Let’s go even further and stipulate that Seidenberg is actually correct in believing that the broadband approach that he envisions/describes *will* maximize shareholder wealth, over and above all conceivable alternative broadband strategies — and that the Verizon stockholders who hear that message will rightly conclude that their respective private stakes in Verizon will grow faster under that plan than under any other possible arrangement.

The question is: would/should maximizing the wealth of that small minority of US citizens who are also Verizon shareholders be sufficient to justify any broadband strategy, no matter what wealth-preempting and immiserating impacts that strategy imposes on the overwhelming majority of US citizens who are not Verizon stockholders? To those who would demand “concrete proof” of harm before even considering the possibility that Verizon’s private interests might not be consistent with the balance of interests (material, political, et al.) of the overwhelming majority of stakeholders in the US economy, I would recommend looking again at the international comparison data, and the fact that the US went from being the world’s absolute undisputed leader in Internet access to being, at best, in the top 50-75% in just a decade.

[I suspect that at this point some loyal EconTalk fans may be reminded of the oft-repeated joke about the two wolves and the sheep debating about what’s for dinner. IMO that’s got to be the most ironic joke in human history… in many contexts it seems like two peasants debating with Marie Antoinette about how to divvy up some cake would be more fitting]

Be honest: is it Benkler’s particular methodology/data/interpretation that’s not persuasive, or is the very concept of a harmful private (not government sanctioned) monopoly fundamentally at odds with your understanding of the critical functioning structure that defines how the world works?

Apr 23 2010 at 9:20am

Tom Vest
April 22, 2010 3:54 PM

Benkler’s point 1: Because the cost of building out a state-of-the-art, high-performance, high-speed digital infrastructure to private residences (which he claims requires fiber to the home) is so high, no private for-profit entity will do it. Therefore the government must mandate installation and how it will be financed.

Benkler’s point 2: Because the cost of building out a state-of-the-art, high-performance, high-speed digital infrastructure to private residences (which he claims requires fiber to the home) is so high, no private for-profit entity will build second or third competing conduits (cable or fiber). Therefore the owner of existing high speed conduits to the home (cable or fiber) should be required to carry any operator’s (ISP’s) traffic at regulated rates.

Benkler’s point 3: The US high speed internet connection to the home is outdated and inferior to many (most?) other nations’, which puts the US at a competitive disadvantage. Therefore government must step in and mandate a solution.

I refute his arguments with five examples.

1. Bandwidth may not be the best measure of quality or performance. (Seidenberg)
2. Verizon is installing, using their own money, fiber to the home. (Seidenberg)
3. “Google Fiber is a project to build an experimental broadband internet network in the United States in a community of Google’s choice, following a selection process.” (Wikipedia)
4. A local provider is installing cable (underground) in direct competition with the legacy provider. (personal experience)
5. New technology is under development that will boost copper wire (currently used by telephone companies to provide DSL service) speeds by orders of magnitude using (relatively) inexpensive electronics rather than installing fiber. (Alcatel-Lucent, see my comment above)

The operative joke here is not about sheep and wolves but about the chicken and the pig at breakfast. The chicken is involved but the pig is committed. Benkler is involved but Seidenberg, Google, Alcatel-Lucent and others are committed.

Tom Vest
Apr 23 2010 at 2:17pm

Re: Speed’s “refutations”

One could spend days enumerating the rich variety of ways in which your assertions are factually incorrect, or alternately misleading and unable to support the inference you’re attempting to draw. I don’t have days, but here’s a few minutes’ worth:

1, 2. Your attempt to reframe Benkler’s concerns about the cost of deploying fiber to the home is deceptive and utterly absurd. First of all, your interpretation of his remarks could only be accurate if Benkler believed (a) that Verizon’s FIOS actually does not exist, and (b) that no post-PSTN/privatized entity anywhere else on Earth has ever deployed fiber to the home. Is that what you took away from the interview? Second, even without listening again to the full podcast, the summary text that Russ provides above (@7:34) clearly spells out Benkler’s true position, which is that the single greatest advantage that Verizon enjoys — i.e., the existing, ubiquitous, nation-spanning right-of-way that it came to possess by acquiring other regional carriers, which in turn had basically inherited much of it cost-free from the original US private monopoly decades earlier — accounts for 80-85% of the *total* cost of deploying end-to-end nationwide fiber-to-the-home. To be fair, based on Benkler’s estimate and the fact that even Verizon has deployed FIOS only to a minority of households in an even smaller minority of communities, one could credit your assertion as being about 15% right, in an ironic sort of way — but it’s 85% nonsense.

3. Yes, Google is planning to deploy a metro-scale FTTH facilities platform in a single city, largely for the purpose of gathering empirical data about the actual/current costs of deploying such a platform. In terms of market cap, Google is the world’s largest media and communications company by a fair margin (only AT&T is even in the ballpark). If your litmus test for market openness is that it’s possible for the world’s largest company to attempt to contest a tiny portion of the market, then I guess you got me there — but try thinking about what that might imply for some other product or service that you actually care about.

4. I’ve been looking into the state of cable platform competition since you made that claim. Based on what I found, e.g., in the 2008 GAO Media Ownership survey, you must live in the most competition-blessed community in the US. Of the 20 US metro markets that the GAO examined, only one enjoyed significant cable system competition at the household level. According to a 2005 American Cable Association statement, the average independent cable operator serves about 1000 households, and the entire indie cable industry only reaches approximately 7-8 million subscribers total (many of whom are in rural areas and do not have a second cable option). I have no doubt that it’s nice to have local cable competition, but at historical growth rates the expansion of cable platform competition will continue to be irrelevant to the overwhelming majority of Americans for many, many decades to come.

5. On the existence of copper platform alternatives, and the comparability of copper and coax-based transmission to optical transmission:

On the existence of copper – Perhaps you were not aware of the fact that Verizon has a policy of ripping out the legacy twisted-pair copper lines to every customer premises where it installs FIOS… or perhaps you were aware, and that’s why your original statement was framed in such a confusing/deceptive way (e.g., ‘fiber is expensive, but copper might be competitive with *cable*’). So you’re saying that some households might enjoy more competition in the future thanks to new technology — just not any of the Verizon FIOS households?! This doesn’t even merit 15% credit…

On the significance of the new technology – Read a little more and you may learn that the technology you identified has the same distance/decay characteristics as DSL (so little or no additional service or competition in rural areas), and is twice as wire-intensive as current DSL (which halves its max theoretical capacity). Read further still and you’ll discover that when (if) the new system is actually deployed, it just might bring US DSL levels up to the level that was commonplace around Japan about 5-6 years ago.

On the very idea of fiber vs. cable (or anything else) “competition” – Why do you think that it matters that you have two competing cable providers? After all, anybody who’s uncomfortable with their local cable monopoly can just go out and rent a video cassette, right? Or worst case, they can just purchase a ViewMaster and trade reels by mail with others who have access to the ViewMaster platform. Ditto with voice service competition — why all the bother when anyone can use a walkie talkie, or CB radio, or two cups on a string? What makes those comparisons absurd is the fact that, beyond a certain range of variance in what we might call the capacity+features/cost ratio, even two otherwise “similar” products and services cease to be comparable or substitutable — which is another way of saying that they are no longer part of the same market.

Today, it’s (just barely) possible to consider FTTH and cable Internet and DSL (and maybe even “wireless broadband”) to be more-or-less “competing” services. But the existence of that overlapping market is just a transient by-product of the fact that today the “average basket” of online content and services that the average US consumer demands has technical properties (packet loss, latency, jitter, etc.) that fall within the range that can generally be supported by all four platforms. However, that “average basket” gets heavier and more demanding every year. It’s already so heavy at this point that wireless Internet is starting to drop out of the shared market — and short of a miracle, it’s not likely to ever matter again as a competitive *alternative* to terrestrial broadband. It could take a few years, maybe even a decade, but eventually all copper and coax-based transport will fall by the wayside too. In the end, the de facto infinite expandability of optical fiber-based networks will render all of the competing technologies that we know about today completely irrelevant, at least with respect to competition questions; eventually the idea that they might represent competitive alternatives to the fiber network will seem just as silly as the idea of a ViewMaster competing with HD cable services does today.
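The "average basket" argument can be made concrete with a small sketch. Every platform capacity, latency, and basket threshold below is a hypothetical placeholder, chosen only to illustrate how a shared market shrinks as the basket gets heavier:

```python
# Illustration of the "average basket" substitutability argument:
# platforms belong to the same market only while each can carry the
# basket of services consumers demand. All numbers are hypothetical.

PLATFORMS = {
    "fiber":    {"mbps": 1000, "latency_ms": 5},
    "cable":    {"mbps": 150,  "latency_ms": 20},
    "dsl":      {"mbps": 10,   "latency_ms": 40},
    "wireless": {"mbps": 5,    "latency_ms": 120},
}

def market(basket):
    """Platforms able to carry the basket -- the de facto shared market."""
    return sorted(
        name for name, p in PLATFORMS.items()
        if p["mbps"] >= basket["min_mbps"]
        and p["latency_ms"] <= basket["max_latency_ms"]
    )

today = {"min_mbps": 4, "max_latency_ms": 100}
heavier = {"min_mbps": 500, "max_latency_ms": 30}

print(market(today))    # ['cable', 'dsl', 'fiber'] -- wireless drops out
print(market(heavier))  # ['fiber'] -- copper and coax fall by the wayside
```

The same code with a heavier basket leaves only fiber in the market, which is the trajectory the comment predicts.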

And so in the end, US citizens will *still* be facing the same problem that they face today: one or perhaps two incumbent “last mile” facilities owners with overwhelming, unassailable market power. Address the problem now, or wait until a day when the US domestic economy and international competitiveness are even closer to being 100% contingent on the cost/quality/capacity of the national network facilities platform; those are the only two realistic options.

Your closing metaphor is so perfect, I won’t even try to match it. You should try it out on others; ask them to fill in the blank for “__________ like a pig.” I bet you’ll find that the majority of people will converge on the same missing word, but I don’t think it will be “committed.”


Apr 25 2010 at 7:53pm

Tom Vest
April 23, 2010 2:17 PM

In the Benkler podcast, he sums up his position saying, “If there’s an intervention that can set us up to have markets of 3-5 competitors as opposed to markets of 1-2, chances are even though there will be companies that will make money, consumers will be better off. If you are talking first principles, an intervention however potentially distorted or limited or imperfect that gets us to 3-5 competitors will improve consumer welfare over one that has a very clear and most likely trajectory to mostly monopoly and to some extent duopoly arguments.” (40:13)

I have presented a case where there are currently three competitors (my city – two cable, one telephone). There is no reason to believe that my city is unique or will not become the norm over the next few years. I also presented a technology that will provide copper wire (twisted pair) data rates that exceed current cable (coax) offerings. There is no reason to believe that this is not the “trajectory” (a method of predicting the future that Benkler relies on) we (the US) will follow over the next decade.

Benkler’s overarching concern expressed at the beginning of the podcast is that while the US had the “best” internet infrastructure a decade ago, the regulatory environments of Japan and others led to those countries’ internet infrastructure passing the US, and that this lag will prevent US technologists from making the many quick cheap innovations that lead to great leaps forward. To allay this fear I present you with some of the remarkable successes of the last decade: Skype, Facebook, Twitter, Google, YouTube, Netflix, Android, iPhone, Foursquare-Loopt-Gowalla, Gmail, Google Docs, Hulu, Pandora, Spotify, TripIt, Kindle and Automattic (WordPress). All but Spotify are US companies.

And finally, I give you a list of the world’s ten largest data centers (technically 11) – eight (nine) in the US and one of the non-US locations is owned by a US company.

I will not be posting any further on this thread but I will leave you with this bit from one of Benkler’s earlier works (52 Fed. Comm. L.J. 561 2000).

“The point to understand is that if all consumers whose cable system is owned by AOL are offered broadband Internet access only by AOL, these consumers will end up with something that is more like a five hundred channel cable system than a peer-to-peer network of users. AOL is a company whose business model depends on capturing consumers who do not know what “getting on the internet” means, and then persuading these consumers to pay a premium for access that is not to the Internet primarily but to AOL proprietary content. It is immensely successful in this business model. With its new cable systems and Time-Warner content, it could design a system where most default choices lead consumers to stay within the AOL-Time-Warner system, rather than to venture outside of it.”
(page 574)

Didn’t happen.

Tom Vest
Apr 26 2010 at 12:00pm

You’re right about the outcome, but as before you’re just plain wrong on the causation and relevance.

Obviously AOL didn’t cause this to happen in 2000. In part this was because AOL could not do it, thanks to conditions imposed by the FTC on AOL’s soon-to-be-consummated acquisition of Time Warner, which had a controlling stake in the Time Warner Cable network…

And in part this was because of the change of Administrations, which coincided with a sudden loss of interest in enforcing all kinds of regulations in every industry, including the kind mentioned several times in the past (e.g., the “sharing and resale” requirements). In case you hadn’t noticed, AOL has all but disappeared in the intervening years, precisely because it did not inherit the kind of nationwide end-to-end right-of-way that *might* have made the construction of a completely new facilities platform of any kind commercially viable. In case you hadn’t noticed, no other consumer Internet access provider of any size, apart from the incumbent telcos and MSOs, has emerged in the years since then, for the same reason: the ideology that came to prevail, esp. after 2000, basically pre-empted the pro-competitive rules that had created the enabling conditions for the Internet both to grow beyond its original university incubators — and also to avoid the kind of scary scenario that Benkler described back in 2000. Now that all traces of that earlier pro-competitive Internet access environment have been eliminated in the US, Benkler’s warning is once again apropos.

One has to wonder whether such a pattern of persistent and aggressive misreading is a sign of genuine misunderstanding, or rather is just a cynical strategy designed to mislead the casual observer. Has it truly not occurred to you that one of the primary motivations that inspires people to write speculatively about possible unhappy futures is to help inform current decision-making, and ideally to influence decision-making in such a way as to reduce the possibility of ending up in the envisioned unhappy future? Unfortunately for those who write in this vein, the more persuasive one is, the less likely one’s speculations are to be borne out as history unfolds. In general that’s a small price to pay, even assuming such writings are never likely to play anything more than a marginal contributing role to that end.

Besides, since people like you claimed that social security would inevitably lead to communism, I’m not sure why anyone has been listening for the last half-century…


Podcast Episode Highlights
0:36Intro. [Recording date: March 24, 2010.] What is the most crucial issue facing the future of the Internet? Or two or three? Big question. Different questions with regard to the future of the Internet. Generally, globally, future of, particularly internet communications here in the United States. Overall: critical question is whether in the transition to the next generation of connectivity the net will continue to be as open, creative, distributed in terms of institutional, cultural, economic, entrepreneurial forms as it has been in the last 15 or more years; or whether the process of maturation will end up normalizing to a slower-innovation, more structured and concentrated cultural information-production system. What is the threat to that openness? Would look at several technological potential change points in different places: potential legal change points and as a result of those, organizational change points. So, there is a lot to work through. Basic driver of what makes the net so innovative, creative, and fast-moving is the low cost of effective action: experimentation, adaptation, failure--very cheap. Model of innovation is not the long-term R&D lab in three organizations that are the major players and which one of them wins, but rather tens of thousands, millions, of experiments that are very cheap to try out and cheap to prototype and then implement and then fail and try again. Rapid evolutionary process rather than a well-planned, engineered process of innovation. Example: social networking sites--MySpace, Facebook, Twitter. Friendster, first one, SixApart; now at least the second, if not the third generation of social networking. We didn't know in advance which would win. The ones that were successful were successful because people liked them. Relatively inexpensive. Who the winner is would be the relatively surprising story. So if you were to look at MySpace, for example, there it was--massive corporate buyer and backer, should have succeeded.
Facebook, and in voice-over-IP, Skype, came from nowhere--but it was the successful one. Google: in 1999-2000 you'd have said the search market was mature--Yahoo, AltaVista, Lycos, HotBot--we know the players, and improvement will come from today's equivalent of Bell Labs. Instead it came from Google. We don't even necessarily know what the next innovation is. If somebody in 1999 had said 'Build me a massive data storage system that would be available to a hundred million users around the world, would be capable of terabytes of data 24 hours a day, 7 days a week, and robust to attacks from people trying to bring it down,' you would say 'Give me a billion bucks and ten years.' You wouldn't say, give me Shawn Fanning in a dorm room, building peer-to-peer (p-to-p) file sharing systems. You wouldn't know what file sharing systems are or that they would be the solution space. You wouldn't know that Kazaa would become the architecture of the main voice-over-IP international application, Skype. What we assumed in the 20th century--relatively well-capitalized major players providing much of the impetus of innovation, relatively well-funded government research providing basic science, some Schumpeterian intersection between small firms and large firms, relatively peripheral social activity that can't really achieve much of anything because it can't get to the level of capitalization necessary to move from shooting the breeze around the coffee table to a pilot that somebody would be willing to move on and experiment on. The range of experiments was much smaller. Cost of failure much higher. System had to be much more planned and much less evolutionary. That's not where we've been for the last 15 years.
7:34Except for one thing. There is a capital part of this, which is broadband. If you go back to 1999, ask to put up a service that has billions of photographs and videos, it would have been much more difficult than it is today. Somebody did invest in that hardware--incredible explosion of photos and videos that are much higher bandwidth-intensive. Didn't that require some serious capital and investment? Again, there are different parts of the networks with different kinds of investment and different potential risks of undermining the evolutionary innovation dynamic. First, there is the transition from dial-up to broadband. There are certainly things that VocalTec didn't have in the 1990s when they did the first pc-to-pc relatively successful voice-over-IP application. But most people were still on dialup, hard to discover people online. Whereas Skype came along after there was broadband. Question of capital investment in high speed, next generation connectivity--massive study I did for the Federal Communications Commission (FCC) as part of the national broadband plan. To some extent, there the major question on the table is whether or not the necessity of investing large amounts of capital in the last mile can be translated into control over innovation practices higher up in the stack, where levels of capital investment are lower. What is meant by "the last mile"? Let's not forget later on to come to cloud computing and the potential risks that have to do with large-scale capital investment. In order to get high speed connectivity to every home, you need a wire or a cable connecting to the home. That means digging a trench, putting an actual duct, pulling a fiber through that duct, making a hole in the wall, wiring it to the home. Enormously expensive--80-85% of the cost of that is paying for people to dig the holes. The electronics are relatively cheap. 
In the first generation transition, you already had two different kinds of incumbent local monopolies--the telephone monopolies and the cable monopolies who already had made the holes. A lot of their upgrade had to do with electronics, which was much cheaper. The theory was that because they have relatively similar costs, and because of what is called "convergence"--everything is now voice, video, etc.--you have these two companies who have by historical accident dug the trenches and pulled the wires; relatively similar costs and they could compete with each other. This was the foundation of competition. Turned out that cable was a little cheaper earlier on; DSL was able to catch up; but overall it worked reasonably well for the first 10 years or so until we hit the point that in order to really increase speeds you had to move to fiber on the telephone wire side, as opposed to cable which can still upgrade electronics. Getting a big mismatch in the relative costs--going to fiber to the home costs 15-25 times as much as going to similar speeds over cable today. Getting an imbalance in competition there. In the United States, really at best have competition between two--highly imperfect market. Three, four, five better than two.
13:39What role does satellite play in this? No role. Was a false hope; has not played a significant role anywhere. Originally when people were thinking about this problem, there was a sense that there were multiple pathways--cable, telephone, broadband over the electric power lines, satellite, terrestrial wireless--what we now know as wimax though at the time people weren't talking like that--competition. Everything except for cable and telephone, maybe now U.S. investment in fiber, turned out to be not the case. Satellite: problem of how to communicate upstream as opposed to downstream was too hard. Massive broadcast system; only plays a role where there are no other alternatives; high cost and relatively low speed. Terrestrial wireless--that is to say, wireless that is not from satellites--is a good complement; plays a role in places that are very expensive to wire--places in Africa, countries with low density. Amtrak trains between Washington and Boston--see people using those networks, wireless cards on the train. Two ways to look at wireless. Critical complement to wired connections. Connectivity is becoming ubiquitous. Cannot do that without wireless. Main question: is wireless close enough to what you can do over wired connections that it can be a third competitor with high speed internet to the home. Not generally considered to be plausible. Upgrade path of next generation: wires, look at Docsis 3.0 today; look at Japan as being 2-3 years ahead of the other places--cable is 160 megabits per second from Jcomm; fiber is 1 gigabit per second from K-opticom and KDDI. Plans for 4G wireless when it gets there are simply not the same. Not offering the same product. One is offering, by today's standards, very high speed but by tomorrow's standards very low speed; ubiquitous availability. The other is providing unfathomable speeds by today's terms, without mobility. What you want is a system in which the two of these complement each other.
Basic model, other countries: you get fiber or high speed cable all the way to the home in as competitive a market structure as you can--which goes to the question of bottleneck control over the open innovation system--and you complement that with as competitive as possible a market in wireless capability. Complements, not substitutes. What's the issue with the last mile; why worried? What threatens it? Two distinct questions: net neutrality, focused on in the United States; and the question of open access, more focused on by other countries. Two complementary regulatory efforts to avoid the same basic problem. The basic problem is this: If you have a relatively weakly-competitive, duopolistic market in the high-capital, high-to-replicate portions of the network, you risk the owners of that bottleneck leveraging up to the lower-cost levels and differentiating between different innovators and different sources of cultural information and production, to the detriment of less-capitalized experiments and the benefit of the more-capitalized experiments. Puts you in a more 20th-century model of innovation as opposed to a 21st-century model of innovation with very low cost, rapid-prototyping, failure, etc. One obvious way to avoid that is to have competition. If you have competition in the wired connection to the home, the most expensive portion of the network, then market discipline avoids ways that undermine the value that users see from the net; get less of a risk of discrimination harmful to innovation between different applications. That's what open access, as a cluster of policies, was intended to do initially in the United States in the mid-1990s up until 2002 or so--everywhere else in the world, over the course of the last decade or so. Many countries have both telephone companies and cable companies. Keep hearing that the United States is unique in having cable companies--not unique, but we have a lot. So do other of the best-performing countries. 
In addition to that you have a regulatory system that says: here's the set of core, expensive high-cost facilities that are hard to replicate, so the price of entry into the basic carriage market is so high that you only get one or two players. Instead what you had, when it was copper wire and only copper wire, unbundling and bitstream: regulations that said you--someone who wants to provide Internet--this was AT&T before they were bought by SBC in the United States--would come in and compete locally, provide the electronics but be able to buy the most expensive part, the copper loop, at a regulated rate and compete with whoever it was--SBC or Verizon in the local loop. That's what happened in Finland--exactly the trajectory that the United States was going on. AT&T was trying to do the same thing in the early 2000s after it lost the regulatory battle, was bought by SBC and you didn't have the third player in these markets. MCI bought at the same time by Verizon, no option for entry for these long distance players into the local markets.
23:02First-line defense against control over the last mile leveraging up and making innovation and experimentation in the network itself more expensive is competition. The study we did looking at a lot of countries suggested that the most successful countries in terms of speed and prices and as a consequence penetration had some competition between cable and incumbent telephone companies as well as 1-3 entrants, some of whom were entrepreneurial entrants like Softbank in Japan, Free in France, etc. Some were incumbents. What's the policy implication for that? For the United States? Yes. Competition is good, so what's stopping competition now that you think we need a solution for? We need to incorporate some model of open access in the next generation transition to the local loop in the United States. Cautious about saying that. The next generation is not about copper loops; not about translating 1-to-1. New networks require different solutions. Ofcom study from 3-4 months ago looking at how you translate unbundling from a copper wire to a particular architecture of fiber networks, strategies with advantages and disadvantages. Netherlands just implemented a general open-access requirement across all networks, but then said that's what you need to do if you don't cooperate, but let's sit down with the other companies--venture being built together; set rates out of your business plan. Lost here. Think about the United States. In practical terms for the United States, explain open access and unbundling. Unbundling is one particular application of open access. Open access is the family of solutions. As a practical matter it would mean a fairly long study process of what the most appropriate rules would be to require whoever owns a trench or a duct to offer it under non-discriminatory terms to anyone else who wants to provide service. That's net neutrality? No. Net neutrality is higher up in the stack. Net neutrality stacks on top of open access.
Open access means that you the consumer can actually buy your internet service from more than the companies that own the wires. Net neutrality means, irrespective of whether that's true or not, whoever sells you your internet service can't discriminate between applications or between data streams of similar type or between packets.
28:53Russ: As a private property guy, how do you square that--basically converting the past investments that innovators have made into a utility, into the equivalent of the grid; saying you have to give access to all commerce at a fixed price--which I assume would be regulated--that's what you are talking about with making a careful study? Benkler: I'm saying that just as we did with telephone companies for a century, we would have private investment in utility-like networks that would then have to sell capacity at regulated rates. That's a controversial idea. On a philosophical level: as an outsider who doesn't know the details of packets and copper, see two people on different sides of the issue. You and colleagues, who think that we have to increase the regulatory environment to make sure that bad things don't happen; and equally intelligent people on the other side who say that's a horrible mistake that will lead to regulatory capture, stifle innovation. Both groups claim that unless their preferred policy outcome is put in place, the future of the Internet is in trouble. Both sides tend to have a lot of backing financially by players who have a stake in this. What are those of us not in the trenches to think about and want? Advantages of being on a podcast with an audience of people who are already intelligent and able to think through issues. No magic wand to wave or a matter of trust and belief. It's a question of reading through the evidence. What we tried to do in our study was provide first of all a broad literature review. The baseline position in the United States for the last five years or so has been almost generally that open access is a disproven theory. It turns out that the evidence in favor of that is very ambiguous. You have to read analyses in both directions and see which are more powerful. A lot of industry-funded work. 
One of the things people keep saying is if you have open access you'll undermine investment; or if you have open access you will improve investment. We found that over half of the studies were industry funded, equally shared between those funded on one side or the other, each finding in the direction expected. Then have to go and look: what studies aren't industry funded? What are other countries seeing? If the position that open access doesn't work were correct, then the United States was poised in the best possible position in 2001. We were the ones that had cable in more than 50% of the market, had a regulatory environment that allowed every company to control its own infrastructure; we had existing infrastructure; we were the leaders and we had the regulatory environment that built on existing investments and didn't require access. By all of the predictions of people who said open access would ruin the Internet, a decade later we should have been further ahead of everyone on speed, prices, and penetration. Instead we fell from fourth in penetration to 15th; fell from first in prices to maybe 18th or 19th--that has to count for something. But a lot of people dispute those findings. They say there are a lot of reasons those prices are distorted; doesn't take account of the investments made publicly that distorted the prices. Doesn't mean you can't find the truth; but it seems this is a very difficult issue. Broadband access in the United States is in the 65-70% range; we are a massively large geographic entity; large rural populations; always going to be easy to say that there's something special about the United States. Just as an outsider, applied economist, tend to fall back on first principles. Worry is that usually when government gets deeply involved in something, at least in the United States, it doesn't serve the customer. It serves the corporation.
Worried that no matter which solution the FCC imposes--which right now looks like a fight between different corporations--it looks like some corporation is going to win and not the consumer. The consumer rarely wins. Current health care debate--a lot of Americans think the insurance companies got their comeuppance now that we have the insurance reform; but I guarantee you that the insurance companies are going to get really rich from health care reform. Reassure me that a heavier role for the FCC is going to serve me rather than Google, Verizon, Comcast, etc.
36:19Baffled by the metaphor. Is the theory that if the insurance companies end up getting rich from health care reform it can't also be the case that consumers will get better health insurance, better coverage--that there's a win-lose situation between companies and consumers? It's entirely possible that with the right institutional framework you'll set up a competitive dynamic in which some companies would lose--monopoly is great if you are the monopolist. The question is whether the regulator, under the influence of the corporations, is going to set up a competitive market. Why would they? They rarely do. Everything is relative to the baseline. At the moment, we have a highly competitive market, driven by a regulator that made a set of choices in 2002-2003 for which there wasn't any evidence--because there was no evidence at the time--but which were at least plausible if you looked at Canada and the United States, which had very similar market structures and could plausibly support the decisions the FCC made in 2001 and early 2002. Baseline is already one in which you have very strong incumbents spending an enormous amount of money on lobbying but also on research to show that this works. How much worse could you do by creating a system that could have, instead of 1-2 players, 3-5 players? We are currently, based on the best acknowledged non-refuted evidence, on a trajectory on which 75-85% of the country will have next generation speeds under monopoly--under cable--because of the different costs of fiber to the home versus cable upgrade and because fiber upgrades otherwise lead only to the node, which is DSL. Essentially we are moving to monopoly. Completely buy that regulators are not pristine public servants; particularly difficult in the United States, which has such a complicated relationship with civil servants. We tend to call them bureaucrats instead of civil servants and we tend to think of the positions as transitional as opposed to lifetime careers.
Different countries have different sociological structures to their regulatory environments, which is not about first principles. These are all empirical questions because the level of variation is very high. If there's an intervention that can set us up to have markets of 3-5 competitors as opposed to 1-2, chances are even though there will be companies that will make money, consumers will be better off. If you are talking first principles, an intervention however distorted or limited that gets us to 3-5 competitors will improve consumer welfare over one that has a very clear trajectory to mostly monopoly or to some extent duopoly arguments. Most of the time--all the time in technology markets in my [Russ's] lifetime--the great monopolist who threatened to ruin the world turned out not to be much of a monopolist after all. It was IBM in the 1960s-1970s who was going to destroy the world; then it was Microsoft; now it's Google; yet somehow competition always comes from an unexpected source.
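Benkler's claim that moving from 1-2 to 3-5 competitors improves consumer welfare matches the textbook Cournot result. A minimal sketch (my illustration, not a model from the podcast; the demand and cost parameters are arbitrary):

```python
# Symmetric Cournot competition with linear inverse demand P = a - Q and
# constant marginal cost c: each of n firms produces (a - c)/(n + 1), so
# the equilibrium price is (a + n*c)/(n + 1), falling toward c as n grows.

def cournot_price(n: int, a: float = 100.0, c: float = 20.0) -> float:
    """Equilibrium price with n symmetric Cournot competitors."""
    return (a + n * c) / (n + 1)

for n in (1, 2, 3, 5):
    print(n, round(cournot_price(n), 1))
# monopoly (n=1) prices at 60.0; five competitors bring price down to ~33.3
```

The firms still earn positive profit at every n here, which is consistent with Benkler's point that companies can make money while consumers are nonetheless better off with more competitors.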
41:39 Would like to push back just a little bit. Do you see telephone as having changed its market structure at the local level independent of regulation, or for that matter cable? The phone is complicated because we had a highly regulated market, so to say that we have more choice now because we broke it up--that worked, but we then also tried to steer it from the top down. Rather than letting some competition emerge, we made it very difficult. The cell phone market, which is still highly regulated, is pretty competitive; it's got some problems. More pleasant life than 25 years ago. Hard to separate out how much of that is basic technology and how much is in the nature of things. For any given market, whether any given level of concentration is temporary and likely to be superseded depends on the relative stability of the technology and the relative cost of entry. And the profitability of the incumbent, which encourages entrants to take it away. That really assumes away some costs that just don't get much cheaper with technology, like labor costs and trenching costs. This works when you are talking about things like electronics. If you look at efforts to make electricity markets more efficient, you can make some parts of the market function more like markets, but that requires some kind of regulation of the transmission side because it's so expensive to replicate the last mile. No simple switch--either regulation or market. The capital costs of any given component, and the degree to which they are susceptible to technological change, are what set your conditions. In the late 1990s-early 2000s, people believed the technological constraint on the ability to compete in delivering the last mile of connectivity had gone away, because cable, telephone, etc. were there.
When we look back ten years, with the experience from the United States and elsewhere in the world, that technological assumption turned out to be wrong: there was no easy way around the fact that you simply needed to pay human beings to dig holes, a major cost that you couldn't recover in any time frame that would let people invest. Fiber example based on the Dutch market.
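The cost-recovery point can be made concrete with a back-of-the-envelope payback calculation. This is illustrative only; the per-home cost, monthly margin, and take-rate figures below are assumptions, not numbers from the episode:

```python
def payback_years(capex_per_home, monthly_margin, take_rate):
    """Years to recover the per-home build cost (trenching, fiber)
    from the monthly gross margin of households that subscribe."""
    annual_margin = 12 * monthly_margin * take_rate
    return capex_per_home / annual_margin

# Assumed figures: $2,500 per home passed, $30/month gross margin,
# and 40% of homes passed actually subscribing.
years = payback_years(2500, 30, 0.4)
print(round(years, 1))  # → 17.4
```

A payback measured in decades, rather than the few years typical of electronics, is why the "dig holes" cost doesn't yield to the usual technology-driven entry dynamic.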
45:48 Sharing and open source. Cloud computing is one source of threat; the handheld is another. The main concern is that you'll get very high capital-cost infrastructure in the network that on the one hand may make it very cheap to scale up capacity, and in that regard may increase this trend. But on the other hand, if there are only two or three companies at most without whose services you can't set up a web-based interface, and those companies start leveraging that position, that is one long-term risk. So much of what made the net creative was the general-purpose PC and its open architecture. The handheld based on the wireless network--the iPhone--is coming to occupy much of computation, moving toward more controlled infrastructure. Could be a false risk; so many applications, but not quite the internet model--more controlled and proprietary. If that or the small PC comes to displace the home-based computer. Isn't that great? Russ a Kindle owner, first and second generation. Cheaper, better. iPad coming soon; will put pressure on the Kindle; the Sony Reader also will try. Apple's really good--they might dominate for 20-30 years. Isn't that good? There's a Kindle app now for the iPad. Assuming that the licensing model will be permitted. Up in the stack. Royalties, licensing constrained by content owners. What is worrisome--because the Apple iPad is not as open as the cloud? As the PC. That's a market response. That assumes the market response incorporates all externalities. It doesn't. Classic externality: the consumer prefers the fantastic interface of the iPhone to long-term innovation. You wouldn't anticipate consumers making buying decisions today that incorporate the long-term value of a more innovative platform. No reason for them to do it.
50:31 Wikipedia and Linux--new form of cooperation, crowd-sourcing. Have we seen the best of it? Is there more to come? Peer production, distributed production, distributed innovation. Doubt we have seen the best of it. A little bit of software and a little bit of Wikipedia--sounds like it has limited effect. See lots of people making lots of money working in traditional models--will that be threatened, enhanced? In some specific industries you'll see displacement by peer production systems. Eleven years ago, Information Rules opened with Encarta's threat to Britannica--the major change was Encarta challenging Britannica. Clearly there are places for straight competition, with the better service replacing the other. Weblogs--Jason Calacanis tried to pay top Digg users to move to his sites, and Digg is still ahead. Web server software--Apache has withstood Microsoft for 15 years, through two busts and two booms. Clearly an important component of the information economy is continuously dependent on social production. Most scripting today is PHP, not ActiveX. Proprietary software on the desktop has succeeded in fending off free software, but not in the network. Wikipedia. Partly competition and partly complements. Major debate today about the future of journalism. Some kinds of journalism face competition from distributed social production. A lot is individual bloggers. What you get instead is a media environment with a greater diversity of styles of production. Global newspapers that have many more advertising imprints. Still have hyper-local papers; mid-level papers also have web versions, some doing well, some hurt. At the same time, intermediate-level entities that are online only, that put some of their costs on contributors or in-house and rely on some level of advertising or merchandising to support them. Seeing not a complete displacement but a shifting of the set of strategies available.
Blurring the line between production and consumption--what used to be consumption is becoming a much more active production model. Also seeing smaller-scale, more effective non-profits, like the Sunlight Foundation--government transparency by citizens. Not so much that we'll all stop working in traditional businesses. In the information economy, there will be a greater diversity of models. You'll see businesses shifting toward creating platforms for social production. Video--collaborative video editing. Sounds like a good future. Worries a lot of people--monopolies, silos of things they disagree with. Will be a lot of mixing of models we can't anticipate.
