February 2011

Boeing subsidiary Narus reports on its Web site that it “protects and manages” a number of worldwide networks, including that of Egypt Telecom. An IT World article from last year, entitled “Narus Develops a Scary Sleuth for Social Media,” reported on a Narus product called Hone:

Hone will sift through millions of profiles searching for people with similar attributes — blogger profiles that share the same e-mail address, for example. It can look for statistically likely matches, by studying things like the gender, nationality, age, location, home and work addresses of people. Another component can trace the location of someone using a mobile device such as a laptop or phone.
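
To make that attribute-matching idea a bit more concrete, here is a minimal, purely hypothetical sketch of how profiles might be linked by a shared e-mail address and scored for similarity on other attributes. The data, field names, and scoring are my own illustrative assumptions; this is not Narus’s code and says nothing about how Hone actually works — it only shows the general record-linkage pattern the article describes.

```python
# A minimal, hypothetical sketch of attribute-based record linkage.
# The profile fields and scoring below are illustrative assumptions only;
# this is not Narus's code or a description of how Hone actually works.

from collections import defaultdict
from itertools import combinations

# Toy data standing in for "millions of profiles."
profiles = [
    {"id": 1, "email": "blogger@example.org", "gender": "F", "city": "Cairo"},
    {"id": 2, "email": "blogger@example.org", "gender": "F", "city": "Cairo"},
    {"id": 3, "email": "someone@example.net", "gender": "M", "city": "Alexandria"},
]

def group_by_email(records):
    """Exact linkage: bucket profiles that share the same e-mail address."""
    buckets = defaultdict(list)
    for record in records:
        buckets[record["email"].lower()].append(record["id"])
    return {email: ids for email, ids in buckets.items() if len(ids) > 1}

def attribute_similarity(a, b):
    """Crude stand-in for a 'statistically likely' match: the fraction of
    secondary attributes (gender, city) on which two profiles agree."""
    fields = ("gender", "city")
    return sum(a.get(f) == b.get(f) for f in fields) / len(fields)

if __name__ == "__main__":
    print("Profiles sharing an e-mail address:", group_by_email(profiles))
    for a, b in combinations(profiles, 2):
        print(f"Profiles {a['id']} and {b['id']}: similarity {attribute_similarity(a, b):.2f}")
```

Real systems would presumably use far richer features and probabilistic matching, but even this toy version shows how much a few shared identifiers can reveal when applied at scale.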

Media advocate Tim Karr reports that “Narus provides Egypt Telecom with Deep Packet Inspection equipment (DPI), a content-filtering technology that allows network managers to inspect, track and target content from users of the Internet and mobile phones, as it passes through routers on the information superhighway.”

It’s very hard to know how Narus’ technology was used in Egypt before the country pulled the plug on its Internet connectivity, or how it’s being used now. Narus is declining comment.

So what’s to be done?

Narus and its parent, The Boeing Company, have no right to their business with the U.S. government. On our behalf, Congress is entitled to ask about Narus’/Boeing’s assistance to the Mubarak regime in Egypt. Requiring contractors to refrain from assisting authoritarian governments’ surveillance as a condition of doing business with the U.S. government seems like the most direct way to dissuade them from providing top-notch technology capabilities to regimes on the wrong side of history.

Of course, decades of U.S. entanglement in the Middle East have created the circumstance where an authoritarian government has been an official “friend.” Until a few weeks ago, U.S. unity with the Mubarak regime probably had our government indulging Egypt’s characterization of political opponents as “terrorists and criminals.” It shouldn’t be in retrospect that we learn how costly these entangling alliances really are.

Chris Preble made a similar point ably on the National Interest blog last week:

We should step back and consider that our close relationship with Mubarak over the years created a vicious cycle, one that inclined us to cling tighter and tighter to him as opposition to him grew. And as the relationship deepened, U.S. policy seems to have become nearly paralyzed by the fear that the building anger at Mubarak’s regime would inevitably be directed at us.

We can’t undo our past policies of cozying up to foreign autocrats (the problem extends well beyond Egypt) over the years. And we won’t make things right by simply shifting — or doubling or tripling — U.S. foreign aid to a new leader. We should instead be open to the idea that an arms-length relationship might be the best one of all.

For my contribution to the awesome Next Digital Decade book edited by Berin Szoka and Adam Marcus (of TechFreedom fame), I wrote about search engine “neutrality” and the implicit and explicit claims that search engines are “essential facilities.” (Check out the other essays on this topic by Frank Pasquale, Eric Goldman, and James Grimmelmann, linked here under Chapter 7.)

The scare quotes around neutrality are there because the term is at best a misnomer as applied to search engines and at worst a baseless excuse for more regulation of the Internet. (The quotes around essential facilities are there because it is a term of art, but it is also scary.) The essay is an effort to inject some basic economic and legal reasoning into the overly emotionalized (is that a word?) issue.

So, what is wrong with calls for search neutrality, especially those rooted in the notion of Internet search (or, more accurately, Google, the policy scolds’ bête noire of the day) as an “essential facility” necessitating government-mandated access? As others have noted, the basic concept of neutrality in search is, at root, farcical. The idea that a search engine, which offers its users edited access to the most relevant websites based on the search engine’s assessment of the user’s intent, should do so “neutrally” implies that the search engine’s efforts to ensure relevance should be cabined by an almost limitless range of ancillary concerns.

Nevertheless, proponents of this view have begun to adduce increasingly detail-laden and complex arguments in favor of their positions, and the European Commission has even opened a formal investigation into Google’s practices, based largely on claims that it has systematically denied competing services, especially vertical search engines, access to its top search results (in some cases paid results, in others organic results). To my knowledge, no one has yet claimed that Google should offer up links to competing general search engines as a remedy for its perceived market foreclosure, but Microsoft’s experience with the “Browser Choice Screen” it has now agreed to offer as a consequence of the European Commission’s successful competition case against the company is not encouraging.

These more superficially sophisticated claims are rooted in the notion of Internet search as an “essential facility” – a bottleneck limiting effective competition. These claims, as well as the more fundamental harm-to-competitor claims, are difficult to sustain on any economically reasonable grounds. Understanding why requires some basic understanding of the economics of essential facilities, of Internet search, and of the relevant product markets in which Internet search operates.

The essay goes into much more detail, of course, but the basic point is that Google’s search engine is not, in fact, “essential” in the economically relevant sense. Rather, Google’s competitors and other detractors have built precisely the most problematic sort of antitrust case, where success itself is penalized (in this case, Google is so good at what it does that it just isn’t fair to keep it all to itself!). Continue reading →

Via TechDirt, “The news media always need a bogeyman,” says Cracked.com in their well-placed attack on techno-panics, “5 Terrifying Online Trends (Invented By the News Media).” It’s a popular topic here, too.

I’m not one of those libertarians who incessantly rant about the supposed evils of National Public Radio (NPR) and the Public Broadcasting Service (PBS). In fact, I find quite a bit to like in the programming I consume on both services, NPR in particular. A few years back I realized that I was listening to about 45 minutes to an hour of programming on my local NPR affiliate (WAMU) each morning and afternoon, and so I decided to donate $10 per month. Doesn’t sound like much, but at $120 per year, that’s more than I spend on any other single news media product with the exception of The Wall Street Journal. So, when there’s value in a media product, I’ll pay for it, and I find great value in NPR’s “long-form” broadcast journalism, despite its occasional political slant on some issues.

In many ways, the Corporation for Public Broadcasting, which supports NPR and PBS, has the perfect business model for the age of information abundance. Philanthropic models — which rely on support from foundation benefactors, corporate underwriters, individual donors, and even government subsidies — can help diversify the funding base at a time when traditional media business models — advertising support, subscriptions, and direct sales — are being strained. This is why many private media operations are struggling today; they’re experiencing the ravages of gut-wrenching marketplace and technological changes and searching for new business models to sustain their operations. By contrast, CPB, NPR, and PBS are better positioned to weather this storm since they do not rely on those same commercial models.

Nonetheless, NPR, PBS, and the supporters of increased “public media” continue to claim that they are in peril and that increased support — especially public subsidy — is essential to their survival. For example, consider an editorial in today’s Washington Post making “The Argument for Funding Public Media,” which was penned by Laura R. Walker, the president and chief executive of New York Public Radio, and Jaclyn Sallee, the president and chief executive of Koahnic Broadcast Corp. in Anchorage. They argue: Continue reading →

I absolutely loved this quote about the dangers of regulatory capture from Holman Jenkins in today’s Wall Street Journal, in a story (“Let’s Restart the Green Revolution”) about how misguided agricultural and environmental policies are hurting consumers:

When some hear the word “regulation,” they imagine government rushing to the defense of consumers. In the real world, government serves up regulation to those who ask for it, which usually means organized interests seeking to block a competitive threat. This insight, by the way, originated with the left, with historians who went back and reconstructed how railroads in the U.S. concocted federal regulation to protect themselves from price competition. We should also notice that an astonishingly large part of the world has experienced an astonishing degree of stagnation for an astonishingly long time for exactly such reasons.

I’ve just added it to my growing compendium of notable quotations about regulatory capture. It’s essential that we not ignore how — despite the very best of intentions — regulation often has unintended and profoundly anti-consumer and anti-innovation consequences.

This is the second of two essays making “The Case for Internet Optimism.” The essay was included in the book The Next Digital Decade: Essays on the Future of the Internet (2011), edited by Berin Szoka and Adam Marcus of TechFreedom. In my previous essay, which I discussed here yesterday, I examined the first variant of Internet pessimism: “Net Skeptics,” who are pessimistic about the Internet improving the lot of mankind. In this second essay, I take on a very different breed of Net pessimists: “Net Lovers” who, though they embrace the Net and digital technologies, argue that they are “dying” due to a lack of sufficient care or collective oversight. In particular, they fear that the “open” Internet and “generative” digital systems are giving way to closed, proprietary systems, typically run by villainous corporations out to erect walled gardens and quash our digital liberties. Thus, they are pessimistic about the long-term survival of the Internet that we currently know and love.

Leading exponents of this theory include noted cyberlaw scholars Lawrence Lessig, Jonathan Zittrain, and Tim Wu. I argue that these scholars tend to significantly overstate the severity of this problem (the supposed decline of openness or generativity, that is) and seem to have very little faith in the ability of open systems to win out in a free market. Moreover, there’s nothing wrong with a hybrid world in which some “closed” devices and platforms remain (or even thrive) alongside “open” ones. Importantly, “openness” is a highly subjective term, and a constantly evolving one. And many “open” systems or devices are not as perfectly open as these advocates suggest.

Finally, I argue that it’s more likely that the “openness” these critics advocate will devolve into expanded government control of cyberspace and digital systems than that unregulated systems will become subject to “perfect control” by the private sector, as they fear. Indeed, the implicit message in the work of all these hyper-pessimistic critics is that markets must be steered in a more sensible direction by technocratic philosopher kings (although the details of their blueprint for digital salvation are often scarce). Thus, I conclude that the dour, depressing “the-Net-is-about-to-die” fear that seems to fuel this worldview is almost completely unfounded and should be rejected before serious damage is done to the evolutionary Internet through misguided government action.

I’ve embedded the entire essay below in a Scribd reader, but it can also be found on TechFreedom’s Next Digital Decade book website and on SSRN.

Continue reading →

On this week’s podcast, Joseph Reagle, a fellow at Harvard’s Berkman Center for Internet and Society, discusses his recent book, Good Faith Collaboration: The Culture of Wikipedia. Reagle talks about early attempts to create online encyclopedias, the happy accident that preceded Wikipedia, and challenges that the venture has overcome. He also discusses the average Wikipedian, minority and gender gaps in contributors, Wikipedia’s three norms that allow for its success, and co-founder Jimmy Wales’ role with the organization.


To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?