Boeing subsidiary Narus reports on its Web site that it “protects and manages” a number of worldwide networks, including that of Egypt Telecom. An IT World article from last year, “Narus Develops a Scary Sleuth for Social Media,” reported on a Narus product called Hone:

Hone will sift through millions of profiles searching for people with similar attributes — blogger profiles that share the same e-mail address, for example. It can look for statistically likely matches, by studying things like the gender, nationality, age, location, home and work addresses of people. Another component can trace the location of someone using a mobile device such as a laptop or phone.
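To make that description concrete, here is a toy sketch (in Python, with invented data) of the kind of attribute matching the article describes: an exact join on a shared identifier like an e-mail address, plus a crude similarity score over softer attributes such as gender, age, and location. It is purely illustrative; Narus’s actual Hone system is proprietary and its methods are not public.

```python
# Toy illustration of attribute-based profile matching (hypothetical data).
from collections import defaultdict

profiles = [
    {"id": 1, "email": "a@example.org", "gender": "f", "age": 34, "location": "Cairo"},
    {"id": 2, "email": "a@example.org", "gender": "f", "age": 35, "location": "Cairo"},
    {"id": 3, "email": "b@example.org", "gender": "m", "age": 28, "location": "Alexandria"},
]

# Exact match: group profiles that share the same e-mail address.
by_email = defaultdict(list)
for p in profiles:
    by_email[p["email"]].append(p["id"])
shared_email = {addr: ids for addr, ids in by_email.items() if len(ids) > 1}

# "Statistically likely" match: score agreement on weaker attributes.
def similarity(p, q):
    score = 0.0
    score += 1.0 if p["gender"] == q["gender"] else 0.0
    score += 1.0 if p["location"] == q["location"] else 0.0
    score += 1.0 if abs(p["age"] - q["age"]) <= 1 else 0.0
    return score / 3.0

print(shared_email)                          # {'a@example.org': [1, 2]}
print(similarity(profiles[0], profiles[1]))  # 1.0 -- a likely match
```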

Media advocate Tim Karr reports that “Narus provides Egypt Telecom with Deep Packet Inspection equipment (DPI), a content-filtering technology that allows network managers to inspect, track and target content from users of the Internet and mobile phones, as it passes through routers on the information superhighway.”
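For readers unfamiliar with the term, deep packet inspection means examining the payload of network traffic, not just its routing headers, and acting on the content. The sketch below is a purely hypothetical toy illustrating that idea; it is not Narus’s code or equipment, and real DPI appliances do this in hardware at line rate on raw traffic.

```python
# Toy sketch of content-based packet filtering (illustrative only).
BLOCKLIST = [b"blocked-term"]   # hypothetical content rules

def inspect(packet: dict) -> str:
    """Return 'drop' if the payload matches a rule, else 'forward'."""
    payload = packet.get("payload", b"")
    if any(term in payload.lower() for term in BLOCKLIST):
        return "drop"
    return "forward"

packets = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "payload": b"an ordinary message"},
    {"src": "10.0.0.7", "dst": "203.0.113.9", "payload": b"contains a blocked-term here"},
]
for pkt in packets:
    print(pkt["src"], "->", pkt["dst"], inspect(pkt))
```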

It’s very hard to know how Narus’ technology was used in Egypt before the country pulled the plug on its Internet connectivity, or how it’s being used now. Narus is declining comment.

So what’s to be done?

Narus and its parent, The Boeing Company, have no right to their business with the U.S. government. On our behalf, Congress is entitled to ask about Narus’/Boeing’s assistance to the Mubarak regime in Egypt. Requiring contractors to refrain from assisting authoritarian governments’ surveillance as a condition of doing business with the U.S. government seems like the most direct way to dissuade them from providing top-notch technology capabilities to regimes on the wrong side of history.

Of course, decades of U.S. entanglement in the Middle East have created the circumstance where an authoritarian government has been an official “friend.” Until a few weeks ago, U.S. unity with the Mubarak regime probably had our government indulging Egypt’s characterization of political opponents as “terrorists and criminals.” It shouldn’t be in retrospect that we learn how costly these entangling alliances really are.

Chris Preble made a similar point ably on the National Interest blog last week:

We should step back and consider that our close relationship with Mubarak over the years created a vicious cycle, one that inclined us to cling tighter and tighter to him as opposition to him grew. And as the relationship deepened, U.S. policy seems to have become nearly paralyzed by the fear that the building anger at Mubarak’s regime would inevitably be directed at us. We can’t undo our past policies of cozying up to foreign autocrats (the problem extends well beyond Egypt) over the years. And we won’t make things right by simply shifting — or doubling or tripling — U.S. foreign aid to a new leader. We should instead be open to the idea that an arms-length relationship might be the best one of all.

For my contribution to Berin Szoka and Adam Marcus’ (of TechFreedom fame) awesome Next Digital Decade book, I wrote about search engine “neutrality” and the implicit and explicit claims that search engines are “essential facilities.” (Check out the other essays on this topic by Frank Pasquale, Eric Goldman and James Grimmelmann, linked to here, under Chapter 7).

The scare quotes around neutrality are there because the term is at best a misnomer as applied to search engines and at worst a baseless excuse for more regulation of the Internet.  (The quotes around essential facilities are there because it is a term of art, but it is also scary).  The essay is an effort to inject some basic economic and legal reasoning into the overly-emotionalized (is that a word?) issue.

So, what is wrong with calls for search neutrality, especially those rooted in the notion of Internet search (or, more accurately, Google, the policy scolds’ bête noir of the day) as an “essential facility,” and necessitating government-mandated access? As others have noted, the basic concept of neutrality in search is, at root, farcical. The idea that a search engine, which offers its users edited access to the most relevant websites based on the search engine’s assessment of the user’s intent, should do so “neutrally” implies that the search engine’s efforts to ensure relevance should be cabined by an almost-limitless range of ancillary concerns. Nevertheless, proponents of this view have begun to adduce increasingly detail-laden and complex arguments in favor of their positions, and the European Commission has even opened a formal investigation into Google’s practices, based largely on various claims that it has systematically denied access to its top search results (in some cases paid results, in others organic results) by competing services, especially vertical search engines. To my knowledge, no one has yet claimed that Google should offer up links to competing general search engines as a remedy for its perceived market foreclosure, but Microsoft’s experience with the “Browser Choice Screen” it has now agreed to offer as a consequence of the European Commission’s successful competition case against the company is not encouraging. These more superficially sophisticated claims are rooted in the notion of Internet search as an “essential facility” – a bottleneck limiting effective competition. These claims, as well as the more fundamental harm-to-competitor claims, are difficult to sustain on any economically-reasonable grounds. To understand this requires some basic understanding of the economics of essential facilities, of Internet search, and of the relevant product markets in which Internet search operates.

The essay goes into much more detail, of course, but the basic point is that Google’s search engine is not, in fact, “essential” in the economically-relevant sense.  Rather, Google’s competitors and other detractors have basically built precisely the most problematic sort of antitrust case, where success itself is penalized (in this case, Google is so good at what it does it just isn’t fair to keep it all to itself!). Continue reading →

Via TechDirt, “The news media always need a bogeyman,” says Cracked.com in their well-placed attack on techno-panics, “5 Terrifying Online Trends (Invented By the News Media)” (http://www.cracked.com/article_18982_5-terrifying-online-trends-invented-by-news-media.html). It’s a popular topic here, too.

I’m not one of those libertarians who incessantly rant about the supposed evils of National Public Radio (NPR) and the Public Broadcasting Service (PBS).  In fact, I find quite a bit to like in the programming I consume on both services, NPR in particular. A few years back I realized that I was listening to about 45 minutes to an hour of programming on my local NPR affiliate (WAMU) each morning and afternoon, and so I decided to donate $10 per month. Doesn’t sound like much, but at $120 per year, that’s more than I spend on any other single news media product with the exception of The Wall Street Journal. So, when there’s value in a media product, I’ll pay for it, and I find great value in NPR’s “long-form” broadcast journalism, despite its occasional political slant on some issues.

In many ways, the Corporation for Public Broadcasting, which supports NPR and PBS, has the perfect business model for the age of information abundance. Philanthropic models — which rely on support from foundation benefactors, corporate underwriters, individual donors, and even government subsidies — can help diversify the funding base at a time when traditional media business models — advertising support, subscriptions, and direct sales — are being strained.  This is why many private media operations are struggling today; they’re experiencing the ravages of gut-wrenching marketplace / technological changes and searching for new business models to sustain their operations. By contrast, CPB, NPR, and PBS are better positioned to weather this storm since they do not rely on those same commercial models.

Nonetheless, NPR, PBS, and the supporters of increased “public media” continue to claim that they are in peril and that increased support — especially public subsidy — is essential to their survival.  For example, consider an editorial in today’s Washington Post making “The Argument for Funding Public Media,” which was penned by Laura R. Walker, the president and chief executive of New York Public Radio, and Jaclyn Sallee, the president and chief executive officer of Koahnic Broadcast Corp. in Anchorage. They argue: Continue reading →

I absolutely loved this quote about the dangers of regulatory capture from Holman Jenkins in today’s Wall Street Journal in a story (“Let’s Restart the Green Revolution”) about how misguided agricultural / environmental policies are hurting consumers:

When some hear the word “regulation,” they imagine government rushing to the defense of consumers. In the real world, government serves up regulation to those who ask for it, which usually means organized interests seeking to block a competitive threat. This insight, by the way, originated with the left, with historians who went back and reconstructed how railroads in the U.S. concocted federal regulation to protect themselves from price competition. We should also notice that an astonishingly large part of the world has experienced an astonishing degree of stagnation for an astonishingly long time for exactly such reasons.

I’ve just added it to my growing compendium of notable quotations about regulatory capture.  It’s essential that we not ignore how — despite the very best of intentions —  regulation often has unintended and profoundly anti-consumer / anti-innovation consequences.

This is the second of two essays making “The Case for Internet Optimism.” This essay was included in the book, The Next Digital Decade: Essays on the Future of the Internet (2011), which was edited by Berin Szoka and Adam Marcus of TechFreedom. In my previous essay, which I discussed here yesterday, I examined the first variant of Internet pessimism: “Net Skeptics,” who are pessimistic about the Internet improving the lot of mankind. In this second essay, I take on a very different breed of Net pessimists:  “Net Lovers” who, though they embrace the Net and digital technologies, argue that they are “dying” due to a lack of sufficient care or collective oversight.  In particular, they fear that the “open” Internet and “generative” digital systems are giving way to closed, proprietary systems, typically run by villainous corporations out to erect walled gardens and quash our digital liberties.  Thus, they are pessimistic about the long-term survival of the Internet that we currently know and love.

Leading exponents of this theory include noted cyberlaw scholars Lawrence Lessig, Jonathan Zittrain, and Tim Wu.  I argue that these scholars tend to significantly overstate the severity of this problem (the supposed decline of openness or generativity, that is) and seem to have very little faith in the ability of such systems to win out in a free market. Moreover, there’s nothing wrong with a hybrid world in which some “closed” devices and platforms remain (or even thrive) alongside “open” ones. Importantly, “openness” is a highly subjective term, and a constantly evolving one.  And many “open” systems or devices aren’t as perfectly open as these advocates suggest.

Finally, I argue that it’s more likely that the “openness” championed by these advocates will devolve into expanded government control of cyberspace and digital systems than that unregulated systems will become subject to “perfect control” by the private sector, as they fear.  Indeed, the implicit message in the work of all these hyper-pessimistic critics is that markets must be steered in a more sensible direction by technocratic philosopher kings (although the details of their blueprint for digital salvation are often scarce).   Thus, I conclude that the dour, depressing “the-Net-is-about-to-die” fear that seems to fuel this worldview is almost completely unfounded and should be rejected before serious damage is done to the evolutionary Internet through misguided government action.

I’ve embedded the entire essay down below in a Scribd reader, but it can also be found on TechFreedom’s Next Digital Decade book website and on SSRN.

Continue reading →

On this week’s podcast, Joseph Reagle, a fellow at Harvard’s Berkman Center for Internet and Society, discusses his recent book, Good Faith Collaboration: The Culture of Wikipedia. Reagle talks about early attempts to create online encyclopedias, the happy accident that preceded Wikipedia, and challenges that the venture has overcome. He also discusses the average Wikipedian, minority and gender gaps in contributors, Wikipedia’s three norms that allow for its success, and co-founder Jimmy Wales’ role with the organization.

To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

Here’s the first of two essays I’ve recently penned making “The Case for Internet Optimism.” This essay was included in the book, The Next Digital Decade: Essays on the Future of the Internet (2011), which was edited by Berin Szoka and Adam Marcus of TechFreedom.  In these essays, I identify two schools of Internet pessimism: (1) “Net Skeptics,” who are pessimistic about the Internet improving the lot of mankind; and (2) “Net Lovers,” who appreciate the benefits the Net brings society but who fear those benefits are disappearing, or that the Net or openness are dying.  (Regular readers of this blog will be familiar with these themes since I sketched them out in previous essays here such as, “Are You an Internet Optimist or Pessimist?” and “Two Schools of Internet Pessimism.”) The second essay is here.

This essay focuses on the first variant of Internet pessimism, which is rooted in general skepticism about the supposed benefits of cyberspace, digital technologies, and information abundance. The proponents of this pessimistic view often wax nostalgic about some supposed “good ol’ days” when life was much better (although they can’t seem to agree when those were). At a minimum, they want us to slow down and think twice about life in the Information Age and how it’s personally affecting each of us.  Occasionally, however, this pessimism borders on neo-Ludditism, with some proponents recommending steps to curtail what they feel is the destructive impact of the Net or digital technologies on culture or the economy.  I identify the leading exponents of this view of Internet pessimism and their major works. I trace their technological pessimism back to Plato but argue that their pessimism is largely unwarranted. Humans are more resilient than pessimists care to admit and we learn how to adapt to technological change and assimilate new tools into our lives over time. Moreover, were we really better off in the scarcity era when we were collectively suffering from information poverty?  Generally speaking, despite the challenges it presents society, information abundance is a better dilemma to be facing than information poverty.  Nonetheless, I argue, we should not underestimate or belittle the disruptive impacts associated with the Information Revolution.  But we need to find ways to better cope with turbulent change in a dynamist fashion instead of attempting to roll back the clock on progress or recapture “the good ol’ days,” which actually weren’t all that good.

Down below, I have embedded the entire chapter in a Scribd reader, but the essay can also be found on the TechFreedom website for the book as well as on SSRN.  I have also included two updated tables that appeared in my old “optimists vs. pessimists” essay.  The first lists some of the leading Internet optimists and pessimists and their books. The second table outlines some of the major lines of disagreement between these two camps, dividing those disagreements into (1) Cultural / Social beliefs vs. (2) Economic / Business beliefs.

Continue reading →

My essay last week for Slate.com (the title I proposed is above, but it must have been too “punny” for the editors) generated a lot of feedback, for which I’m always grateful, even when it’s hostile and ad hominem.  Which much of it was.

The piece argues generally that when it comes to the Internet, a disruptive technology if ever there was one, the best course of action for traditional, terrestrial governments intent on “saving” or otherwise regulating digital life is to try as much as possible to restrain themselves.  Or as they say to new interns in the operating room, “Don’t just do something.  Stand there.”

This is not an argument in favor of anarchy, or even more generally for social Darwinism.  I have something much more practical in mind.  Disruptive technologies, by definition, do not operate within the “normal science” of those areas of life they impact. Their problems can’t be solved by reference to existing systems and institutions. In the case of the Internet, that’s pretty much all aspects of life, including regulation. Continue reading →

You have to read all the way to the end to get exactly what the New York Times is getting at in its Sunday editorial, “Netizens Gain Some Privacy.”

Congress should require all advertising and tracking companies to offer consumers the choice of whether they want to be followed online to receive tailored ads, and make that option easily chosen on every browser.

That means Congress—or the federal agency it punts to—would tell authors of Internet browsing software how they are allowed to do their jobs. Companies producing browser software that didn’t conform to federal standards would be violating the law.

In addition, any Web site that tailored ads to its users’ interests, or the networks that now generally provide that service, would be subject to federal regulation and enforcement that would of necessity involve investigation of the data they collect and what they do with it.

Along with existing browser capabilities (Tools > Options > Privacy tab > cookie settings), forthcoming amendments to browsers will give users more control over the information they share with the sites they visit. That exercise of control is the ultimate do-not-track. It’s far preferable to the New York Times‘ idea, which has the Web user issuing a request not to be tracked and wondering whether government regulators can produce obedience.
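The difference is easy to see in code. Here is a minimal sketch using only Python’s standard library (the URL is a placeholder): a client that refuses to store cookies enforces its own privacy, while a client that merely sends a “DNT: 1” header is asking the site not to track it and must trust the site to comply.

```python
import urllib.request
from http.cookiejar import CookieJar, DefaultCookiePolicy

class BlockAllCookies(DefaultCookiePolicy):
    """Never store or return a cookie: privacy enforced by the user's own client."""
    def set_ok(self, cookie, request):
        return False   # refuse every Set-Cookie header in every response
    def return_ok(self, cookie, request):
        return False   # never attach a stored cookie to an outgoing request

# Client-side enforcement: tracking cookies simply never persist.
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(CookieJar(policy=BlockAllCookies()))
)

# Signal-based approach: send a Do Not Track header and hope the site honors it.
opener.addheaders += [("DNT", "1")]

response = opener.open("http://example.com/")   # placeholder URL
print(response.status)
```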

[I got enough push-back on a recent post arguing the existence of market nimbleness in the browser area that I’m unsure of the thesis I expressed there. The better explanation of what’s going on may be that regulatory pressure is moving browser authors and others to meet the peculiar demands of the pro-regulatory community. The reason they waited until now to act is that they do not perceive consumers’ interests to be served by protections against tailored advertising. The question of what meets consumers’ interests won’t be answered if regulation supplants markets, of course.]