Articles by Jerry Brito

Jerry is a senior research fellow at the Mercatus Center at George Mason University, and director of its Technology Policy Program. He also serves as adjunct professor of law at GMU. His web site is jerrybrito.com.


Regulations.gov, the federal government’s centralized regulatory docketing system that I look at in my new transparency paper, recently won an award from Government Computer News for “combining vision and IT innovations with an attention to detail and a willingness to collaborate.” The result of that award-winning combination, however, is not impressing everyone. A few days later the Congressional Research Service issued a report cataloging the site’s shortcomings.1 (Another great dissection of Regulations.gov came from BNA, which reported that “Cornell students studying human-computer interaction, when asked to evaluate the E-Rulemaking Web site’s public interface in early 2006, rated it ‘absolutely horrific[.]’”)

What’s striking to me is that a product many believe to be unsatisfactory is hailed as a success. Despite the hard work that many civil servants no doubt expended trying to make Regulations.gov a useful site, one has to admit it is confusing and difficult to use. OMB often cites increased traffic in its reports to Congress (PDF) as a measure of success, and increased web traffic was also mentioned in the GCN story about the award.

Looking at traffic, however, is tallying output, not outcomes; measuring activity, not results. One could conceivably build a website so unnavigable that “web hits” quadruple because users have such a hard time finding what they need, or because they have to click through many links before getting to what they want. A total traffic number is also difficult to judge. Are 150 million “hits” a good thing? Relative to what? Who knows.

Instead, what I’d like to know is whether Regulations.gov is making it easier for citizens to find and comment on regulatory proceedings. I see from the site’s “What’s New” section (I’d link to it but I can’t because the site uses 1990s-style frames technology2) that they conduct a regular “customer satisfaction survey.” I’d like to see those results published on the web. That sounds to me like a much better measurement of the site’s effectiveness.


Brad Stone of the New York Times has a good post on the Bits Blog regarding the Comcast kerfuffle (Jim, why are we calling it that again?). The gist:

It seems unlikely that Comcast has a secret agenda to shut down file-sharing applications and combat piracy on its network. But the company is clearly trying to have it both ways. It claims it is a neutral Internet service provider that treats all packets equally, not blocking or “shaping” its Internet traffic. Meanwhile it also positions itself as the champion of average Internet users whose speeds are being slowed by file-sharing. The problem Comcast may now be facing is that in the absence of a plain explanation about what the company does to disadvantage certain applications in the name of managing traffic on its network, anecdotal reports and conspiracy theories fill the vacuum.

I have no doubt that Comcast’s practices stem from trying to provide a good-quality service for the majority of its customers. The problem its actions pose for those of us who advocate against unnecessary regulation, however, is that the company is not being completely clear about what it’s doing (although it’s trying).

For example, if the problem is the one percent of users who tend to be bandwidth hogs, why not address those users instead of a protocol? AOL dial-up and T-Mobile wireless are able to meter customer use above a certain allotment without any negative privacy implications. It seems that Comcast does in fact target bandwidth hogs, although it doesn’t publish what the limit is. This sort of unknown stirs up the conspiracy theories Stone mentions, and that makes explaining to folks that there’s nothing nefarious here pretty tough.
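To make the distinction concrete, here is a minimal sketch, in Python, of what metering users rather than a protocol might look like. The cap and subscriber IDs are invented for illustration, since Comcast doesn’t publish its actual limit:

```python
from collections import defaultdict

# Hypothetical monthly allotment; Comcast does not publish its real limit.
MONTHLY_CAP_GB = 250

usage_gb = defaultdict(float)  # gigabytes transferred, per subscriber

def record_transfer(subscriber_id, gigabytes):
    """Accumulate a subscriber's usage regardless of application or protocol."""
    usage_gb[subscriber_id] += gigabytes

def over_allotment():
    """List subscribers above the cap: candidates for notice or throttling."""
    return [sub for sub, gb in usage_gb.items() if gb > MONTHLY_CAP_GB]

record_transfer("subscriber-42", 180.0)
record_transfer("subscriber-42", 95.5)
print(over_allotment())  # ['subscriber-42']
```

The point is that nothing in this approach needs to know whether the bytes came from BitTorrent or anything else.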

If you’d like to get a flavor for the sort of impact that one (tenacious) citizen can have on making government data more transparent, check out this Google Tech Talk by one of my personal heroes, Carl Malamud. (I write about his exploits in my new paper on online transparency.) He talks about cajoling the ITU to put standards online, forcing the SEC to put its public information online, and his new project to live-stream and archive video of all congressional and agency hearings in Washington. He’s a real inspiration.

I’ve been laboring for a few months on a paper about government transparency on the internet, and I’m happy to say that it’s now available as a working paper. In it I show that a lot of government information that is supposed to be publicly available is only nominally available because it’s not online. When data does make it online, it’s often useless; it’s as if the .gov domain has a prohibition on XML and reliable searches.

First I look at independent third parties (such as GovTrack.us) that are doing yeoman’s work by picking up the slack where government fails and making data available online in flexible formats. Then I look at yet other third parties who are taking the liberated data and using them in mashups (such as MAPLight.org) and crowdsourcing (such as our own Jim Harper’s WashingtonWatch.com). Mashups of government data help highlight otherwise hidden connections and crowdsourcing makes light work of sifting through mountains of data. If I may corrupt Eric Raymond’s Linus’s Law for a moment, “Given enough eyeballs, all corruption is shallow.” In the coming days I plan to write a bunch more on how online tools can shed light on government, including a series dissecting the FCC’s website–not for the squeamish.
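To illustrate why formats matter, here is a toy example of how machine-readable data makes mashup-style filtering trivial. The XML structure and element names below are invented for illustration; they are not GovTrack’s actual schema:

```python
import xml.etree.ElementTree as ET

# A hypothetical machine-readable feed of the kind third parties publish;
# the element names are invented and are not GovTrack's actual schema.
FEED = """
<bills>
  <bill id="hr-1234" status="introduced">
    <title>An Act Concerning Widgets</title>
    <sponsor state="CA">Jane Doe</sponsor>
  </bill>
  <bill id="s-567" status="passed">
    <title>An Act Concerning Gadgets</title>
    <sponsor state="TX">John Roe</sponsor>
  </bill>
</bills>
"""

root = ET.fromstring(FEED)
# With structured data, a mashup is a few lines of filtering and joining.
passed = [(b.get("id"), b.findtext("title"))
          for b in root.iter("bill") if b.get("status") == "passed"]
print(passed)  # [('s-567', 'An Act Concerning Gadgets')]
```

Structured, liberated data is what makes this kind of filtering, and hence mashups and crowdsourcing, cheap.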

I believe opening up government to online scrutiny is immensely important. If we’re going to hold government accountable for its actions, we need to know what those actions are. The Sunlight Foundation has been doing fantastic work on this front and I would encourage you to visit them and especially their Open House Project blog. I would also encourage you to send me any comments you might have on my paper as I’m still perfecting it before I submit it to journals.

The AP reports today the results of an investigation it conducted on Comcast’s “traffic shaping” practices as they relate to BitTorrent. The bottom line, if the AP is correct, is that Comcast interferes with packets coming from both ends of a BitTorrent communication. Comcast allegedly inserts messages pretending to be one or the other end requesting that the transmission be reset. Susan Crawford has a technical explanation on her blog.
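For the curious, here is a rough sketch of the general technique being described: a device in the middle forges TCP reset (RST) packets so that each end believes the other hung up. The addresses, ports, and sequence numbers below are invented, and this is an illustration of the concept using the third-party scapy library, not Comcast’s actual system:

```python
# Sketch of a forged TCP reset. All addresses, ports, and sequence numbers
# here are hypothetical; sending spoofed packets requires root privileges.
from scapy.all import IP, TCP, send

def forge_reset(src_ip, dst_ip, sport, dport, seq):
    """Send a spoofed RST to dst_ip as if it came from src_ip."""
    rst = IP(src=src_ip, dst=dst_ip) / TCP(sport=sport, dport=dport,
                                           flags="R", seq=seq)
    send(rst, verbose=False)

# To tear down a session, the device in the middle resets both directions.
forge_reset("203.0.113.10", "198.51.100.20", 6881, 51413, seq=1000)
forge_reset("198.51.100.20", "203.0.113.10", 51413, 6881, seq=2000)
```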

If this is a consistent policy, it is much worse than meaningless one-off snafus such as Madison River, Pearl Jam, or NARAL. While this is technically legal, and should always be, it’s hard to defend. No doubt Comcast and every other access provider should have the ability to manage their networks to ensure that a minority of users doesn’t slow down or increase costs for the majority. However, they should be transparent about what they do.

As the AP reports it (and I’m really looking forward to clarification), “Comcast’s technology kicks in, though not consistently, when one BitTorrent user attempts to share a complete file with another user.” If that means any BitTorrent user, even if they’re not a heavy user, then the policy seems over-broad to me. In its acceptable use policy,1 Comcast reserves the right to take any measures it deems necessary to deal with subscribers who use too much bandwidth (although how much is too much is not clearly defined). But if the AP is right, this is targeting a specific application, not specific users.

What this all points out to me, however, is that we don’t need regulation prohibiting these kinds of network management practices. The problem is not the practice but the lack of disclosure, and, as Google’s Andrew McLaughlin has said, it’s more of an FTC issue than an FCC one. The other issue this brings up is Adam’s favorite: why not just use Ramsey-style two-part-tariff metering instead of interfering with legitimate applications?
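For the unfamiliar, a two-part tariff is simply a flat access fee plus a marginal price for usage beyond some allotment. A toy calculation (all numbers invented) shows how it shifts costs to heavy users without touching any particular application:

```python
# A toy two-part tariff: a flat access fee plus a per-gigabyte price for
# usage beyond an included allotment. All numbers are invented.
ACCESS_FEE = 40.00    # flat monthly fee
INCLUDED_GB = 100     # usage covered by the flat fee
PRICE_PER_GB = 0.25   # marginal price per GB beyond the allotment

def monthly_bill(usage_gb):
    overage = max(0.0, usage_gb - INCLUDED_GB)
    return ACCESS_FEE + PRICE_PER_GB * overage

print(monthly_bill(80))   # 40.0: a light user pays only the access fee
print(monthly_bill(400))  # 115.0: a heavy user pays for the load imposed
```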


  1. See the relevant portions of the acceptable use policy here.

The WSJ reports that the French government has “rejected the sole bid it had received for the so-called third-generation, or 3G license, from French Internet start-up Iliad SA, on the grounds that it didn’t meet required financial criteria.” It also says that the “failed auction for a fourth mobile-operator license could forestall new competition and keep prices at their lofty levels for consumers[.]”

It seems the French government is going to try to remove the technical roadblocks stopping the deal, and that desire for more competition is certainly gratifying. But what I’m more curious about is why there aren’t more bidders. After all, the WSJ also says, “France is one of the more desirable markets in Europe for operators. Prices have remained high and competition — limited to the three operators — isn’t as brutal as elsewhere. Italy, for example, has four mobile operators and is set to roll out more.”

It wouldn’t have anything to do with forced business models, would it?

In an interesting post today, Glenn Fleishman explores what AT&T’s purchase of 700 MHz spectrum from Aloha Partners means for Verizon. While my conspiracy-theory radar tingles a bit, I had the same thought earlier today. No point in paraphrasing; enter the blockquote.

AT&T spends $2.5b for 12 MHz across 200m people in the 700 MHz band: Let’s talk two-steps-ahead. In the terms for the C Block licenses that Google wanted very open and Verizon and AT&T wanted to have cell-spectrum-like restrictions, AT&T did a volte-face and said it would agree to most of the openness that Google wanted. Huh, I said, I wonder what made them do that? Well, it’s gamesmanship. AT&T was obviously already in a position to acquire Aloha Partners’s licenses. This means that AT&T is reverse-encumbering the other band. While the C Block involves more bandwidth and greater coverage, Verizon is now in a worse position because of the lack of device and application lock-in if they choose to bid in 700 MHz as AT&T will already have holdings. AT&T can have the flexibility to deploy different services in the different 700 MHz blocks. I think.

AT&T can now focus on bidding on the A and B blocks, which can complement its Aloha acquisition and which don’t come with open-access restrictions. So did AT&T pull off a Machiavellian ploy to saddle Verizon with an open-access mandate?

I love my iPhone. Despite what others might say, it is the most innovative mobile phone in a decade. I also think innovators should be rewarded, which is why I’m fine with the iPhone being locked to AT&T’s network. As a result, Apple gets a cut of my (and every other iPhone owner’s) wireless bill.

France might be left behind when it comes to this innovation, however. That country has laws similar to the wireless Carterfone rules Tim Wu, Skype, and others have advocated for the U.S. Locked phones in France must be unlocked by the carrier upon user request, and wireless carriers must also sell unlocked versions of their mobile phones. As a result, Apple is considering keeping the iPhone off French shelves indefinitely.

To me it’s clear that forced access laws limit innovation. I think folks who propose such rules want to have their cake and eat it, too. That is, they want the innovation that comes from entrepreneurs acting in a free market (and often fueled by exclusive deals such as the one between Apple and AT&T), and they also want the forced openness of networks. They think that the latter will have no impact on the former; that innovators will innovate regardless of the incentives. The iPhone snag in France, however, shows that incentives do matter.

Buying or pacifying?


In a blog post entitled “Buying regulation,” Susan Crawford wonders about the legality of the FCC’s reserve price scheme for the 700 MHz rules. (That is, as long as the $4.6 billion reserve price is met for the much-coveted C Block, open access rules will apply. If the reserve price isn’t met, the rules go away.) Crawford asks,

Think about it. How can the FCC condition regulations … on the payment of money? And then have the rules dissolve if it doesn’t get the money? This is such a pure quid pro quo – it’s government for sale. Completely screwy. But how do you say “completely screwy” in legalese?

Well, it is certainly a creative gambit by Kevin Martin to make Google put its money where its mouth is, and I don’t have an opinion about whether it’s technically legal. That said, I’m not sure it’s exactly a “quid pro quo.” It’s not as if the highest bidder gets its preferred rules applied to the spectrum block. One can imagine AT&T, for example, winning the auction at a price above $4.6 billion and therefore being subject to rules it dislikes. What I think the scheme is meant to do is pacify Congress by addressing the concern that, given the restrictive rules, the spectrum block might fetch much less than the many billions Congress is anticipating (and probably has already spent).