Articles by Jerry Brito

Jerry is a senior research fellow at the Mercatus Center at George Mason University, and director of its Technology Policy Program. He also serves as adjunct professor of law at GMU. His web site is jerrybrito.com.


Here are some thoughts that might become a paper. Feedback would be much appreciated.

http://www.youtube.com/watch?v=WyNzC9W2C8Q

Public choice theory can be summarized in four words: concentrated benefits, dispersed costs. Your share of the bill for the Bridge to Nowhere might be 5¢—much less than even the postage needed to write your member of Congress—but the developer and the local community stand to gain millions, which pays for lots of stamps. The few who will benefit from the transfer have an easy time organizing to lobby for it, while a group as diverse and dispersed as taxpayers faces what Mancur Olson called a collective action problem. That is, the costs of organizing large groups are greater than the possible gain, and then there’s always the free-rider problem. This is the status quo and the source of much pessimism.
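To make the arithmetic concrete, here’s a minimal sketch in Python; every number except the 5-cent share is a hypothetical assumption, chosen only to illustrate the asymmetry:

```python
# Illustrative numbers only: the 5-cent share is from the post above;
# the taxpayer and beneficiary counts are hypothetical assumptions.
taxpayers = 150_000_000        # rough count of U.S. taxpayers (assumption)
share_per_taxpayer = 0.05      # your 5-cent share of the bridge
postage = 0.42                 # a 2008 first-class stamp

project_cost = share_per_taxpayer * taxpayers    # $7.5 million transfer
beneficiaries = 50                               # hypothetical concentrated group
gain_each = project_cost / beneficiaries         # $150,000 apiece

# A taxpayer's stake is smaller than the stamp needed to complain:
print(share_per_taxpayer < postage)              # True
# Each beneficiary's stake buys a lot of stamps (and lobbying):
print(int(gain_each / postage))                  # ~357,000 stamps
```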

Here is, perhaps, cause for optimism: social media has pushed down, and continues to push down, the cost of organizing. If the cost can be pushed down far enough, it’s conceivable that the collective action problem could be solved. (That’s a thesis in case you hadn’t noticed.)

In his wonderful book, Here Comes Everybody, Clay Shirky tells the tale of two flights that were stranded at airports with the passengers subjected to terrible conditions. One incident happened in 1999, and an almost identical incident happened in 2007. The former became a press blip, while the latter led to congressional reform of passenger rights. The difference, Shirky points out, is that the second event happened after the technology was in place to make it trivial for passengers who had been in similar situations to find one another and organize into a cohesive group.

There seem to be two ways to get the attention of Congress: money or members. The AARP is the most effective lobby in town because it has the backing of 38 million members. Like the AAA, unions, and other large lobbies, the AARP solves the collective action problem by offering its members benefits—beyond representation in Washington—that can actually be captured by the individual. (What Olson called a “separate and selective incentive.”) Wikipedia and Linux and the rest offer selective incentives, but they also lower the costs of organization and participation dramatically. Would it be possible for an ad hoc Facebook group to rival a traditional lobby? I don’t know. Maybe we should try.

So I finally had a chance to read Beth Simone Noveck’s article on wiki-government about which Jim has previously posted. The idea is to take the tools of mass collaboration that have given us Wikipedia and Linux and apply them to the development of policy. Just as encyclopedias and operating systems were once the exclusive domain of experts, policy development today is often the exclusive domain of government experts. Noveck coins the term “civic software” to refer to collaboration tools aimed at policy.

While I’m a fan of the power of crowds (see my recent paper on “crowdsourcing government transparency”), I’d like to take issue with one minor point of her plan. She critiques our current system of experts, saying, “Sometimes these pre-selected scientists and outside experts are simply lobbyists passing by another name.” (I’d change “sometimes” to “often.”) The implication is that a mass collaborative process might help limit the influence of special interests in policy-making. How? The wiki-wonks, Noveck suggests, won’t be limited to appointed pros:

People have no option to self-select on the basis of enthusiasm, rather than being chosen on the basis of profession. Even when not unduly subject to political influence, the decision as to who participates is based on institutional status. Those who may have meaningful contributions to make–graduate students, independent scientists, avid hobbyists, retired executives, and other consultants in the “free agent nation”–fall outside the boundaries of professional institutions and status and will of necessity be excluded, regardless of their intellectual assets.

But isn’t that what lobbies are? The most enthusiastic on a given issue? Just as ornithologists and passionate bird watchers are the ones writing the Wikipedia entries about robins, it seems to me that special interests will be the most active in shaping any wiki-policy. As Hillary Clinton likes to say, lobbyists “represent real Americans.” I don’t think wiki-government can meaningfully diminish special interest influence.

That said, Noveck’s Peer-to-Patent pilot program with the USPTO is an excellent idea. I especially like how the community chooses what gets sent to the Patent Office:

The community not only submits information, but it also annotates and comments on the publications, explaining how the prior art is relevant to the claims of the patent application. The community rates the submitted prior art and decides whether or not it deserves to be shared with the USPTO. Only the 10 best submitted prior-art references, as judged on the basis of their relevance to the claims of the patent applications by the online review community, will be forwarded to the patent examiner.
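The selection step is essentially “rank by community relevance rating, forward the top ten.” Here’s a minimal sketch of that logic in Python; the function name and data shapes are mine, not anything from the actual pilot:

```python
from operator import itemgetter

def select_for_examiner(submissions, k=10):
    """Return the k highest-rated prior-art references to forward
    to the patent examiner. Each submission is a dict like
    {"reference": str, "relevance": float, "annotations": list}
    (a hypothetical shape, for illustration only)."""
    ranked = sorted(submissions, key=itemgetter("relevance"), reverse=True)
    return ranked[:k]

# Usage: the community has rated three references; only the best go forward.
prior_art = [
    {"reference": "US1234567",         "relevance": 4.6, "annotations": ["claim 1"]},
    {"reference": "IEEE paper (1998)", "relevance": 3.1, "annotations": []},
    {"reference": "US7654321",         "relevance": 4.9, "annotations": ["claims 2-3"]},
]
for art in select_for_examiner(prior_art, k=2):
    print(art["reference"])   # US7654321, then US1234567
```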

I’d love to see something like this for regulations. There’s no reason we must wait for a government pilot program to do it. Maybe we can set up a wiki where the community collaborates on a comment on a proposed agency rulemaking, with the finished product submitted to the docket. There’s no such thing as a neutral point of view when it comes to policy, so the wiki would have to have some first principles the community agrees to, or maybe a mechanism for developing several opposing comments. Thoughts?

I’d like to tap TLF’s incredibly smart readers for some help. Does anyone know what ConnectKentucky is or how it works? If you do, I’d much appreciate your posting a comment explaining it. Its website is typified by language like this passage from its homepage:

ConnectKentucky connects people to technology in world-altering ways: improving the lives of the formerly disconnected; renewing hope for previously withering rural communities; driving increases in the number of tech-intensive companies and jobs; and nurturing an environment for lifetime learning, improved healthcare, and superior quality of life. … ConnectKentucky develops and implements effective strategies for technology deployment, use, and literacy in Kentucky, creating both the forum and the incentive for interaction among a variety of people and entities that would not otherwise unite behind common goals and a shared vision. This level of teamwork is making Kentucky a better place for business and a better place to live.

Most press articles about the organization are no better at explaining exactly what it does, and its Wikipedia entry is so-so. The best explanation I’ve found is from an article in the Economist:

Internet service providers could not be sure that there were enough [potential customers] in the Kentucky countryside to justify new investment in cabling or wireless transmitters. But by the end of this year, Mr Mefford boasts, 98% of residents will have access to inexpensive broadband services. This is primarily because of ConnectKentucky’s effort to map broadband demand in communities that didn’t have access, he says, which indicated that enough people in Kentucky farm country would sign up if providers entered the market. At the same time, the organisation also talked up high-speed internet services to sceptical residents, creating demand where it was slack.

Ars Technica also had this useful description:

ConnectKentucky is a public/private partnership that has boosted broadband availability from 60 percent to more than 90 percent in just two and a half years and used mapping techniques to identify current gaps in service. Once those were discovered, the group helped to create a regulatory environment that encouraged private investment, then partnered with companies on a market-driven approach to rolling out new lines, even in rural areas. 80 percent of the funding came from state and federal government agencies, while 20 percent was put up by the companies involved. By the end of this year, 100 percent of Kentucky homes should be able to access broadband of at least 768Kbps.

I’m asking because the program has many times been hailed as a model for other states and for the nation. So here are my questions: What exactly does ConnectKentucky (and its parent Connected Nation) do? How serious is the lack of broadband mapping? Is there a market failure here (i.e., why aren’t private parties generating this sort of data)? What sort of changes did it secure “to create a regulatory environment that encouraged private investment”? Why is it up to a mostly government-funded organization to “talk[] up high-speed internet services to sceptical residents”? Who are the private partners in this public-private partnership?

UPDATE: Here is an article by Art Brodsky critical of ConnectKentucky’s origins and effectiveness, and here is CK’s response.

Lawrence Lessig has an op-ed in the New York Times today calling the orphan works bill now before Congress “unfair and unwise.” He agrees that the orphan works problem is real and merits an immediate response, but finds fault with the bill because it is unfair to copyright holders who have relied on existing law and “because for all this unfairness, it simply wouldn’t do much good.” Lessig writes: “The uncertain standard of the bill doesn’t offer any efficient opportunity for libraries or archives to make older works available, because the cost of a ‘diligent effort’ is not going to be cheap.” Instead Lessig suggests an alternative reform:

Congress could easily address the problem of orphan works in a manner that is efficient and not unfair to current or foreign copyright owners. Following the model of patent law, Congress should require a copyright owner to register a work after an initial and generous term of automatic and full protection. For 14 years, a copyright owner would need to do nothing to receive the full protection of copyright law. But after 14 years, to receive full protection, the owner would have to take the minimal step of registering the work with an approved, privately managed and competitive registry, and of paying the copyright office $1. This rule would not apply to foreign works, because it is unfair and illegal to burden foreign rights-holders with these formalities. It would not apply, immediately at least, to work created between 1978 and today. And it would apply to photographs or other difficult-to-register works only when the technology exists to develop reliable and simple registration databases that would make searching for the copyright owners of visual works an easy task.
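Restated as a decision rule, the proposal looks something like the sketch below. This is my own encoding of the op-ed’s description, and the field names are hypothetical:

```python
def retains_full_protection(work, current_year):
    """Rough decision rule for Lessig's registration proposal.
    `work` is a hypothetical record with the fields used below."""
    if work["is_foreign"]:
        # Foreign works are exempt from the formality entirely.
        return True
    if work["year_created"] >= 1978:
        # Works created from 1978 to today are exempt, at least initially.
        return True
    if work["is_visual"] and not work["registry_exists"]:
        # Photographs and other hard-to-register works are exempt until
        # reliable, simple registration databases exist.
        return True
    if current_year - work["year_created"] <= 14:
        # Everyone gets an initial 14 years of automatic full protection.
        return True
    # After 14 years, full protection requires registering with an
    # approved private registry and paying the Copyright Office $1.
    return work["registered"]
```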

I’ve addressed his concerns about fairness and critiqued his proposal before, but I’d like to restate the latter critique here.

An orphan work is a work whose owner cannot be identified or located; often one cannot even tell whether it is still under copyright. The uncertainty is crippling because anyone who uses the work runs the risk of being sued for stiff damages. Works that would otherwise spawn new creation (and thereby promote the progress of science) therefore go unused.

Let’s say I find a photograph in my school’s archive that I would like to reproduce in a book I’m writing. The photo has no marks on it and there’s no other information, a true orphan work. How would Lessig’s proposal apply?

Yesterday bills were introduced in the House (PDF) and the Senate (PDF) addressing the orphan works copyright issue about which I’ve written many times before. Alex Curtis has a great write-up of the bills over at the Public Knowledge blog.

An orphan work is a work still under copyright whose owner cannot be located, so a potential re-user cannot ask permission to use or license it. If you can’t find the owner, even after an exhaustive search, and you use the work anyway, you run the risk that the owner will later come forward, sue you, and claim statutory damages of up to $150,000 per infringing use.

Both bills are largely based on the Copyright Office’s recommendations and not the unworkable Lessig proposal that had been previously introduced as the Public Domain Enhancement Act by Rep. Zoe Lofgren. The bills limit the remedies available to a copyright owner if an infringing party can show that they diligently searched for the owner before they used the work. (What constitutes a diligent search is specifically defined, which should address the concerns about the Smith bill expressed by visual and stock artists.)

Rather than statutory damages, the owner would simply be owed reasonable compensation for the infringing use—that is, what the infringer would have paid for the use had the parties been able to negotiate. I think this is a fine solution because it gives all copyright holders an incentive to keep their registrations current and their works marked to the best of their abilities (i.e., what old-time formalities used to accomplish). I’m also happy to see that injunctive relief is limited.
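In other words, the remedies scheme reduces to a single branch on whether the search was diligent. Here’s a simplified sketch, my own paraphrase of the bills rather than anything resembling statutory text:

```python
STATUTORY_DAMAGES_MAX = 150_000   # per infringing use, under current law

def damages_exposure(diligent_search_documented, reasonable_license_fee):
    """Simplified exposure under the orphan works bills.
    `reasonable_license_fee` stands in for what the parties would have
    negotiated had the owner been findable (a hypothetical input)."""
    if diligent_search_documented:
        # The bills cap the remedy at reasonable compensation
        # (and limit injunctive relief as well).
        return reasonable_license_fee
    # Without a documented diligent search, ordinary remedies apply:
    # statutory damages of up to $150,000 per infringing use.
    return STATUTORY_DAMAGES_MAX

print(damages_exposure(True, 200))    # 200: a negotiated-style fee
print(damages_exposure(False, 200))   # 150000: full statutory exposure
```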

Like the Smith bill, both of these new bills direct the Copyright Office to complete a study and produce a report on copyright small claims. There are many instances of copyright infringement that are too small to be litigated in federal district court—like a website that uses a copyrighted photo of mine that it found on Flickr. Professional photographers and other visual artists face this all the time, and there should be a way to address their concerns. One idea is to create a copyright small claims court, something I’d love to research and contribute to a Copyright Office proceeding. So if Congress has been thinking about this for a few years, what’s stopping the Copyright Office from taking on the project sua sponte?

Anyhow, stay tuned as these bills wind their way through committee and the IP maximalists are engaged.

ALF 5 in 60 seconds


http://www.vimeo.com/930330

A bunch of drunks obsessing over Justine Bateman.

Chairman Martin and his FCC colleagues testified today before the House Energy and Commerce Committee’s Subcommittee on Telecommunications and the Internet on the just-completed 700 MHz spectrum auction. At the top of the agenda was the failed D Block auction. According to Martin, all options are on the table. According to the WSJ, however, some have definite ideas for the block:

Some Republican members on the committee said they believed the 10 megahertz of spectrum should be sold off to the commercial wireless industry, and part of the proceeds then given to public safety so they could solve their communications shortcomings on their own. Those who advocate this solution have argued that public safety entities already control more than enough spectrum allocated to them by Congress over the years, but that it is being used ineffectively.

Those “some Republicans” seem to include ranking member Joe Barton.

This is a bad idea. While I’m sympathetic to the argument that “public safety entities already control more than enough spectrum allocated to them by Congress over the years, but that it is being used ineffectively,” throwing more money at the problem isn’t going to fix it, either. Bringing commercial providers into the public safety sphere can help begin to break down the collective action problem that is the cause of the ineffective use of spectrum. If a commercial solution is successful, maybe then Congress can take a second look at all the spectrum public safety now holds and do something akin to the DTV transition: auction the spectrum while moving public safety to better, more efficient technologies.

cuban2_2.jpgThe always provocative Mark Cuban has an interesting post on his blog today. He writes:

There is a dirty little secret in the cable industry. Its being kept secret not by the cable distributors, but by the big cable networks. End this practice and the United States goes from being 3rd world by international broadband standards, to top of the charts and exemplary. … What is the dirty little secret ? That your cable company still delivers basic cable networks in analog. Why is this such an important issue ? Because each of those cable networks takes up 6mhz. That translates into about 38mbs per second. Thats 38mbs PER NETWORK. … If we want to truly change the course of broadband in this country, the solution is simple. Just as we had an analog shutdown date for over the air TV signals, we need the same resolution for analog delivered cable networks.

Obviously this would entail a government mandate to an industry, which we’re all biased against. If it really were so easy, I would expect to see the cable industry make the move on its own—if nothing else, to respond to FIOS. But all that aside, my question to the cable-savvy folks who I know read this blog is this: how true is Cuban’s claim? How much “spectrum in a tube” is really potentially available? How difficult would it be to make a digital transition in cable?
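As a starting point on the “spectrum in a tube” question, here’s the back-of-the-envelope arithmetic, taking Cuban’s roughly 38 Mbps per 6 MHz channel at face value (the figure is consistent with a 256-QAM digital cable channel); the size of the analog lineup is a pure guess:

```python
mbps_per_channel = 38    # Cuban's figure for one 6 MHz slot (~256-QAM)
analog_channels = 70     # hypothetical analog basic-cable lineup

reclaimed_mhz = 6 * analog_channels
reclaimed_mbps = mbps_per_channel * analog_channels

print(reclaimed_mhz)     # 420 MHz of "spectrum in a tube"
print(reclaimed_mbps)    # 2660 Mbps, roughly 2.7 Gbps of capacity
```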

The Administrative Law Review at American University will hold a pretty interesting symposium next Friday on media regulation and the legacy of Red Lion v. FCC. Don’t let their horrendous program design scare you (PDF); they have some top-notch speakers scheduled, including Cass Sunstein. Check out TLF’s Red Lion coverage over the years here.

This is just a test post to see if our new spiffy site has a problem with embedded videos. By the way, we have a new spiffy site. Thanks much to PJ Doland for his help getting it up. That’s what she said.

http://www.youtube.com/watch?v=am7iJFaCR7g