The Information Economy Project at the George Mason University School of Law is hosting a conference tomorrow, Friday, April 19. The conference title is “From Monopoly to Competition or Competition to Monopoly? U.S. Broadband Markets in 2013.” There will be two morning panels featuring discussion of competition in the broadband marketplace and the social value of “ultra-fast” broadband speeds.

We have a great lineup, including keynote addresses from Commissioner Joshua Wright of the Federal Trade Commission and Dr. Robert Crandall of the Brookings Institution.

The panelists include:

Eli Noam, Columbia Business School

Marius Schwartz, Georgetown University, former FCC Chief Economist

Babette Boliek, Pepperdine University School of Law

Robert Kenny, Communications Chambers (U.K.)

Scott Wallsten, Technology Policy Institute

The panels will be moderated by Kenneth Heyer of the Federal Trade Commission and Gus Hurwitz of the University of Pennsylvania, respectively. A continental breakfast will be served at 8:00 am, and a buffet lunch will be provided. We expect to adjourn at 1:30 pm. You can find an agenda here and can RSVP here. Space is limited and we expect a full house, so those interested are encouraged to register as soon as possible.

Why did the government impose a completely different funding mechanism on the Internet than on the Interstate Highway System? There is no substantive distinction between the shared use of local infrastructure by commercial “edge” providers on the Internet and shared use of the local infrastructure by commercial “edge” providers (e.g., FedEx) on the highways.

In Part 1 of this post, I described the history of government intervention in the funding of the Internet, which has been used to exempt commercial users from paying for the use of local Internet infrastructure. The most recent intervention, known as “net neutrality,” was ostensibly intended to protect consumers, but in practice it requires that consumers bear all the costs of maintaining and upgrading local Internet infrastructure while content and application providers pay nothing. This consumer-funded commercial subsidy model is the opposite of the approach the government took when funding the Interstate Highway System: the federal government makes commercial users pay more for their use of the highways than it charges consumers. This fundamental difference in approach is why net neutrality advocates abandoned the “information superhighway” analogy promoted by the Clinton Administration during the 1990s.

The U.S. Patent and Trademark Office is starting to recognize that it has a software patent problem and is soliciting suggestions for how to improve software patent quality. A number of parties, such as Google and EFF, have filed comments.

I am on record against the idea of patenting software at all. I think it is too difficult for programmers, as they are writing code, to constantly check whether they are violating existing software patents, which are not, after all, easy to identify. Furthermore, any complex piece of software is likely to violate hundreds of patents owned by competitors, which makes license negotiation costly and complicated.

However, given that the abolition of software patents seems unlikely in the medium term, there are some good suggestions in the Google and EFF briefs. They both note that the software patents granted to date have been overbroad, equivalent to patenting headache medicine in general rather than patenting a particular molecule for use as a headache drug.

This argument highlights one significant problem with patent systems generally, that they depend on extremely high-quality review of patent applications to function effectively. If we’re going to have patents for software, or anything else, we need to take the review process seriously. Consequently, I would favor whatever increase in patent application fees is necessary to ensure that the quality of review is rock solid. Give USPTO the resources it needs to comply with existing patent law, which seems to preclude such overbroad patents. Simply applying patent law consistently would reduce some of the problems with software patents.

Higher fees would also function as a Pigovian tax on patenting, disincentivizing patent protection for minor innovations. This is desirable because the licensing cost of these minor innovations is likely to exceed the social benefits the patents generate, if any.

While it remains preferable to undertake major patent reform, many of the steps proposed by Google and EFF are good marginal policy improvements. I hope the USPTO considers these proposals carefully.

My new policy brief urges the Federal Communications Commission to get on with the business of allocating the necessary spectrum to meet the burgeoning demand for wireless services.

The paper was finished before Chairman Julius Genachowski announced his resignation last month. At the risk of sounding harsh, that might be addition by subtraction. One of the big disappointments of Genachowski’s tenure was the lack of significant movement to get spectrum freed up and auctioned. In fairness, there were the interests of a number of powerful constituencies to be balanced: the wireless companies, the broadcasters, and the federal government itself, which is sitting on chunks of prime spectrum and refuses to budge.

But that’s the job Congress specifically delegated to the FCC. We’d be closer to a resolution, and the public would have been better served, had the FCC put its energies into crafting a viable plan for spectrum trading and reassignment instead of hand-wringing over how to handicap bidders with neutrality conditions and giving regulatory favors to developers of unproven technologies such as Super WiFi. Instead of managing the spectrum process, the FCC got sidetracked trying to pick winners and losers.

A new chairman brings an opportunity for a new direction. Spectrum relief should go to the top of the agenda. And as I say in the policy brief, just do it.

Last summer at an AEI-sponsored event on cybersecurity, NSA head General Keith Alexander made the case for information sharing legislation aimed at improving cybersecurity. His response to a question from Ellen Nakashima of the Washington Post (starting at 54:25 in the video at the link) was a pretty good articulation of how malware is identified and blocked using algorithmic signatures. In his longish answer, he made the pitch for access to key malware information for the purpose of producing real-time defenses.

What the antivirus world does is it maps that out and creates what’s called a signature. So let’s call that signature A. …. If signature A were to hit or try to get into the power grid, we need to know that signature A was trying to get into the power grid and came from IP address x, going to IP address y.

We don’t need to know what was in that email. We just need to know that it contained signature A, came from there, went to there, at this time.

[I]f we know it at network speed we can respond to it. And those are the authorities and rules and stuff that we’re working our way through.

[T]hat information sharing portion of the legislation is what the Internet service providers and those companies would be authorized to share back and forth with us at network speed. And it only says: signature A, IP address, IP address. So, that is far different than that email that was on it coming.

Now it’s interesting to note, I think—you know, I’m not a lawyer but you could see this—it’s interesting to note that a bad guy sent that attack in there. Now the issue is what about all the good people that are sending their information in there, are you reading all those? And the answer is we don’t need to see any of those. Only the ones that had the malware on it. Everything else — and only the fact that that malware was there — so you didn’t have to see any of the original emails. And only the ones that had the malware on it did you need to know that something was going on.

It might be interesting to get information about who sent malware, but General Alexander said he wanted to know attack signatures, originating IP address, and destination. That’s it.
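The matching scheme General Alexander describes is easy to make concrete. Here is a minimal sketch, in Python, of signature-based detection that reports only the metadata he listed; the signature IDs, payloads, and IP addresses are hypothetical, invented for illustration.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical signature database: signature ID -> hash of a known-bad payload.
# (These IDs, payloads, and addresses are invented for illustration.)
KNOWN_SIGNATURES = {
    "signature-A": hashlib.sha256(b"example malicious payload").hexdigest(),
}

def scan(payload: bytes, src_ip: str, dst_ip: str):
    """Check a payload against known signatures.

    Returns only (signature ID, source IP, destination IP, timestamp) on a
    match -- the payload contents themselves are never reported.
    """
    digest = hashlib.sha256(payload).hexdigest()
    for sig_id, sig_hash in KNOWN_SIGNATURES.items():
        if digest == sig_hash:
            return (sig_id, src_ip, dst_ip,
                    datetime.now(timezone.utc).isoformat())
    return None  # clean traffic produces no report at all

# A matching payload yields only metadata; clean traffic yields nothing.
hit = scan(b"example malicious payload", "203.0.113.5", "198.51.100.9")
miss = scan(b"an ordinary, benign email", "203.0.113.5", "198.51.100.9")
```

A real intrusion-detection system would use richer signatures (byte patterns, regular expressions) rather than whole-payload hashes, but the privacy point is the same: the report contains no message content.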

Now take a look at what CISPA, the Cyber Intelligence Sharing and Protection Act (H.R. 624), allows companies to share with the government provided they can’t be proven to have acted in bad faith:

information directly pertaining to—

(i) a vulnerability of a system or network of a government or private entity or utility;

(ii) a threat to the integrity, confidentiality, or availability of a system or network of a government or private entity or utility or any information stored on, processed on, or transiting such a system or network;

(iii) efforts to deny access to or degrade, disrupt, or destroy a system or network of a government or private entity or utility; or

(iv) efforts to gain unauthorized access to a system or network of a government or private entity or utility, including to gain such unauthorized access for the purpose of exfiltrating information stored on, processed on, or transiting a system or network of a government or private entity or utility.

That’s an incredible variety of subjects. It can include vast swaths of data about Internet users, their communications, and the files they upload. In no sense is it limited to attack signatures and relevant IP addresses.

What is going on here? Why has General Alexander’s claim to need attack signatures and IP addresses resulted in legislation that authorizes wholesale information sharing and that immunizes companies that violate privacy in the process? One can only speculate. What we know is that CISPA is a vast overreach relative to the problem General Alexander articulated. The House is debating CISPA Wednesday and Thursday this week.

Many net neutrality advocates would prefer that the FCC return to the regulatory regime that existed during the dial-up era of the Internet. They have fond memories of the artificially low prices charged by the dial-up ISPs of that era, but have forgotten that those artificially low prices were funded by consumers through implied subsidies embedded in their monthly telephone bills.

Remember when the Internet was the “information superhighway”? As recently as 2009, the Federal Communications Commission (FCC) still referred to the broadband Internet as “the interstate highway of the 21st century.” Highways remain a close analogy to the Internet, yet by 2010, net neutrality advocates had replaced Internet highway analogies with analogies to waterworks and the electrical grid. They stopped analogizing the Internet to highways when they realized their approach to Internet regulation is inconsistent with government management of the National Highway System, which has always required commercial users of the highways to pay more for their use than ordinary consumers. In contrast, net neutrality is only the latest in a series of government interventions that have exempted commercial users from paying for the use of local Internet infrastructure.

Marc Hochstein, Executive Editor of American Banker, a leading media outlet covering the banking and financial services community, discusses bitcoin.

According to Hochstein, bitcoin has made its name as a digital currency, but the truly revolutionary aspect of the technology is its dual function as a payment system competing against companies like PayPal and Western Union. While bitcoin has been in the news lately for its soaring exchange rate, Hochstein says the actual price of bitcoin is really only relevant for speculators in the short term; in the long term, however, the anonymous, decentralized nature of bitcoin has far-reaching implications.

Hochstein goes on to talk about the new market in bitcoin futures and some of bitcoin’s weaknesses—including the volatility of the bitcoin market.


On Wednesday, April 10, a bill “to Affirm the Policy of the United States Regarding Internet Governance” was marked up in the U.S. House of Representatives. The bill is an attempt to put a formal policy statement into statute law. The effective part says simply:

It is the policy of the United States to promote a global Internet free from government control and to preserve and advance the successful multistakeholder model that governs the Internet.

Yet this attempt to formulate a clear principle and make it legally binding policy has become controversial. This has happened because the bill brings to a head the latent contradictions and elisions that characterize U.S. international Internet policy. In the process it has driven a wedge into what was once a unified front by U.S. Democrats and Republicans against incursions into Internet governance by intergovernmental organizations such as the ITU.

The problem, it seems, is that the Democratic side of the aisle can’t bring itself to say that it is against ‘government control’ per se. Indeed, the bill has forced people linked to the Obama administration to come out and openly admit that ‘government control’ of the Internet is OK when we exercise it; it’s just those other countries and international organizations that we need to worry about.

Following up on Eli’s earlier post (“Does CDT believe in Internet freedom?”), I thought I’d just point out that we’ve spent a great deal of time here through the years defending real Internet freedom, which is properly defined as “freedom from state action; not freedom for the State to reorder our affairs to supposedly make certain people or groups better off or to improve some amorphous ‘public interest.’” All too often these days, “Internet freedom,” like the term “freedom” more generally, is defined as a set of positive rights/entitlements complete with corresponding obligations on government to deliver the goods and tax/regulate comprehensively to accomplish it. Using “freedom” in that way represents a grotesque corruption of language and one that defenders of human liberty must resist with all our energy.

I’ll be writing more about this in upcoming columns, but here’s a short list of past posts on Internet freedom, properly defined:

Last year, in advance of the World Conference on International Telecommunication, Congress passed a concurrent resolution stating its sense that US officials should promote and articulate the clear and unequivocal “policy of the United States to promote a global Internet free from government control and preserve and advance the successful multistakeholder model that governs the Internet today.” This language sailed through the House on a bipartisan basis with broad support from basically everyone in US civil society.

Now that WCIT is over, and the World Telecommunication/ICT Policy Forum looms, Congress is considering a law that reads:

It is the policy of the United States to promote a global Internet free from government control and to preserve and advance the successful multistakeholder model that governs the Internet.

And suddenly it’s controversial. Democrats are concerned that language about freedom “from government control” would apply to—gasp—the US government.

As Rep. Walden says,

Last Congress, we “talked the talk” and passed a resolution defending a global Internet free from government control. This Congress we must “walk the walk” and make it official U.S. policy. If this is a principle we truly believe in, there is no downside to stating so plainly in U.S. law.

I could not agree more.

I am especially disappointed by our friends at CDT. They are coming out against the bill, with both blog post and letter barrels blazing, after having supported the exact same language last year. Apparently, in CDT’s world: US government regulation of the Internet good, foreign government regulation of the Internet bad.

This episode shows the prescience of my colleagues Jerry Brito and Adam Thierer. As they wrote last year when Congress was considering the joint resolution:

The most serious threat to Internet freedom is not the hypothetical specter of United Nations control, but the very real creeping cyber-statism at work in the legislatures of the United States and other nations.

CDT gets this exactly backwards. Here’s hoping they change their minds yet again.