Articles by Cord Blomquist

Cord Blomquist spends most of his time pining for the singularity. To pass the time while waiting for this convergence, he serves as the New Media Manager at the Mercatus Center at George Mason University. Before landing this sweet gig, Cord hawked policy writing for the Competitive Enterprise Institute, toiled in the halls of Congress, and even worked in a crouton factory. In college, Cord spent his hours studying political philosophy and artificial intelligence, resulting in an unhealthy obsession with Lt. Commander Data. All of these activities will, of course, be viewed as laughable when he is ported from this crude meatspace into the nanobot cloud.


WARNING: The PFF Aspen Summit served to both educate and inspire me, so expect a flurry of blog posts over the next few days.

While reviewing my notes during my 24-hour trek back to DC (most of which involved sitting in the Denver airport), I realized that Eric Schmidt said a lot of interesting things, despite my initial impression that his speech was rather devoid of content. Unfortunately for Dr. Schmidt, most of my conclusions are rather critical.

During the middle of his remarks, Schmidt pointed out that our web-powered world changes conventional thinking about business models and industry integration. In the past, Schmidt observed, vertical integration–buying up assets like mines, railroads, and mills–cut costs by allowing one company to take a good from raw material to finished consumer good, without the transaction costs of swapping ownership throughout the process.


Why Wi-Fi?

by Cord Blomquist on August 21, 2007

David Robinson at The American said my last blog post on Wi-Fi was intriguing and asked me to write a piece for him. I can’t turn down a request for writing, so here it is. The piece is about the recent failure of the San Francisco Wi-Fi plan with Google and Earthlink. I also advance the argument that a public/private partnership to create Wi-Fi is generally a bad idea–the regulation that comes with Muni-Fi threatens to turn providers into utilities.

Over the last two years, San Francisco has been negotiating with Earthlink, which, in partnership with Google, has had plans to build a Wi-Fi “cloud” over the 47-square-mile, geek-infested city. The goal, set out in 2005, was to blanket the city with 1,500 wireless hot-spots that would be accessible free of charge, supported by ads from Google. Those who wanted faster, ad-free service could pay a subscription fee.

Now rumors are circulating that Earthlink is pulling out of the deal, while the San Fran government is moving forward with a non-binding referendum in September that will presumably decide the fate of this boondoggle.

But San Franciscans needn’t worry. According to a 2005 paper by Steven Titch of the Heartland Institute, the number of San Fran hot-spots that year was 396 (making it the #1 Wi-Fi city in the country). The latest jiwire.com numbers show that number is now over 800. It seems that hot-spots are following Moore’s Law, doubling every two years!
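A quick sanity check on that doubling quip, using only the two figures cited above (and reading “over 800” as roughly 800):

    396 hot-spots in 2005 × 2 = 792 ≈ 800 hot-spots in 2007

So on these numbers, the count did indeed roughly double over the two-year span.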

With over 800 public hot-spots (more than halfway to Earthlink’s goal) already covering this 47-square-mile area, why should the city give away special privileges to Earthlink for a city-wide build-out? The competitive marketplace is already taking care of spreading the wireless love around the city. Why not open the city to more competition by easing right-of-way access, eliminating or cutting the taxes associated with Wi-Fi installation, and opening up exclusive franchises to competition? Rather than looking backward and modeling themselves after past state-run follies, cities could take a leading role in increasing competition.

USA Today reports that most people are unaware of the dangers facing them at public Wi-Fi hotspots, which brought to mind an interesting question about municipal Wi-Fi: what incentive is there for municipalities to provide encryption and other security technologies?

The article mentions that AT&T and T-Mobile are the largest providers of free Wi-Fi hookups in the country and that, although the Wi-Fi itself is unsecured, both companies encourage the use of freely provided encryption software. The incentives for both companies seem fairly obvious: if people are going to use Wi-Fi, they need to feel safe, and encryption technology is one way to provide that. Customers stay safe and continue to use the service, making AT&T, T-Mobile, and other providers money.

Do municipal setups have the same incentives? Depending on the financial structure of such a system, I can see how there would be little incentive to provide security software or other safeguards to users. Yet these Muni-Fi services would still distort the market, making it less likely for companies–which might be affected by privacy concerns–to invest in those areas.

Question: Does Muni-Fi pose a security risk because it lacks the incentive to push security solutions while edging out private competitors who have that motivation?

I agree with Tim that open networks are great and likely preferable in most situations, but to say that open networks simply “tend to be better than closed networks” doesn’t make sense.

This is akin to saying that copper is more efficient than iron. More efficient at what? Copper is more efficient than iron in some applications, like conducting electricity, but it makes for much less efficient armor plating. Ends dictate the standard by which we judge efficiency; otherwise, efficiency is meaningless.

That said, not all networks are built for the same ends. While the Internet is an undisputed engine of growth and innovation, it’s not the only model that EVER makes sense. Closed or limited networks can also have value, because Metcalfe’s Law–which states that a network’s utility increases in proportion to the square of the number of its members–is a very strong factor in determining a network’s worth, but not the only one.
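To make the quadratic claim concrete, here is the back-of-the-envelope math behind Metcalfe’s Law (the value function V is my own illustrative shorthand, not part of the law’s formal statement):

    V(n) ∝ n², since n members can form n(n−1)/2 ≈ n²/2 distinct pairwise connections.
    Doubling membership thus roughly quadruples utility: V(2n)/V(n) = (2n)²/n² = 4.

That quadratic payoff is why Metcalfe’s Law weighs so heavily in a network’s worth, even though, as argued above, it is not the only factor.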


Openness. In our culture of feel-goodery and self-congratulation, openness is seen as a good thing–a trait that any liberal and modern person should hope to have. But is openness always the best policy?

Google sure thinks so. It’s advocating that the 700 MHz spectrum–soon to be freed up by the transition to digital TV–should be auctioned with openness in mind. Eric Schmidt, Google’s CEO, has asked FCC Chairman Martin to limit the auction to models that would include open applications, open devices, open services, and open networks.

Sounds great, doesn’t it? After all, other open things in the political world are good. Open government, open hearings–both good. But would we want open phone conversations or open email? Maybe open doors and open shades would be a good idea. What do you have to hide?

Living in a democracy, we’re used to transparency, but certainly we can recognize the value of limits and closed proceedings as well. What about limited and closed models for networks? Can these be of any benefit, or are they, as the technocrats claim, just stifling innovation?

Closed networks–or rather, networks that aren’t wide open–offer some significant advantages. Security, for one, is markedly enhanced by a closed or limited-access system. That’s why our national security systems, at least those outside the Pentagon’s email servers, are often totally severed from the wide-open Internet.

An open network, like the Internet itself, is prone to all manner of attacks. By contrast, I’ve never gotten a cell phone virus, something I owe to my cell carrier’s closed system. My phone also seldom crashes, unlike my PC. I’m sure I owe many of my PC woes to the OS, but the various apps I have running are likely not custom-made for my particular machine, unlike the apps found on many cell phones.

Let’s think different for a moment and consider Apple. The Mac has always been a fairly limited–if not closed–system, yet this walled garden isn’t seen as an evil. That’s likely because Macs work so well, but it’s crucial to recognize that much of this is owed to the Mac’s closed architecture, something that eliminates many of the variables that plague PCs.

Google may have a business model that makes sense under its proposed restrictions, but its push to force that model on others isn’t driven by some overarching philosophy of “openness.” Rather, Google wants to save money at auction by driving out many of the bidders. This is a shame. While an open wireless network is intriguing and could create a platform for unique innovations, limited networks still offer stability, compatibility, security, and privacy, and they should be allowed to compete.

Google’s Policy Blog today makes a succinct argument for why its purchase of DoubleClick should be approved. While I find its reasoning compelling and logical–in fact, I don’t think any justification should be necessary–I find it hard to be sympathetic to a plea for fairness when Google is asking DC to stack the deck in its favor on other issues.

Example: Google has issued an ultimatum to the FCC, asking it to offer up the 700 MHz spectrum–the radio waves that will be freed when TV broadcasting switches to digital in 2009–with conditions attached. These conditions would make all potential bidders conform to Google’s business model.

What other example in history do we have of a company actually demanding that strings be attached to an FCC auction such as this? If anyone can think of such an example, I encourage you to comment. As far as I know, this is totally unprecedented.

And why ask the FCC to place limits on something you plan to buy? That seems a little odd–unless you want to reduce the value of the spectrum to competitors that operate under different models.

What about those other models? More on that when I discuss the idea of “openness” in a post later today.

These types of restrictions are just political games, which Google doesn’t like when they prevent Larry and Sergey from making an acquisition or collecting different kinds of data. Yet the same political maneuvering is just fine when the men of Mountain View can use hapless regulators to make a mint at the public’s expense.

Hat Tip: John Battelle’s Searchblog

My letter to the Washington Post regarding Michael Gerson’s “Where the Avatars Roam,” which appeared in the Post last week:

Michael Gerson’s July 6 piece “Where the Avatars Roam” shows that his understanding of libertarianism isn’t nearly as deep as his understanding of online games.

Mr. Gerson describes Second Life as a “large-scale experiment in libertarianism,” citing the game’s lack of community structure and long-term consequences. He describes this “libertarian” world as one in which there is no human nature, only human choices.

This doesn’t describe a libertarian world, but one of fantasy. Libertarianism, as envisioned by the Founding Fathers or Friedrich Hayek, is predicated on an understanding of the world that’s very different from Second Life’s. Common sense agrees with this libertarian understanding: the world is one of consequences, community institutions are vital to human life, and human beings have an innate nature that we should harness, not deny.

True, libertarians believe in the idea of spontaneous order, but Mr. Gerson treats this idea unfairly.  Libertarianism holds that society is not the product of uncoordinated human choice, but of human choice coordinated by the institutions of liberty.  Rule of law, private property, and a robust civil society together create rules within which markets operate to ensure the greatest possible outcomes, both for individuals and for society as a whole.

Denying human nature and basic economics is the forte of the modern left, not of libertarians. Perhaps Second Life would be a good testing ground for the left’s pet theories–they may work better there. As for libertarians, we’ll stick to the real thing.