January 2008

I’ve missed Dan. Four years ago, as Techliberation was just getting started, Dan Rather provided us (and most of the rest of the blogosphere) with a rich source of content and amusement as he tried to pawn off forged documents regarding George Bush’s National Guard service on viewers. In the end, Rather was left with omelette on his face, and soon thereafter was ushered out of his anchor seat at CBS News.

Well, he’s now back in the news, thanks to his $70 million lawsuit against CBS for dropping him. Denying that he did anything wrong, the unashamed Rather says that CBS took him off the air in an attempt to pacify the White House. Right.

CBS is arguing — among other things — that it was under no legal obligation to keep Rather on the air. CBS lawyer Jim Quinn yesterday compared the situation to the New York Jets benching their star quarterback, saying that while he might not like it, there’s nothing he can do about it.

According to the New York Post, Rather responded by pointing out that there’s a difference between him and the Jets’ QB. “I’m in the Television Hall of Fame,” he said.

And so humble too.

Stay tuned for more — the trial judge yesterday ruled against motions to throw the case out, meaning there’s much more fun to come.

Ron Paul

Since I’ve mentioned Ron Paul a few times in this space, I wanted to mention that after appalling examples of racist and anti-gay sentiments from his newsletters came to light, I would no longer characterize myself as a Ron Paul supporter. Before Tuesday, the only evidence of Paul’s racism I’d seen was one issue of the newsletter. I took Paul at his word that the comments in question were written without his knowledge or approval, and that the writer was let go when they were brought to his attention. But now it appears that at least a dozen issues of his newsletter over a period of some 5 years contained similarly appalling comments. I no longer find Paul’s rationalizations plausible. Whether Paul wrote the newsletters himself is irrelevant. If he is not a bigot himself, he had no qualms about associating with bigots over the course of many years. I have more thoughts on Paul’s newsletters here and here.

Hanging out with an old friend over the weekend, way outside the Beltway, I got to talking about copyright. He told me the RIAA was coming out with a theory that copying music from CDs one owns to an iPod was now a target for infringement claims. I found that hard to imagine; it didn’t sound like an issue the RIAA would find worthwhile to pursue, and it has in fact argued against liability in such cases on a few occasions (once on the theory that a license to do so was implied). And, indeed, the Washington Post has now pulled the story.

That such a rumor would spread points to deeper problems with press coverage of the music industry as a whole. Advocates have created an image of aggressive copyright holders proceeding without regard to their long-run interest in their own audience, and by and large journalists have bought into it. What gets neglected is that the music industry and consumers have a *real* problem to solve: the difficulty of creating new business models without enforceable boundaries to keep out free riders en masse (not every single one). Likewise ignored is that it is simply not plausible that an entire economic sector has mysteriously been populated by mean, short-sighted people. Alas, some of us on the free-market side have bought into this too, folks one would expect to think in terms of the big picture and the long run, not personalities. Ah well.

One of the things I disagreed with in Yoo’s paper is that he puts a lot of stock in the notion that Akamai is a violation of network neutrality. Akamai is a distributed caching network that speeds the delivery of popular content by keeping copies of it at various points around the ‘net so that there’s likely to be a cache near any given end user. Yoo says that the existence of Akamai “attests to the extent to which the Internet is already far from ‘neutral.'” I think this is either an uncharitable interpretation of the pro-regulation position or a misunderstanding of how Akamai works.

Network neutrality is about the routing of packets. A network is neutral if it faithfully transmits information from one end of the network to the other and doesn’t discriminate among packets based on their contents. Neutrality is, in other words, about the behavior of the routers that move packets around the network. It has nothing to do with the behavior of servers at the edges of the network because they don’t route anyone’s packets.

Now, Yoo thinks content delivery networks like Akamai violate network neutrality:

When a last-mile network receives a query for content stored on a content delivery network, instead of blindly directing that request to the designated URL, the content delivery network may redirect the request to a particular cache that is more closely located or less congested. In the process, it can minimize delay and congestion costs by taking into account the topological proximity of each server, the load on each server, and the relative congestion of different portions of the network. In this manner, content delivery networks can dynamically manage network traffic in a way that can minimize transmission costs, congestion costs, and latency…

The problem is that content delivery networks violate network neutrality. Not only does URL redirection violate the end-to-end argument by introducing intelligence into the core of the network; the fact that content delivery networks are commercial entities means that their benefits are available only to those entities willing to pay for their services.

I think Yoo is misreading how Akamai works because he takes the word “network” too literally. Content delivery networks are not “networks” in the strict sense of physical infrastructure for moving data around. The Akamai “network” is just a bunch of servers sprinkled around the Internet. They use vanilla Internet connections to communicate with each other and the rest of the Internet, and Internet routers route Akamai packets exactly the same way they route any other packets.

The “intelligence at the core of the network” Yoo discusses doesn’t actually exist in routers (which would violate network neutrality), but in Akamai’s magical DNS servers. DNS is the protocol that translates a domain name like techliberation.com to an IP address like 72.32.122.135. When you query an Akamai DNS server, it calculates which of its thousands of caching servers is likely to provide the best performance for your particular request (based on your location, the load on various servers, congestion, and other factors) and returns its IP address. Now, from the perspective of the routers that make up “the core of the network,” DNS is just another application, like the web or email. Nothing a DNS server does can violate network neutrality, just as nothing a web server does can violate network neutrality, because both operate entirely at the application layer.
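To make the application-layer point concrete, here is a toy sketch of the kind of decision a CDN-style DNS server makes when it answers a query. Everything in it is invented for illustration (the addresses, the latency and load figures, and the scoring rule); it is not Akamai’s actual algorithm, just a reminder that the “intelligence” lives in an ordinary DNS reply rather than in the routers:

```python
# Toy illustration of a CDN-style DNS decision. All addresses, numbers, and
# the scoring rule are made up; this is not Akamai's real algorithm.

CACHE_SERVERS = [
    # (IP address, estimated round-trip time to this client in ms, load 0..1)
    ("203.0.113.10", 12.0, 0.80),
    ("198.51.100.7", 35.0, 0.20),
    ("192.0.2.44", 60.0, 0.05),
]

def pick_cache(servers):
    """Return the IP of the cache expected to give the best performance."""
    def cost(entry):
        _ip, rtt_ms, load = entry
        # Hypothetical cost function: a nearby but overloaded server loses out.
        return rtt_ms * (1.0 + load)
    return min(servers, key=cost)[0]

def resolve(hostname):
    """Answer a DNS query for a CDN-hosted name with the chosen cache's IP."""
    # To the routers in the middle of the network, the reply built here is just
    # another packet; nothing in it changes how any packet gets forwarded.
    return {"name": hostname, "type": "A", "address": pick_cache(CACHE_SERVERS)}

print(resolve("images.example.com"))
```

The selection happens entirely in an application-level exchange between the client’s resolver and the CDN’s DNS server; the packets carrying the query and the reply are forwarded like any others.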

So in a strict technical sense, Akamai is entirely consistent with network neutrality. It’s an ordinary Internet application that works just fine on a vanilla Internet connection. Now, it is true that one of the ways Akamai enhances performance is by placing some of its caching servers inside the networks of broadband providers. This improves performance by moving the servers closer to the end user, and it saves broadband providers money by minimizing the amount of traffic that traverses their backbones. This might be a violation of some extremely broad version of network neutrality, and there’s certainly reason to worry that an overzealous future FCC might start trying to regulate the relationship between ISPs and Akamai. But Akamai is not, as Yoo would have it, evidence that the Internet is already non-neutral.

TLF readers may be interested in reading a piece I just wrote with John Berlau, a colleague of mine at CEI, about Hillary Clinton’s stance on video game regulation. Senator Clinton has taken a very aggressive stance against video game violence, suggesting the FTC should oversee how games are rated, opening the door to further interference with the ESRB system.

We quickly received feedback from one of the heavy hitters of the anti-gaming world: none other than Jack Thompson emailed John today. Thompson, a famous anti-gaming lawyer and activist, has supported a wide variety of legislative solutions to the supposed plague of video game violence. His email to John contained no text in the body, but the subject line read as follows:

You’re wrong. Video games inspire violence. It’s a public safety hazard and a legitimate governmental concern

He attached a PDF of a Stephen Moore column for the Wall Street Journal to back up this assertion. In the piece, Moore complains that his children have turned into zombies, claiming that video games are the “new crack cocaine.” Though I love Moore and his columns for the WSJ and agree with him more often than not, this is one of those instances of not.

Video games are addictive; I can say that from personal experience, though I’ve managed to wean myself off a nearly debilitating addiction to Company of Heroes and am now down to a reasonable 4 hours a week. But games aren’t the new crack; they’re just a new diversion that neither kids nor adults should invest too much time in. Kids don’t have the self-control to stay away from them on their own, and once parents have let them vegetate for 8 hours a day, it’s a tough job to take away their endorphin-producing joy-machines. But government won’t do that job any better than parents will.

Instead of pushing for government action, which would be a First Amendment violation in addition to being ineffective, Jack Thompson ought to be trying to educate parents about sensible limitations for little ones and pointing them in the direction of Adam Thierer’s Parental Controls and Online Child Protection: A Survey of Tools & Methods.

Even before Heritage did, the Department of Homeland Security emailed me an invite to the Heritage Foundation event, Making REAL ID Real: Implementing National Standards.

Headliner Stewart Baker from the Department of Homeland Security will be joined by pro-national ID lobbyist Janice Kephart (client: Digimarc) and a guy nobody’s ever heard of named Donald Rebovich.

Must miss! I do wonder what an event like this could be for, as it is assured to be devoid of content. (Rumor has it that the REAL ID Act regulations may come out this Friday.)

How big were tech issues in the furious election campaigning that just finished in New Hampshire? Not very, reports CNET’s Anne Broache. “Voters here are famously not described as tech-savvy,” she writes. “To be precise, they are famously not described as especially concerned with topics like Net neutrality and intellectual property rights that you, our dear readers, are.”

No surprise, but Broache, with help from Declan McCullagh, did some real footwork to back up that lack of interest, conducting a few man-in-the-street interviews with New Hampshirites.

“We weren’t disappointed,” she says. “Nor, we’re happy to report, did we get punched in the face for bothering those gritty, flinty, and hardy residents with questions about Net neutrality. What we did learn is that Granite State voters are not exactly preoccupied with political skirmishes over rewriting patent law, increasing H-1B visas, and, of course, the thoroughly pressing concern of broadband regulation.”

Reader Deane had some great questions that I thought would be worth addressing in a new post:

Your major contention seems to be that closed-source software would be less efficient than open source because the latter is more decentralized.

But aren’t you really talking about a management style here? I think it’s completely plausible for closed-source software to have a very decentralized development process. Isn’t that how Google seems to do stuff? I don’t know about Microsoft.

Does this have anything to do with the source being open? I don’t think so.

Any product development seems to need a degree of centralization, with teams, firms and so on, the degree of centralization depending on the relative costs and benefits of the management decision.

So I don’t see the point, really, of having a debate on open source vs. closed source software, or on what’s efficient at creativity and so on. As libertarians, I thought we’d trust the market to make those kinds of decisions, on a product-by-product basis.

First of all, let me make clear that I don’t see this as a debate about “open source vs. closed source software.” Both styles of software development have their place, and I certainly wouldn’t want to be misunderstood as being opposed to closed-source development. I just think that the advantages of open source software development processes tend to be underestimated, and that in particular Lanier’s criticisms were rather misguided.

I just finished a second read-through of Chris Yoo’s excellent paper, “Network Neutrality and the Economics of Congestion.” In this post I’m going to highlight some of the points I found most compelling; in a follow-up post I’ll offer a few criticisms of the parts I didn’t find persuasive.

One point Yoo makes very well is that people often overestimate the ability of large media companies to dominate the online discussion. He points out, for example, that fears that the merged AOL Time Warner would become an unstoppable online juggernaut turned out to be overblown: the combined firm had little ability to shape the browsing habits of AOL customers, and AOL continued to bleed subscribers.

Similarly, Yoo makes the important point that when evaluating the ability of a broadband provider to coerce a website operator, it is the broadband company’s national market share, not its local market share, that matters:

application and content providers care about the total number of users they can reach. So long as their total potential customer base is sufficiently large, it does not really matter whether they are able to reach users in any particular city. This point is well illustrated by a series of recent decisions regarding the market for cable television programming. As the FCC and the D.C. Circuit recognized, a television programmer’s viability does not depend on its ability to reach viewers in any particular localities, but rather on the total number of viewers it is able to reach nationwide. So long as a cable network can reach a sufficient number of viewers to ensure viability, the fact that a particular network owner may refuse carriage in any particular locality is of no consequence. The FCC has similarly rejected the notion that the local market power enjoyed by early cellular telephone providers posed any threat to the cellular telephone equipment market, since any one cellular provider represented a tiny fraction of the national equipment market. Simply put, it is national reach, not local reach, that matters. This in turn implies that the relevant geographic market is a national one, not a local one. What matters is not the percentage of broadband subscribers that any particular provider controls in any geographic area, but rather the percentage of a nationwide pool of subscribers that that provider controls.

Once the relevant market is properly defined in this manner, it becomes clear that the broadband market is too unconcentrated for vertical integration to pose a threat to competition. The standard measure of market concentration is the Herfindahl-Hirschman Index (HHI), which is calculated by summing the squares of the market shares of each individual firm. The guidelines employed by the Justice Department and the Federal Trade Commission establish 1800 as the HHI threshold for determining when vertical integration would be a cause for anticompetitive concern. The FCC has applied an HHI threshold of 2600 in its recent review of mergers in the wireless industry. The concentration levels for the broadband industry as of September 2005 yield an HHI of only 1110, well below the thresholds identified above. The imminent arrival of 3G, WiFi, WiMax, BPL, and other new broadband technologies promises to deconcentrate this market still further in the near future.

To put this in concrete terms, if Verizon wants to twist Google’s arm into paying fees for the privilege of reaching Verizon’s customers, Verizon’s relatively limited national market share (about 9 percent) doesn’t give it a whole lot of leverage. It is, of course, important to Google to be able to reach Verizon’s customers, but Google has enough opportunities in the other 91 percent of the broadband market that it wouldn’t be catastrophic if Verizon blocked its customers from accessing Google sites. Conversely, Google would have a strong incentive not to accede to Verizon’s demands, because if it did so it would immediately face demands from the other 91 percent of the broadband market for similar payoffs. Which means that Verizon, knowing that blocking Google would be a PR disaster for itself and that Google would be unlikely to capitulate quickly, isn’t likely to try such a stunt.

The analysis would be different if we had a single firm that controlled a significant chunk of the broadband market—say 50 percent. But there aren’t any firms like that. The largest appear to be Comcast and AT&T, with about 20 percent each. That’s a small enough market share that they’re unlikely to have too much leverage over web providers, which makes me think it unlikely that broadband providers would have the ability to unilaterally impose a “server pays” regime on them.
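Since the argument leans on the HHI arithmetic, here is a minimal sketch of the calculation Yoo describes. The market shares are purely illustrative placeholders loosely based on the figures mentioned in this post (roughly 20 percent each for Comcast and AT&T, 9 percent for Verizon, and the remainder split among many smaller providers), not actual 2005 data:

```python
# Herfindahl-Hirschman Index: the sum of squared market shares (in percent).
def hhi(shares_percent):
    return sum(s ** 2 for s in shares_percent)

# Purely illustrative national broadband shares summing to 100 percent:
# ~20% Comcast, ~20% AT&T, ~9% Verizon, the rest split among smaller firms.
shares = [20, 20, 9] + [3] * 10 + [1] * 21
print(hhi(shares))  # 992, well under the 1800 and 2600 thresholds cited above
```

Because the shares get squared, the index only climbs toward those thresholds when a few firms hold very large shares, which is exactly the situation the national broadband market isn’t in.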

Yesterday at the Consumer Electronics Show in Las Vegas, General Motors chief executive Rick Wagoner delivered an address on the future of automobiles and technology and hyped the concept of “autonomous driving.” “Autonomous driving means that someday you could do your e-mail, eat breakfast, do your makeup, and watch a video while commuting to work,” Wagoner said. “In other words, you could do all the things you do now while commuting to work but do them safely.”
Now don’t get me wrong, I’m no Luddite. Matter of fact, I’m obsessed with technology and A/V gadgets, and I have covered tech policy issues for a living at 3 different think tanks over the past 16 years. I love all things tech. But I love driving more. A lot more. I have been fanatical about sports cars ever since I was a kid. From my first car, a 1979 “Smokey & the Bandit” Pontiac Trans Am, to my ’86 Mustang GT, to my ’90 Nissan 300ZX Twin Turbo, to my BMWs (two M3s and an 850i), all the way to my current 2005 Lotus Elise, I have been completely obsessed with cars and the joys of motoring. And the idea that we’ll all one day soon be driving to work in the equivalent of personal subway cars makes me a little sad, because it means the joy of driving might be lost to coming generations.

I wonder if my son will grow up with the same passion for motoring that I have, and that my dad had before me. (I’m certainly going to have something to say about it!) And I wonder if, a generation from now, “driver’s education” classes will consist of little more than downloading a user name and password for your computer-car.

On the upside, I suppose I could see the advantage of making the driving experience fully automated for all those idiots on the road who really do engage in risky behaviors in their cars, like “e-mail, eat[ing] breakfast, do[ing] your makeup, and watch[ing] a video while commuting to work,” as Wagoner suggests. I hate those SOBs. They give me nightmares because, at a minimum, I fear what they might do to my car when they are not looking at the road. Worse yet, I think of the danger they pose to pedestrians (like my kids). So, perhaps a Jetsons-mobile for these morons will be an effective way to reduce accidents and traffic fatalities.
But as for myself, I will pass on “autonomous driving,” thank you very much. I want to be fully in control of my motoring experience forevermore. Especially behind the wheel of my beloved Lotus Elise!