December 2008

In several of our previous podcasts (see episodes 34, 35, and 37), we’ve discussed what we’ve called the “Comcast Kerfuffle,” which was the controversy surrounding the steps Comcast took to manage BitTorrent traffic on its networks. Critics called it a violation of Net neutrality principles while Comcast and others called it sensible network management.

This week we saw a new kerfuffle of sorts develop over the revelation in a Monday front-page Wall Street Journal story that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content. What exactly is it that Google is proposing, and does it mean – as the Wall Street Journal and some others have suggested – that Google is somehow going back on its support for Net neutrality principles and regulation? More importantly, what does it all mean for the future of the Internet, network management, and consumers? That’s what we discussed on the TLF’s latest “Tech Policy Weekly” podcast.

Today’s 30-minute discussion featured two of our regular contributors at the TLF, who both wrote about this issue multiple times this week. Cord Blomquist of the Competitive Enterprise Institute wrote about the issue here and here, and Bret Swanson of the Progress & Freedom Foundation wrote about it here and here. To help us wade through some of the more technical networking issues in play, we were also joined on the podcast by Richard Bennett, a computer scientist and network engineering guru who blogs at Broadband Politics and Circle ID and pens occasional columns for The Register. Also appearing on the show was Adam Marcus, Research Fellow & Senior Technologist at PFF, who wrote a “nuts and bolts” essay full of excellent technical background on edge caching and net neutrality.

You can download the MP3 file here, or use the online player below to start listening to the show right now.


The introduction below was originally written by Adam Thierer, but now that I (Adam Marcus) am a full-fledged TLF member, I have taken authorship.

My PFF colleague Bret Swanson had a nice post here yesterday talking about the evolution of the debate over edge caching and network management (“Bandwidth, Storewidth, and Net Neutrality“), but I also wanted to draw your attention to a related essay by another PFF colleague of mine. Adam Marcus, who serves as a Research Fellow and Senior Technologist at PFF, has started a wonderful series of “Nuts & Bolts” essays meant to “provide a solid technical foundation for the policy debates that new technologies often trigger.” His latest essay is on network neutrality and edge caching, which has been the topic of heated discussion since the Wall Street Journal’s front-page story on Monday that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content.

Anyway, Adam Marcus gave me permission to reprint the article in its entirety down below. I hope you find this background information useful.

Nuts and Bolts: Network neutrality and edge caching

by Adam Marcus, Progress & Freedom Foundation

December 17, 2008

This is the second in a series of articles about Internet technologies. The first article was about web cookies. This article explains the network neutrality debate. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

To understand the network neutrality debate, you must first understand bandwidth and latency. There are lots of analogies equating the Internet to roadways because the analogies are quite instructive. For example, if one or two people need to travel across town, a fast sports car is probably the fastest method. But if 50 people need to travel across town, it may require 25 trips in a single sports car. So a bus, which can transport all 50 people in a single trip, may be “faster” overall. The sports car is faster, but the bus has more capacity. Bandwidth is a measure of capacity: how much data can be transmitted in a fixed period of time. It is usually measured in megabits per second (Mbps). Latency is a measure of speed: the time it takes a single packet of data to travel between two points. It is usually measured in milliseconds. The “speeds” that ISPs advertise have nothing to do with latency; they’re actually referring to bandwidth. ISPs don’t advertise latency because it’s different for each site you’re trying to reach.
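The sports-car-versus-bus distinction can be sketched with a toy model (the numbers below are purely illustrative, not measurements of any real network): the total delivery time is roughly the fixed latency plus the payload size divided by bandwidth.

```python
def transfer_time_ms(size_megabits, bandwidth_mbps, latency_ms):
    """Rough delivery time: fixed latency plus serialization time."""
    return latency_ms + (size_megabits / bandwidth_mbps) * 1000

# A tiny request (0.1 Mb) is dominated by latency, so the
# low-latency "sports car" link wins...
print(transfer_time_ms(0.1, 100, 80))  # high bandwidth, high latency → 81.0 ms
print(transfer_time_ms(0.1, 10, 10))   # low bandwidth, low latency  → 20.0 ms

# ...while a large download (800 Mb, roughly a video) is dominated
# by bandwidth, so the high-capacity "bus" link wins.
print(transfer_time_ms(800, 100, 80))  # → 8080.0 ms
print(transfer_time_ms(800, 10, 10))   # → 80010.0 ms
```

For small, interactive traffic the latency term dominates; for bulk transfers the bandwidth term does, which is why a single advertised “speed” number tells only half the story.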

Jack Shafer, editor at large of Slate, is my favorite media pundit. Everything he does is worth reading, and his column this week is no different. It’s entitled “The Digital Slay-Ride: What’s killing newspapers is the same thing that killed the slide rule,” and in it he notes how “Hardly a day goes by, it seems, without some laid-off or bought-out journalist writing a letter of condolence to himself and his profession.” “The underlying cause of their grief,” Shafer argues, “can be traced to the same force that has destroyed other professions and industries: digital technology.” He recalls how people scoffed back in 1993 when Wired founder Louis Rossetto said that the “digital revolution is whipping through our lives like a Bengali typhoon” and destroying the old order. But no one is laughing anymore. As I noted in my Media Metrics report, digital disruption and disintermediation have completely upended the media marketplace, as well as countless others. Toward that end, Shafer actually starts a list of professions or technologies that have been “typhooned” by the digital revolution. It’s a pretty amazing (and entertaining) list for those of us old enough to remember when all these things were dominant in our society and economy. Can you think of others?

• Bank tellers
• Typewriters
• Typesetting
• Carburetors
• Vacuum tubes
• Slide rules
• Disc jockeys
• Stockbrokers
• Telephone operators
• Yellow pages
• Repair guys
• Bookbinders
• Pimps (displaced by the cell phone and the Web)
• Cassette and reel-to-reel recorders
• VCRs
• Turntables
• Video stores
• Record stores
• Bookstores
• Recording industry
• Courier/messenger services
• Travel agencies
• Print and cinematic porn
• Porn actors
• Stenographers
• Wired telcos
• Drummers
• Toll collectors (slayed by the E-ZPass)
• Book publishing (especially reference works)
• Conventional-watch makers
• “Browse” shopping
• U.S. Postal Service
• Printing-press makers
• Film cameras
• Kodak (and other film-stock makers)

Adam’s recent post on Free Press’s hysteria over media consolidation reminds me of the left’s general tendency to move the goalposts when it comes to market concentration in communications markets. Over the last quarter century, we’ve gone from a world in which there were honest-to-goodness monopolies in the telephone and cable markets to a “duopoly” where the former monopolies invaded one another’s turf, to a world of much greater competition as mobile companies entered the telephone market and satellite companies entered the video market. Yet as I noted last year, Free Press chairman Tim Wu characterizes the wireless market—with its four national carriers and several regional ones—as a “textbook oligopoly.” Indeed, one often gets the impression that the arguments of pro-regulation scholars are the same as those they would have made 20 years ago with “monopoly” replaced by “oligopoly.”

Now, I’m sympathetic to the argument that a “duopoly” is insufficient competition, and that regulators should at least take a close look at the behavior of firms that comprise one half of a duopoly. I think libertarians’ tendency to laud the broadband marketplace as a free-market nirvana is a bit misguided. However, I find the language of “oligopoly” much harder to swallow. While more competition is better, there are plenty of industries with 4-6 players that few people regard as problematic. The wireless business is extremely capital-intensive, so it’s not that surprising that there are relatively few national players.

But the tendency to shift from “monopoly” to “duopoly” to “oligopoly” while deploying essentially the same arguments does make one wonder if there’s any amount of competition that the good people at Free Press regard as sufficient. And it seems that Adam has found the answer. Having 55 major players in a market is the very definition of cutthroat competition. The notion that 55 firms can constitute a “bottleneck” that significantly curtails the flow of information to consumers is just silly. And of course the 55 figure is totally arbitrary. The media are a long tail business, and they could have included a lot more firms if they hadn’t set their cutoff at $100 million in revenues. For example, their chart misses Hubbard Broadcasting, which owns around a dozen broadcasting stations concentrated in the upper Midwest. So if you’re a Twin Cities resident who doesn’t like what Clear Channel, News Corp and the rest are producing, you can tune in to KSTP’s TV station and talk radio station for a perspective that’s not controlled by the “Big 55.”

Ultimately, I think the moral of the story is that for some advocates of media regulation, it really has nothing to do with competition. Whether there are 1, 2, 4, 8, or 55 competitors, they continue to believe that there’s too little government regulation of communication. When the number is small, it makes a convenient talking point, but they go right on making the same arguments when the number of competitors gets ridiculously large.

The intrepid Chris Soghoian has turned up an important wrinkle in Google’s services. Google pulled his AdWords ad pointing out AT&T’s campaign contributions to an Indiana politician after AT&T lodged a trademark complaint about it.

Trademark law is for preventing confusion about the source of goods and services. There is no possibility that Chris’ ad would confuse consumers in this way. He’s not providing telecommunications services, and his ad didn’t suggest it. Chris’ use of “AT&T” did not violate AT&T’s trademarks.

The subject matter of Chris’ ad is an important part of our national discourse, and something people should be able to run ads about on a platform like Google. It would be, well, evil, to kick small public policy advocates to the curb in favor of big corporations.

A company like Google is in a tough spot, of course, trying to adjudicate trademark claims at scale. But it is not acceptable to treat trademark complaints as proven just for having been submitted.

Google should take some steps to make its process more fair, such as by allowing advertisers to respond to a trademark complaint before Google acts on it. Much of the process could be automated, and it could explain to both sides what trademark rights include – and what they don’t. If after a few automated steps the two remained at loggerheads, Google employees could take a look to see whether the claim or the response was meritorious. (A trained monkey could have determined that Chris’ ad is not a trademark violation.)

In close cases, Google should leave it to the parties to resolve, while it works in the courts to generate a substantive body of law that service providers in the position of Google are not properly liable for the trademark infringements of users. (My brief pitch for common law findings of “no liability” in such situations – as opposed to statutory protections like CDA section 230 – starts at minute 22 of this video.)

Would these ideas increase Google’s cost and potential liability? Yes, some. But Google should embrace those costs as it educates its users, employees, courts, and – most important – trademark holders about what trademark does and does not do.

Kudos to Chris for his tenacity. Google, fix this.

On the Google Public Policy Blog, Richard Whitt’s response to the recent Wall Street Journal article (of now considerable infamy) fails to mention one of the primary benefits of Google’s OpenEdge caching program.  Whitt only mentions the following benefits:

By bringing YouTube videos and other content physically closer to end users, site operators can improve page load times for videos and Web pages. In addition, these solutions help broadband providers by minimizing the need to send traffic outside of their networks and reducing congestion on the Internet’s backbones.

What Whitt doesn’t say is that caching programs like Google’s have the potential to dramatically reduce the total traffic on tier one and tier two carriers (networks that peer, or exchange data without charge, with other networks).  But this traffic reduction is one of the biggest benefits Google’s program provides to the rest of the Net.
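The mechanics behind that traffic reduction can be illustrated with a hypothetical simulation (the class and URL below are made up for illustration; this is not Google's actual OpenEdge implementation): when a popular video is cached inside an ISP's network, only the first request ever crosses the tier-one backbone, and every subsequent viewer is served locally.

```python
class EdgeCache:
    """Toy model of an edge cache placed inside an ISP's network."""

    def __init__(self):
        self.store = {}
        self.upstream_fetches = 0  # requests that crossed the backbone

    def get(self, url):
        if url not in self.store:
            # Cache miss: fetch from the origin server over the backbone.
            self.upstream_fetches += 1
            self.store[url] = f"content of {url}"
        # Cache hit (or fresh fill): served from inside the ISP's network.
        return self.store[url]

cache = EdgeCache()
for _ in range(1000):  # 1,000 subscribers request the same popular video
    cache.get("youtube.example/popular-video")

print(cache.upstream_fetches)  # → 1
```

A thousand viewers generate exactly one unit of backbone traffic instead of a thousand, which is the benefit to tier-one and tier-two carriers that the post argues Whitt left out.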

How would OpenEdge do this?  Let me explain using a bit of anecdotal evidence.


The Washington Post reports today that Yahoo! has changed its data retention policy to anonymize user behavior information after 3 months, rather than its previous, much lengthier retention window of 13 months.

This move by Yahoo! is likely a response both to consumer demand for greater privacy protection and to pressure from government regulators in the US and the EU. Google and Microsoft have recently tightened their own retention policies after experiencing similar pressure.

Yahoo! and other search companies may be experiencing pressure of a different kind under the Obama administration. Eric Holder, the President-elect’s nominee for Attorney General, has stated publicly that he believes existing privacy laws may have to change to accommodate law enforcement needs:

In some cases, changes to privacy laws may be required to recognize the new technological reality we now confront.

Speaking on data-retention specifically, in the same memo Holder said that:

Certain data must be retained by ISPs for reasonable periods of time so that it can be accessible to law enforcement.

These statements suggest that Holder may be in favor of a mandatory minimum length of time for companies to retain data, rather than mandatory maximums.  This puts search engines, ISPs, and other web-based companies in the awkward position of trying to please two sets of regulators with completely opposite goals.


Gov. Paterson unveiled the New York budget yesterday, and among the 137 proposed new and increased taxes is a new tax on digital products.  An article in today’s New York Post quotes NetChoice opposing the Governor’s effort to tax music and creativity distributed through the Internet.

New York’s approach is two-fold, broadening what is taxed and who has to collect: 1) add digital music, books, songs, and movies to what can be taxed; 2) expand and assert the concept of “nexus” to cover out-of-state sellers that use an online network of affiliates.

First, New York is imposing a new tax on digital goods. It’s not, as the state claims, closing a “digital property taxation loophole” — instead, this is a new tax on New Yorkers, for a service that’s not taxable under today’s law.

What’s worse, why in the world would NY impose a new tax on something we all want to encourage right now? Digital downloads of music, movies, and books have no carbon footprint and use none of the oil consumed with a round-trip to the store. Moreover, there’s no plastic and paper packaging to create and cart off to a landfill. I’ve blogged on the environmental benefits of downloading here.

It’s also important to note that this is more than just a new tax; it will also extend the long hand of government to the long tail of online commerce. Who’ll be hurt? Small, independent artists that have websites to sell their own creative works. If a NY-based author or musician adds a link to her webpage saying ‘buy my book/music now on Amazon,’ she’d be creating a new tax collection burden for Amazon–on everything Amazon sells to anyone in NY State. Amazon’s not going to sit still for that, and it might just stop its affiliate program for NY-based suppliers, authors, and musicians.

I’ve blogged about nexus issues here.

[This represents a bit of a departure from the traditional format of my ongoing “Media Deconsolidation Series,” but you will see how it ties in…]

So, some guy from the (Un)Free Press — the activist group that wants to regulate every facet of the media and broadband universe — has created a scary looking chart about “Information Control” [seen below]. It’s based loosely on the Periodic Table of Elements, you know, to give it the aura of science and fact. In reality, it’s just another silly scare tactic that tells us very little about the true nature of our modern media marketplace.

The chart is accompanied by the typical Free Press gloom-and-doom rhetoric about the unfolding media apocalypse. “Nearly everything you see, hear and read that isn’t from a friend — whether on TV, the radio, or even on the Web — comes from a for-profit gatekeeper.”  And then comes the obligatory A.J. Liebling quote about how “Freedom of the press belongs to those who own one,” followed quickly by the typical punch line about how just a handful of companies (in this case 55 of ’em) are puppeteering all our thoughts in America today:

Combined, these 55 powerful media and telecommunications companies raked in total revenues in excess of $700 billion in 2007. Together they own over 540 TV stations, 2000 radio stations, 430 newspapers, 230 magazines, and 80 major cable channels in the United States. They provide paid TV service to approximately 52 million subscribers and broadband Internet service to over 57 million subscribers. They’re the bottlenecks through which our news, our entertainment, and our political discourse must travel. What they want to promote becomes prominent; what they suppress stays out of the mainstream. As such, these companies are the elements of information control.

Oh my God! We are all just brainwashed sheep!

Except we’re not. It amazes me how these “information control” and “media monopoly” myths keep getting widespread circulation. But the first thing to note is how the media reformistas can’t even get their story straight when it comes to how many “monopolists” are supposedly out there today. As I noted in my 2005 book, Media Myths: Making Sense of the Debate over Media Ownership, the critics seem to just pull their numbers out of a hat. Some say as few as 3 companies control everything. Others say 5 or 6. Still others say it might be a few dozen. And now this guy says it’s 55. Hey, that’s progress that even the Free Press should love!

Regardless of the number, does this really represent the totality of our modern media universe? Do those 55 companies really “own most of the 21st-century presses in America” as the “Info Control” website states? Answer: NOT. EVEN. CLOSE.  Here are the facts. [I happened to have compiled them for a PFF special report entitled Media Metrics: The True State of the Modern Media Marketplace to debunk myths just like this.]

Info Control Debunked


Man, I’d love to bring one of these mobile phone jamming devices into the movie theater with me. I’m getting tired of all the rude jackasses who don’t mute their phones, or even take calls, during the middle of movies. Of course, as this WSJ article notes, such devices violate FCC rules and would disrupt all sorts of beneficial uses. (The company is apparently trying to get them authorized for use in prisons, to keep smuggled-in phones from being used to create problems.)

Oh well. I guess I’ll just have to keep throwing popcorn at those idiots in the theater until they shut their pie holes.

Cell Tower Jammer