Articles by Adam Thierer 
Senior Fellow in Technology & Innovation at the R Street Institute in Washington, DC. Formerly a senior research fellow at the Mercatus Center at George Mason University, President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and a Fellow in Economic Policy at the Heritage Foundation.
Over the past year or so, many market-oriented critics of Google, like Scott Cleland and Richard Bennett, have criticized the company for aligning itself with Left-leaning causes and intellectuals. Lately, however, what I find interesting is how many leading leftist intellectuals and organizations have begun turning on the company and becoming far more critical of America’s greatest capitalist success story of the past decade. The reason this concerns me is that I see an unholy Right-Left alliance slowly forming that could lead to more calls for regulation not just of Google, but of the entire search marketplace. In other words, “Googlephobia” could bubble over into something truly ugly.
Consider the comments of Tim Wu and Lawrence Lessig in Jeff Rosen’s huge
New York Times Magazine article this weekend, “Google’s Gatekeepers.” Along with Yochai Benkler, Lessig and Wu form the Holy Trinity of the Digital Left; they set the intellectual agenda for the Left on information technology policy issues. Rosen quotes both Wu and Lessig in his piece going negative on Google. Wu tells Rosen that “To love Google, you have to be a little bit of a monarchist, you have to have faith in the way people traditionally felt about the king.” Moreover:
Continue reading →
Last week I discussed Barbara Esbin’s new PFF paper about the FCC’s absurd investigation into how the cable industry is transitioning analog customers over to digital. This is an essential transition if the cable industry is going to free up bandwidth to compete against telco-provided fiber offerings in the future. The faster the cable industry can migrate its old analog TV customers over to the digital platform, the more bandwidth it can re-deploy for high-speed Net access and services. Mark Cuban helps put things in perspective:
1. The only thing that cable companies (and satellite, for that matter) have to sell is bandwidth and the applications they can run on that bandwidth. More bandwidth means more digital everything.
2. Basic cable subscribers who get, say, 40 analog channels are consuming 40 x 38.6 Mbps, or 1.54 Gbps. Let that sink in. 1.54 Gbps of bandwidth. Compare that to how fast your Internet access is. That’s more bandwidth than your entire neighborhood consumes online, by a lot.
That’s also the equivalent of 500 standard-def digital channels. If you convert that to revenue per bit for cable companies, or cost per bit for basic cable consumers, the basic cable customers are getting the best deal in town. By a long shot.
Digital cable customers, not so much. Digital customers are paying multiples of analog customers for bandwidth. In reality, analog customers are getting an amazing deal, and the cable companies have been hesitant to convert them only because of the potential FCC backlash.
I’m as cynical as the next guy when it comes to cable rates and motivations, but the reality is that the longer analog remains, the fewer opportunities there are to leverage the freed-up bandwidth to create next-generation bandwidth-hog applications. Will the cable companies charge us a lot for that bandwidth? Probably. But when we start to see applications built on top of 250 Mbps and more, it will have far more value to society than watching USA Network on your old analog TV. And Net Neutrality? Well, if everyone had that 1.54 Gbps available to them, net neutrality would be a non-issue. We wouldn’t be arguing about access or pre-emption; we would be arguing about quality of service.
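Cuban’s arithmetic is easy to sanity-check. Here’s a quick sketch; the per-slot and per-channel bitrates are rough industry approximations I’m supplying for illustration, not figures from his post:

```python
# Rough check of Cuban's numbers (all figures are approximations):
# one 6 MHz analog channel slot, re-used for digital transmission,
# carries about 38.6 Mbps, and a standard-def digital channel needs
# roughly 3 Mbps.
MBPS_PER_CHANNEL_SLOT = 38.6   # approx. payload of one 6 MHz slot
SD_CHANNEL_MBPS = 3.0          # approx. bitrate of one SD digital channel

analog_channels = 40
total_mbps = analog_channels * MBPS_PER_CHANNEL_SLOT
total_gbps = total_mbps / 1000
sd_equivalent = total_mbps / SD_CHANNEL_MBPS

print(f"{total_gbps:.2f} Gbps")            # ~1.54 Gbps, as Cuban says
print(f"{sd_equivalent:.0f} SD channels")  # ~515, close to his "500"
```

In other words, 40 legacy analog slots tie up roughly the capacity of 500 digital SD channels, which is the bandwidth the digital transition would free up.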
Once again we are reminded that all regulations have opportunity costs and in this case the FCC’s actions could cost consumers the loss (or at least delay) of higher-speed broadband offerings in the near-term.
[Hat tip to Richard Bennett for the recommendation here.] I haven’t had a chance to read through the entire thing yet, but this new study by Nemertes Research seems worthy of attention: “Internet Interrupted: Why Architectural Limitations Will Fracture the ‘Net.” From the executive summary:
In 2007, Nemertes Research conducted the first-ever study to independently model Internet and IP infrastructure (which we call “capacity”) and current and projected traffic (which we call “demand”) with the goal of evaluating how each changes over time. In that study, we concluded that if current trends were to continue, demand would outstrip capacity before 2012. Specifically, access bandwidth limitations will throttle back innovation, as users become increasingly frustrated with their ability to run sophisticated applications over primitive access infrastructure. This year, we revisit our original study, update the data and our model, and extend the study to look beyond physical bandwidth issues to assess the impact of potential logical constraints. Our conclusion? The situation is worse than originally thought!
We continue to project that capacity in the core, and connectivity and fiber layers will outpace all conceivable demand for the near future. However, demand will exceed access line capacity within the next two to four years. Even factoring in the potential impact of a global economic recession on both demand (users purchasing fewer Internet-attached devices and services) and capacity (providers slowing their investment in infrastructure) changes the impact by as little as a year (either delaying or accelerating, depending on which is assumed to have the greater effect).
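The kind of crossover projection Nemertes describes can be sketched in a few lines: compound traffic demand and access capacity forward at different growth rates and find the year demand first exceeds capacity. All of the numbers below are made up for illustration; they are not from the Nemertes model.

```python
# Illustrative demand-vs-capacity crossover projection. Growth rates and
# starting values are hypothetical, chosen only to show the mechanism.
def crossover_year(start_year, demand, capacity,
                   demand_growth, capacity_growth, horizon=15):
    """Return the first year in which demand exceeds capacity."""
    for year in range(start_year, start_year + horizon):
        if demand > capacity:
            return year
        demand *= 1 + demand_growth      # compound demand growth
        capacity *= 1 + capacity_growth  # compound capacity growth
    return None  # no crossover within the horizon

# Hypothetical: demand grows 50%/yr, access capacity only 20%/yr.
year = crossover_year(2008, demand=55.0, capacity=100.0,
                      demand_growth=0.50, capacity_growth=0.20)
print(year)  # 2011 under these made-up assumptions
```

The point of such a model is that even large changes in the starting values shift the crossover year only slightly when the growth rates diverge this much, which is why Nemertes finds a recession moves its estimate by as little as a year.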
This is a subject that my colleague Bret Swanson has written a great deal about, so I’m sure he’ll be commenting on this study at some point. Even if you don’t agree with the conclusion Nemertes reaches, as Richard Bennett notes, the report is well worth reading just for the background information on public and private peering, content delivery networks, and overlay networks.
Over the past year, I have been monitoring a very interesting trend with important ramifications for the future of Internet policy. State Attorneys General (AGs) — often in league with the National Center for Missing and Exploited Children (NCMEC) — have been striking a variety of “voluntary” agreements with various Internet companies that deal with child safety concerns or other online issues. These agreements require the companies involved to take various steps to alter site architecture and functionality, commit to stop certain practices, or take steps to block certain users (ex: predators; escort services) or types of content (ex: child porn; online “discrimination”) altogether.
To begin, let me be very clear about one thing: Some of these activities or types of content warrant a law enforcement response. That is certainly the case with child pornography or predation, for example. However, as I will note below, there is a legitimate question about whether state officials and a non-profit private organization should be crafting legal or regulatory policies to address such concerns for a global medium like the Internet. Regardless, these agreements are creating a new layer of Internet regulation (almost extra-legal in character) that is worthy of exploration.
First, let me itemize some of these recent “voluntary” agreements between Internet companies and the AGs and/or NCMEC:
Continue reading →
In her latest column, Media Post media market guru Diane Mermigas wonders how long it will be before we see a traditional over-the-air (OTA) broadcast TV network (like ABC, NBC, CBS, or Fox) dump their old broadcast business altogether and just move all their properties to cable and satellite TV. And, in response to Mermigas, Cory Bergman of Lost Remote argues, as I did last week, “the real future of TV is not linear cable, but non-linear video delivered seamlessly via IP to multiple devices, including your TV set. But mass adoption of this approach is still several years away.”
Bergman is right. It would be foolish to think any traditional network is going to rely exclusively on IP-based distribution any time soon; they see it as more of a complement (or another product window). But Mermigas may be on to something in predicting that broadcast networks may soon be looking to get out of the OTA television business altogether and essentially become “a glorified general entertainment cable network.”
The strain on their dysfunctional paradigm is emanating from a devastating recession and the ongoing digital revolution. Both are permanently altering the rules of play for the networks. A case can be made for at least one of the Big 4 broadcast networks emerging as a glorified general entertainment cable network within the next several years. The economic advantages: more steady ad revenues and consistent subscriber fees as content is distributed cross-platform.
It would be a bold move that a free-spirited company such as News Corp. might already be contemplating for its Fox Broadcast TV Network, or NBC Universal for its peacock network. Industry analysts increasingly wonder how an independent CBS can prattle on under the crumbling old rules. In a world of exploding access and choices, the prime-time ratings (even with Live plus 3 configurations) spell diminishing returns. For Disney, ABC’s general entertainment status is on par with ESPN in sports; the new multi-platform model is in place except for formally moving the ABC TV Network to the cable side of the ledger.
Continue reading →
When people ask me why I do what I do for a living — and, more specifically, why I focus all my attention on digital media and technology policy — I often respond by showing them the new gadgets or software I am playing with at any given time. I just love digital technology. I am swimming in a sea of digital gadgets, consumer electronics, online applications, computing software, video games, and all sorts of cyber-stuff.
Anyway, even though this is a technology
policy blog, I sometimes highlight new digital toys or applications that have changed my life for the better. As the year winds down, therefore, I thought I would share with you five technologies that improved my life and productivity in 2008. I’d also love to hear from all of you about the technologies that you fell in love with this year in case I might have missed them. Here’s my list:
#1) Naturally Speaking 10:
Thanks to Nate Anderson’s outstanding review over at Ars Technica, I finally made the plunge and bought Dragon Naturally Speaking 10 earlier this month. Wow, what a life-changer. I had played around with an earlier version of this market-leading speech recognition technology and found it somewhat clunky and unreliable. But Version 10 has ironed out almost all the old problems and become an incredibly sophisticated piece of software in the process. I love the way I can use simple voice commands to navigate menus in Microsoft Word and in Firefox. Perhaps best of all, I can dictate random rants into a pocket recording device and then upload them to Naturally Speaking (via a USB connection) and have them instantly transcribed. I’m even composing blog entries like this one using it! The only problem is inserting HTML code; that’s still a hassle. Also, I find that switching from one input device to another definitely affects the quality of the transcription. Once you “train” Naturally Speaking using one device, it makes sense to stick with it. It’s not just the quality of the microphone; it’s also the proximity to your mouth that makes a difference. Regardless, this is one great product and, best of all, it should help save my rapidly-aging hands from becoming prematurely arthritic! All those years of video games and keyboards have taken their toll.
Continue reading →
So, while the rest of you are still watching your Saturday morning cartoons this weekend, I’ll be working hard to defend the First Amendment at the Federalist Society’s 2008 “National Lawyers Convention.” I am speaking on a panel there on Saturday morning entitled “The FCC and the First Amendment” and will be going up against Federal Communications Commission Chairman Kevin Martin and Gregory Garre, the Solicitor General of the United States. The primary focus of our discussion will be the FCC v. Fox case that was recently heard by the Supreme Court. It should be an interesting conversation.
It looks like registration for the event is now closed, but I’ll try to blog about it afterward. I’m not sure if they are taping it or not, but if I find a video or transcript I’ll post it later.
“The FCC and the First Amendment”
Saturday, November 22nd / 10:45 a.m. – 12:15 p.m. / East Room
- Mr. Miguel A. Estrada, Gibson, Dunn & Crutcher, and Former Assistant to the United States Solicitor General
- Hon. Gregory G. Garre, United States Solicitor General
- Hon. Kevin J. Martin, Federal Communications Commission
- Mr. Adam D. Thierer, The Progress and Freedom Foundation
- Moderator: Hon. Brett M. Kavanaugh, United States Court of Appeals for the District of Columbia Circuit
Is there any other issue under the tech policy sun today that creates stranger intellectual bedfellows than collective licensing of online music? After all, as I noted here before, on the pro-collective licensing side we find mortal enemies EFF and RIAA (at least Warner) in league. And on the anti-collective licensing side, we have Mike Masnick and Andrew Orlowski. If you locked those two guys in a room and tossed out any other copyright topic, they’d probably end up killing each other with their bare hands. But somehow they agree on this one (albeit for somewhat different reasons).
Anyway, I continue to have mixed, but generally skeptical, feelings about online collective licensing. There are countless thorny fairness issues on both the artist and consumer side of things. What’s the pay-in rate? How is it set? Who all pays in? Who gets paid out, how much, and by what formula? And God only knows how you deal with those parties (whether they be ISPs, consumers, or even artists) who don’t want to be a part of the scheme.
For these reasons, I’ve always felt a voluntary collective licensing scheme for the Internet is challenging, if not impossible. It would have to be compulsory to be a truly blanket license that covered all music, all users, and all platforms. I’m not too fond of that approach, but I think that’s where we are likely heading in the copyright wars. After all, that’s how it has been resolved in many other contexts historically. But that doesn’t give me any comfort, since those other systems have been a mess in practice. This 2004 Cato study by Robert Merges provides some details and makes the case against applying the compulsory licensing approach to the online music marketplace.
I need not remind anyone here about FCC Chairman Kevin Martin’s ongoing “war on cable.” Even if you hate the cable industry or capitalism in general, there’s just no way I can see how anyone who believes in the rule of law and good government can support Martin’s incessant abuse of power in his Moby Dick-like crusade against the cable industry. A crusade, incidentally, which happens to be motivated by Chairman Ahab’s desire to control speech on cable television, as I’ll note below.
Anyway, the latest chapter in this miserable saga of government-gone-mad is Martin’s recent effort to begin a far-ranging data gathering effort concerning cable prices and analog-to-digital channel movements under the guise of individual complaint enforcement. In a new paper entitled “Der Undue Prozess at the FCC: Part Deux,” my PFF colleague Barbara Esbin shows, once again, how the FCC’s regular processes and procedures are being perverted by Martin to achieve ends not within the agency’s delegated authority. And the results, in this case, will be profoundly anti-consumer.
Esbin documents the four flaws in the FCC’s investigation as follows:
Continue reading →