Articles by Adam Thierer

Senior Fellow in Technology & Innovation at the R Street Institute in Washington, DC. Formerly a senior research fellow at the Mercatus Center at George Mason University, President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and a Fellow in Economic Policy at the Heritage Foundation.


In my nearly 17 years of public policy work, I have never felt so vindicated about something as I did this weekend when I read Dan P. Lee’s Philadelphia magazine feature on “Whiffing on Wi-Fi.” It is a spectacularly well-written piece about the spectacular failure of Philadelphia’s short-lived experiment with municipally subsidized wi-fi, which was called Wireless Philadelphia. You see, back in April 2005, I wrote a white paper entitled “Risky Business: Philadelphia’s Plan for Providing Wi-Fi Service,” and it began with the following question: “Should taxpayers finance government entry into an increasingly competitive, but technologically volatile, business market?” In the report, I highlighted the significant risks involved in light of how rapidly broadband technology and the marketplace were evolving. Moreover, I pointed to the dismal track record of previous municipal experiments in this field, which almost without exception ended in failure. I went on to argue:

Keeping these facts in mind, it hardly makes sense for municipal governments to assume the significant risks involved in becoming a player in the broadband marketplace. Even an investment in wi-fi along the lines of what Philadelphia is proposing, is a risky roll of the dice. […] the nagging “problem” of technological change is especially acute for municipal entities operating in a dynamic marketplace like broadband. Their unwillingness or inability to adapt to technological change could leave their communities with rapidly outmoded networks, and leave taxpayers footing the bill.

I got a stunning amount of hate mail and cranky calls from people after I released this paper. Everyone accused me of being a sock puppet for incumbent broadband providers or of just not understanding the importance of the endeavor. But as I told everyone at the time, I wasn’t out to block Philadelphia from conducting this experiment; I just didn’t think it had any chance of being successful. And, again, I tried to point out what a shame it would be if taxpayers were somehow stuck picking up the tab, or if other providers decided not to invest in the market because they were “crowded out” by government investment in the field.

But even I could have never imagined how quickly the whole house of cards would come crumbling down in Philadelphia.  It really was an astonishing meltdown.  Dan Lee’s article makes that abundantly clear:

Continue reading →

“Hasn’t Steve Jobs learned anything in the last 30 years?” asks Farhad Manjoo of Slate in an interesting piece about “The Cell Phone Wars” currently raging between Apple’s iPhone and Google’s new Android-based G1 phone. Manjoo wonders whether Steve Jobs remembers what happened the last time he closed up a platform: “because Apple closed its platform, it was IBM, Dell, HP, and especially Microsoft that reaped the benefits of Apple’s innovations.” Thus, if Jobs didn’t learn his lesson then, will he now with the iPhone? Manjoo continues:

Well, maybe he has—and maybe he’s betting that these days, “openness” is overrated. For one thing, an open platform is much more technically complex than a closed one. Your Windows computer crashes more often than your Mac computer because—among many other reasons—Windows has to accommodate a wider variety of hardware. Dell’s machines use different hard drives and graphics cards and memory chips than Gateway’s, and they’re both different from Lenovo’s. The Mac OS, meanwhile, has to work on just a small range of Apple’s rigorously tested internal components—which is part of the reason it can run so smoothly. And why is your PC glutted with viruses and spyware? The same openness that makes a platform attractive to legitimate developers makes it a target for illegitimate ones.

I discussed these issues in greater detail in my essay on “Apple, Openness, and the Zittrain Thesis” and in a follow-up essay about how the Apple iPhone 2.0 was cracked in mere hours. My point in these and other essays is that the whole “open vs. closed” dichotomy is greatly overplayed. Each has its benefits and drawbacks, but there is no reason we need to make a false choice between the two for the sake of “the future of the Net” or anything like that.

In fact, the hybrid world we live in — full of a wide variety of open and proprietary platforms, networks, and solutions — presents us with the best of all worlds. As I argued in my original review of Jonathan Zittrain’s book, “Hybrid solutions often make a great deal of sense. They offer creative opportunities within certain confines in an attempt to balance openness and stability.”  It’s a sign of great progress that we now have different open vs. closed models that appeal to different types of users.  It’s a false choice to imagine that we need to choose between these various models.

Continue reading →

This week, I have been up at Harvard University participating in another meeting of the Internet Safety Technical Task Force (ISTTF), of which I am a member. The ISTTF was organized earlier this year pursuant to an agreement between 49 state attorneys general (AGs) and social networking giant MySpace.com. A group of experts from academia, non-profit organizations, and industry were appointed to the Task Force, which is charged with evaluating the market for online child safety tools and methods and issuing a report on the matter to the AGs at the end of this year.  ISTTF members have been meeting privately and publicly in both Cambridge, MA and Washington, D.C. The Task Force has been very ably chaired by John Palfrey, co-director of Harvard’s Berkman Center for Internet & Society.

Although the ISTTF is looking at a wide variety of tools and methods associated with online child protection (e.g., filters, monitoring tools, educational campaigns), many of the AGs who crafted the agreement with MySpace that led to the Task Force’s formation have made it clear that they are most interested in having the ISTTF evaluate age verification / online identity verification technologies. In fact, at the start of this week’s session at Harvard Law School, AGs Martha Coakley of Massachusetts and Richard Blumenthal of Connecticut both spoke and made it abundantly clear they expect the Task Force to develop age- and identity-verification tools for social networking sites (SNS). AG Blumenthal said we need to deal with “the dangers of anonymity” and repeated his standard line about online age verification: “If we can put a man on the moon, we can make the Internet safe.” [Of course, putting a man on the moon took hundreds of billions of dollars and a decade to accomplish, but never mind that fact! Moreover, one could also argue that if we can put a man on the moon we can cure hunger, AIDS, and the common cold, but some things are obviously easier said than done. Finally, putting a man on the moon didn’t require all Americans or their kids to give up their anonymity or privacy rights in order to accomplish the feat!]

On many occasions here before, I have outlined various questions and reservations about proposals to mandate online age verification.  Last year, I also published a lengthy white paper on the issue and hosted a lively debate on Capitol Hill [transcript here] about this.  I also have discussed age verification in my book on parental controls and online child safety. [Braden Cox also talked about his experiences up at Harvard this week here, and CNet’s Chris Soghoian had a brutal assessment of this week’s proposals on his “Surveillance State” blog.]

In this essay, I will discuss the new fault lines in the debate over online age verification and outline where I think we are heading next on this front.  I will argue:

  • There is now widespread understanding that it is extraordinarily difficult to verify the ages and identities of minors online using the methods we typically use to verify adults. Because of this, age verification proponents are increasingly proposing two alternative models of verifying kids before they go online or visit SNS…
  • First, for those who continue to believe that we must do whatever we can to verify kids themselves, schools and school records are increasingly being viewed as the primary mechanism to facilitate that. This raises two serious questions: Do we want schools to serve as DMVs for our children? And, do we want more school records or information about our kids being accessed or put online?
  • Second, for those who are uncomfortable with the idea of verifying kids or using schools, or school records, to accomplish that task, parental permission-based forms of authentication are becoming the preferred regulatory approach. Under this scheme, which might build upon the regulatory model found in the Children’s Online Privacy Protection Act of 1998 (COPPA), parents or guardians would be verified somehow and then would vouch for their children before they were allowed on a SNS, however defined.  But how do we establish a clear link between parents and kids?  And will parents be willing to surrender a great deal more information (about themselves and their kids) before their kids can go online? And, is it sensible to use a law that was meant to protect the privacy and personal information of children to potentially gather a great deal more information about them, and their parents?
  • It remains very unclear how either of those two verification methods would make children safer online. Indeed, they could actually make kids less safe by compromising their personal information and creating a false sense of security online for them and their parents.
  • It is highly unlikely the Internet Safety Technical Task Force will be able to reach consensus on this complicated, controversial issue. A small camp will likely flock to the sort of proposals mentioned above. Another, larger camp (including me) will favor education-based approaches to child safety, as well as increased reliance on other parental empowerment tools and strategies, industry self-regulatory efforts, social norms, and better intervention strategies for troubled youth. But the age verification debate will go on and, as was the case over the past two years, the legal battleground will be state capitals across America, with AGs likely pushing for age verification mandates regardless of what the Task Force concludes.

Continue reading if you are interested in the details.

Continue reading →

Boynton Beach, Florida’s experiment with municipal wi-fi has ended.  [Add it to the list of recent failures]. According to the South Florida Sun-Sentinel:

There’s a roadblock in Boynton Beach‘s information superhighway. The city’s Community Redevelopment Agency decided this month it has no more money for free wireless Internet service in its district.  Boynton Beach was the first city in Palm Beach County to offer Wi-Fi three years ago. It operated 11 “hot spots,” or access points, paying $44,000 annually for vendors to keep the system running. But the CRA dropped vendors who failed to meet their contracts. Other companies wanted to sell the Community Redevelopment Agency new equipment, but in a tough budget year, offering free wireless was no longer viable, said the agency’s executive director, Lisa Bright.  […]  “There is clearly no way for it to be a revenue generator at this time,” Bright said. “It’s premature for us to go to the next level.”

Whenever I read one of these articles about small-town or mid-sized-town wi-fi experiments failing so miserably, I have to admit that I am a bit surprised. After all, many muni wi-fi supporters have argued that it is precisely in those communities where government support is most necessary and will be most likely to fill in gaps left by sporadic or delayed private broadband deployment. Frankly, I always thought this was the best argument for muni wi-fi, and it’s why I made sure never to go on record as opposing all government efforts, even though I am obviously a skeptic and don’t like the idea of wagering taxpayer money on such risky ventures. (By contrast, I could never see the reason for government subsidies of wi-fi ventures in major metro areas with existing private broadband operators, like Philly and Chicago.)

But the fact that many small-town or mid-sized-town wi-fi experiments are failing is really interesting, because it must tell us something about either (a) the viability of the technology or (b) the demand for such service. Now, many municipalization believers will just say that clearly (a) is the case and argue that we need only wait for Wi-Max solutions to come online and then all will be fine. It certainly may be the case that Wi-Max will help boost coverage in low-density areas, but is that really the end of the story? What about demand? What really makes me mad when I read most of these stories about failed experiments is that they rarely give us any solid numbers about how many people utilized the services. To the extent any journalists or analysts are out there contemplating a story or study on this issue, I beg you to dig into the demand side of the equation and try to find out how much of the current muni wi-fi failure is due to technology and how much is due to demand, or lack thereof. Of course, government mismanagement could also be a culprit. But I suspect there is far less demand for these services than supporters have estimated.

Sorry if it seems like I am beating a dead horse here, but the folks at City Journal asked me to pen a review of Jonathan Zittrain’s new book, The Future of the Internet and How to Stop It. Faithful readers here will no doubt remember that I have already penned a review of the book and several follow-up essays. (Part 1, 2, 3, 4). I swear I am not picking on Jonathan, but his book is probably the most important technology policy book of the year–Nick Carr’s Big Switch would be a close second–and deserves attention. Specifically, I think it deserves attention because I believe that Jonathan’s provocative thesis is wildly out of touch with reality. As I state in the City Journal review of his book:

[C]ontrary to what Zittrain would have us believe, reports of the Internet’s death have been greatly exaggerated. […] Not only is the Net not dying, but there are signs that digital generativity and online openness are thriving as never before. […] Essentially, Zittrain creates a false choice regarding the digital future we face. He doesn’t seem to believe that a hybrid future is possible or desirable. In reality, however, we can have a world full of some tethered appliances or even semi-closed networks that also includes generative gadgets and open networks. After all, millions of us love our iPhones and TiVos, but we also take full advantage of the countless other open networks and devices at our disposal. […]

Continue reading →

In late June, the Federal Communications Commission (FCC) opened a Notice of Inquiry and Notice of Proposed Rulemaking regarding “Sponsorship Identification Rules and Embedded Advertising” (MB Docket No. 08-90). Basically, it’s an inquiry into the product placement and embedded advertising practices on television. Some at the FCC want such practices regulated.

PFF filed comments in the matter today. Ken Ferree and I argue that FCC regulation of such advertising practices would be unnecessary and unwise. “If the Notice demonstrates anything,” we argue, “it is that a majority of the current Commissioners live in a world wholly alien and unfamiliar to most Americans; indeed, a world long forgotten if it ever existed.” We continue:

The Notice alludes menacingly to new, “subtle and sophisticated means” of commercial messaging, to “sneaky commercials” (quoting a senescent order topped with nearly fifty years of dust) and to “vindicat[ing]” the policy goals of the Communications Act – as if the FCC must exact vengeance on those who would try – horror of horrors – to sell goods and services to the American public. The melodramatic tone of the Notice is intended, of course, to set the stage for the Commission’s latest effort to micromanage the free marketplace of ideas, i.e., the media. Only by portraying “embedded” advertising as something new and nefarious can the Commission hope to justify a new portfolio of intrusive and burdensome speech regulations in the name of preserving the “public’s right to know who is paying to air commercials or other program matter on broadcast television and radio and cable.”

And, as we make clear in the filing, we don’t buy the argument that the public are nothing more than mindless sheep:

Continue reading →

And so the series continues.  The Washington Post reports that the Department of Justice has just released “a scathing report” finding that over a 5-year period the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) “lost dozens of weapons and hundreds of laptops that contained sensitive information.” The DOJ’s Inspector General Glenn A. Fine found that 418 laptop computers and 76 weapons were lost. According to the report:

Yesterday’s report showed that ATF, a much smaller agency than the FBI, had lost proportionately many more firearms and laptops. “It is especially troubling that ATF’s rate of loss for weapons was nearly double that of the FBI and [Drug Enforcement Administration], and that ATF did not even know whether most of its lost, stolen, or missing laptop computers contained sensitive or classified information,” Fine wrote. […] Many of the missing laptops contained sensitive or classified material, according to the report. ATF began installing encryption software only in May 2007. ATF did not know what information was on 398 of the 418 lost or stolen laptops. The report called the lack of such knowledge a “significant deficiency.” Of the 20 missing laptops for which information was available, ATF indicated that seven — 35 percent — held sensitive information. One missing laptop, for example, held “300-500 names with dates of birth and Social Security numbers of targets of criminal investigations, including their bank records with financial transactions.” Another held “employee evaluations, including Social Security numbers and other [personal information].” Neither laptop was encrypted.

The findings regarding lost weapons were equally troubling, if not a bit humorous:

Continue reading →

Tech-related Lolcats

September 15, 2008

I love the lolcats. (Or perhaps I should say, Iz Luvz Da Lolcats.) Here are a couple of my favorite tech-related cats from recent months:


I posted an essay last month about some possible non-regulatory solutions to the problem of porn on planes, which I predicted might develop once airlines started rolling out in-flight Internet access. Some respondents to that essay argued this was likely a non-problem because few people would actually view porn in public. Unfortunately, a few incidents have apparently already created controversy.

Frankly, I am shocked that legislation hasn’t already been floated on this issue, but I am sure that someone in Congress will be firing off something soon. Again, like I said in that previous essay, before things get ugly and bills start flying up on the Hill, the airlines need to think about crafting some constructive solutions to this problem. We don’t want the FCC to become the censors of the sky, as some lawmakers will no doubt propose eventually.