Back in June, Adam Thierer and I denounced (PDF) Kevin Martin’s plans to create a broadband utility that would provide censored (and very slow) broadband for free to all Americans.  The WSJ reports that this scheme is now at the top of Martin’s December agenda:

The proposal to allow a no-smut, free wireless Internet service is part of a proposal to auction off a chunk of airwaves. The winning bidder would be required to set aside a quarter of the airwaves for a free Internet service. The winner could establish a paid service that would have a fast wireless Internet connection. The free service could be slower and would be required to filter out pornography and other material not suitable for children. The FCC’s proposal mirrors a plan offered by M2Z Networks Inc., a start-up backed by Kleiner Perkins Caufield & Byers partner John Doerr.

Adam’s August follow-up piece is also well worth reading.  

One could speculate as to how big an impact this service would really have.  Having just spent two weeks “wardriving” around Paris, Abu Dhabi and Dubai (looking for open wi-fi hotspots to try to get Internet access on my otherwise non-functional smart phone), I could certainly imagine scenarios in which some people might well use even a slow wireless service, at least as a supplement to another provider, if their devices supported it.  But however useful the service might be to some people, and whether or not any company would actually want to build such a system if it has to give the service away, I think it’s a safe bet that if this plan is actually implemented, it will represent a victory for government censorship over content some people don’t like.

If this idea is still alive and kicking when the Obama administration has security escort Martin out of FCC headquarters in January–to hearty applause from nearly all quarters in Washington, no doubt–it will be interesting to see which impulse prevails on the Left, both within the new Administration and in the policy community.  Will the defenders of free expression triumph over those who see ensuring free broadband as a social justice issue?  Or will those on the Left who usually join us in opposing censorship simply remain silent as the government extends the architecture of censoring the “public airwaves” onto the Net (where the underlying rationale of traditional broadcast regulation–that parents are powerless–does not apply)?  

Hope springs eternal.

Last week I discussed Barbara Esbin’s new PFF paper about the FCC’s absurd investigation into how the cable industry is transitioning analog customers over to digital. This is an essential transition if the cable industry is going to free up bandwidth to compete against telco-provided fiber offerings in the future. The faster the cable industry can migrate its old analog TV customers over to the digital platform, the more bandwidth it can re-deploy for high-speed Net access and services. Mark Cuban helps put things in perspective:

1. The only thing that cable companies, and satellite for that matter, have to sell is bandwidth and the applications they can run on that bandwidth. More bandwidth means more digital everything.

2. For basic cable subscribers that get, say, 40 analog channels, they are consuming 40 x 38.6 Mbps, or 1.54 Gbps. Let that sink in. 1.54 Gbps of bandwidth. Compare that to how fast your Internet access is. That’s more bandwidth than your entire neighborhood consumes online, by a lot. That’s also the equivalent of 500 standard-def digital channels. If you convert that to revenue per bit for cable companies, or cost per bit for basic cable consumers, the basic cable customers are getting the best deal in town. By a long shot. Digital cable customers, not so much. Digital customers are paying multiples of what analog customers pay for bandwidth. In reality, analog customers are getting an amazing deal, and the cable companies have been hesitant to convert them only because of the potential FCC backlash. I’m as cynical as the next guy when it comes to cable rates and motivations, but the reality is that the longer analog remains, the fewer opportunities there are to leverage the freed-up bandwidth to create next-generation bandwidth-hog applications. Will the cable companies charge us a lot for that bandwidth? Probably. But when we start to see applications built on top of 250 Mbps and more, it will have far more value to society than watching USA Network on your old analog TV. And net neutrality? Well, if everyone had that 1.54 Gbps available to them, net neutrality would be a non-issue. We wouldn’t be arguing about access or pre-emption; we would be arguing about quality of service.
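
Cuban’s arithmetic is easy to verify. The per-channel figure is his (roughly the digital payload of one 6 MHz cable channel at 256-QAM); treating ~3 Mbps as a typical standard-def channel rate is my assumption, not his:

```python
# Sanity check of Cuban's back-of-the-envelope numbers.
ANALOG_CHANNELS = 40
MBPS_PER_CHANNEL = 38.6   # digital payload of one 6 MHz slot, Cuban's figure
SD_CHANNEL_MBPS = 3.0     # assumed typical standard-def rate

total_mbps = ANALOG_CHANNELS * MBPS_PER_CHANNEL
print(f"Spectrum tied up by the analog tier: {total_mbps / 1000:.2f} Gbps")   # 1.54 Gbps
print(f"Equivalent standard-def channels:    {total_mbps / SD_CHANNEL_MBPS:.0f}")  # 515
```

The result lands right on his 1.54 Gbps, and the ~515 standard-def equivalents are close enough to his round “500.”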

Once again we are reminded that all regulations have opportunity costs and in this case the FCC’s actions could cost consumers the loss (or at least delay) of higher-speed broadband offerings in the near-term.

[Hat tip to Richard Bennett for the recommendation here.] I haven’t had a chance to read through the entire thing yet, but this new study by Nemertes Research seems worthy of attention: “Internet Interrupted: Why Architectural Limitations Will Fracture the ‘Net.” From the exec sum:

In 2007, Nemertes Research conducted the first-ever study to independently model Internet and IP infrastructure (which we call “capacity”) and current and projected traffic (which we call “demand”) with the goal of evaluating how each changes over time. In that study, we concluded that if current trends were to continue, demand would outstrip capacity before 2012. Specifically, access bandwidth limitations will throttle back innovation, as users become increasingly frustrated with their ability to run sophisticated applications over primitive access infrastructure. This year, we revisit our original study, update the data and our model, and extend the study to look beyond physical bandwidth issues to assess the impact of potential logical constraints. Our conclusion? The situation is worse than originally thought! We continue to project that capacity in the core, and connectivity and fiber layers will outpace all conceivable demand for the near future. However, demand will exceed access line capacity within the next two to four years. Even factoring in the potential impact of a global economic recession on both demand (users purchasing fewer Internet-attached devices and services) and capacity (providers slowing their investment in infrastructure) changes the impact by as little as a year (either delaying or accelerating, depending on which is assumed to have the greater effect).
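
The shape of the argument is easy to see in a toy model: project demand and access capacity forward at different compound growth rates and find the crossover year. The starting values and growth rates below are illustrative assumptions for the sketch, not figures from the Nemertes study:

```python
# Toy capacity-vs-demand projection in the spirit of the Nemertes model.
# All numbers are illustrative assumptions, NOT the study's data.

def crossover_year(start_year, demand, capacity,
                   demand_growth, capacity_growth, horizon=15):
    """Return the first year in which projected demand exceeds
    projected access capacity, or None within the horizon."""
    for year in range(start_year, start_year + horizon):
        if demand > capacity:
            return year
        demand *= 1 + demand_growth
        capacity *= 1 + capacity_growth
    return None

# Demand growing ~50%/yr against access capacity growing ~25%/yr,
# starting from 2x headroom (assumed numbers).
print(crossover_year(2008, demand=1.0, capacity=2.0,
                     demand_growth=0.50, capacity_growth=0.25))  # 2012
```

Even generous headroom erodes quickly once the demand growth rate outpaces the capacity growth rate, which is the study’s core claim; the interesting empirical questions are all about the real values of those rates.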

This is a subject that my colleague Bret Swanson has written a great deal about, so I’m sure he’ll be commenting on this study at some point.  Even if you don’t agree with the conclusion Nemertes reaches, as Richard Bennett notes, the report is well worth reading just for the background information on public and private peering, content delivery networks, and overlay networks.

Over the past year, I have been monitoring a very interesting trend with important ramifications for the future of Internet policy. State Attorneys General (AGs) — often in league with the National Center for Missing and Exploited Children (NCMEC) — have been striking a variety of “voluntary” agreements with various Internet companies that deal with child safety concerns or other online issues. These agreements require the companies involved to take various steps to alter site architecture and functionality, commit to stop certain practices, or take steps to block certain users (ex: predators; escort services) or types of content (ex: child porn; online “discrimination”) altogether.

To begin, let me be very clear about one thing: Some of these activities or types of content warrant a law enforcement response. That is certainly the case with child pornography or predation, for example. However, as I note below, there is a legitimate question about whether state officials and a non-profit private organization should be crafting legal or regulatory policies to address such concerns for a global medium like the Internet. Regardless, these agreements are creating a new layer of Internet regulation (almost extra-legal in character) that is worthy of exploration.

First, let me itemize some of these recent “voluntary” agreements between Internet companies and the AGs and/or NCMEC:


Is there any other issue under the tech policy sun today that creates stranger intellectual bedfellows than collective licensing of online music? After all, as I noted here before, on the pro-collective licensing side we find mortal enemies EFF and RIAA (at least Warner) in league. And on the anti-collective licensing side, we have Mike Masnick and Andrew Orlowski. If you locked those two guys in a room and tossed out any other copyright topic, they’d probably end up killing each other with their bare hands. But somehow they agree on this one (albeit for somewhat different reasons).

Anyway, I continue to have mixed, but generally skeptical, feelings about online collective licensing. There are countless thorny fairness issues on both the artist and consumer side of things. What’s the pay-in rate? How is it set? Who all pays in? Who gets paid out, how much, and by what formula? And God only knows how you deal with those parties (whether they be ISPs, consumers, or even artists) who don’t want to be a part of the scheme.

For these reasons, I’ve always felt a voluntary collective licensing scheme for the Internet is challenging, if not impossible. It would have to be compulsory to be a truly blanket license that covered all music, all users, and all platforms. I’m not too fond of that approach, but I think that’s where we are likely heading in the copyright wars. After all, that’s how it has been resolved in many other contexts historically. But that doesn’t give me any comfort since those other systems have been a mess in practice. This 2004 Cato study by Robert Merges provides some details and makes the case against applying the compulsory licensing approach to the online music marketplace.

I’ve spent a lot of time in recent years trying to debunk various myths about online child safety or at least put those risks into perspective. Too often, press reports and public policy initiatives are driven by myths, irrational fears, or unjustified “moral panics.”  Luckily, the New York Times reports that there’s another study out this week that helps us see things in a more level-headed light. This new MacArthur Foundation report is entitled Living and Learning with New Media: Summary of Findings from the Digital Youth Project. This white paper is a summary of three years of research on kids’ informal learning with digital media. The survey incorporates insights from 800 youth and young adults and over 5,000 hours of online observation. The findings will eventually be collected in a book from MIT Press (“Hanging Out, Messing Around, Geeking Out: Living and Learning with New Media”).

From the summary of the study on the MacArthur website:

“It might surprise parents to learn that it is not a waste of time for their teens to hang out online,” said Mizuko Ito, University of California, Irvine researcher and the report’s lead author. “There are myths about kids spending time online – that it is dangerous or making them lazy. But we found that spending time online is essential for young people to pick up the social and technical skills they need to be competent citizens in the digital age.”

Importantly, regarding the concerns many parents and policymakers have about online predation, Ms. Ito told the New York Times that, “Those concerns about predators and stranger danger have been overblown.” “There’s been some confusion about what kids are actually doing online. Mostly, they’re socializing with their friends, people they’ve met at school or camp or sports.”

In the report, according to the summary, the researchers “identified two distinctive categories of teen engagement with digital media: friendship-driven and interest-driven. While friendship-driven participation centered on “hanging out” with existing friends, interest-driven participation involved accessing online information and communities that may not be present in the local peer group.” The specific findings of the study are as follows:


Richard Bennett and Matt Sherman explain why it’s a bad idea. (And here are a few of my old rants on the issue.)

Bennett:

If we’ve learned anything at all from the history of Internet-as-utility, it’s that this strained analogy only applies in cases where there is no existing infrastructure, and probably ends best when a publicly-financed project is sold (or at least leased) to a private company for upgrades and management. We should be suspicious of projects aimed at providing Wi-Fi mesh because they’re slow as molasses on a winter’s day. I don’t see any examples of long-term success in the publicly-owned and operated networking space. And I also don’t see any examples of publicly-owned and operated Internet service providers doing any of the heavy lifting in the maintenance of the Internet protocols, a never-ending process that’s vital to the continuing growth of the Internet.

Sherman:

Pursuing a public utility model while also desiring competition are fundamentally contradictory goals. Utilities are designed not to compete. Do you, or does anyone you know, have a choice of providers for water, sewage or electricity?

My second question would be: is there anyone in the technology world who sees public utilities as a model for innovation? A 1.5 megabit connection (T1) was an unimaginable luxury when I started in tech in the mid-90’s. It was for well-funded companies only. Today, it is a low-end consumer connection and costs around 80% less. Has your sewage service followed a similar trajectory?

A public utility is designed to be “good enough” and little more. There is no need, and little room, for differentiation or progress. Your electricity service is essentially unchanged from 20 years ago, and will look the same 10 years from now. Broadband, on the other hand, requires constant innovation if we are to move forward — and it has been delivering it, even if we desire more.

I’ve just finished reading Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion, by Hal Abelson, Ken Ledeen, and Harry Lewis, and it’s another title worth adding to your tech policy reading list. The authors survey a broad swath of tech policy territory — privacy, search, encryption, free speech, copyright, spectrum policy — and provide the reader with a wonderful history and technology primer on each topic.

I like the approach and tone they use throughout the book. It is certainly something more than “Internet Policy for Dummies.” It’s more like “Internet Policy for the Educated Layman”: a nice mix of background, policy, and advice. I think Ray Lodato’s Slashdot review gets it generally right in noting that, “Each chapter will alternatively interest you and leave you appalled (and perhaps a little frightened). You will be given the insight to protect yourself a little better, and it provides background for intelligent discussions about the legalities that impact our use of technology.”

Abelson, Ledeen, and Lewis aren’t really seeking to be polemical in this book by advancing a single thesis or worldview. To the extent the book’s chapters are guided by any central theme, it comes in the form of the “two basic morals about technology” they outline in Chapter 1:

The first is that information technology is inherently neither good nor bad — it can be used for good or ill, to free us or to shackle us. Second, new technology brings social change, and change comes with both risks and opportunities. All of us, and all of our public agencies and private institutions, have a say in whether technology will be used for good or ill and whether we will fall prey to its risks or prosper from the opportunities it creates. (p. 14)

Mostly, what they aim to show is that digital technology is reshaping society and, whether we like it or not, we had better get used to it — and quick!  “The digital explosion is changing the world as much as printing once did — and some of the changes are catching us unaware, blowing to bits our assumptions about the way the world works… The explosion, and the social disruption that it will create, have barely begun.” (p. 3)

In that sense, most chapters discuss how technology and technological change can be both a blessing and a curse, but the authors are generally more optimistic than pessimistic about the impact of the Net and digital technology on our society. What follows is a quick summary of some of the major issues covered in Blown to Bits.


Good editorial in the Boston Globe today about “The Dangers of Internet Censorship” by Harry Lewis, a professor of computer science at Harvard and fellow at Harvard’s Berkman Center for Internet and Society. Lewis argues that:

Determining which ideas are “harmful” is not the government’s job. Parents should judge what information their children should see – and should expect that older children will, as they always have, find ways around restrictive rules.

Worth reading the whole thing. Incidentally, Harry Lewis is the co-author of an interesting new book I am reading right now, Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion. I’m going to try to review it here eventually.

Great post over on the Tor blog about how “anonymity on the Internet is not going away.” This is a subject I care about deeply. Here, for example, is an essay I wrote about mandatory age verification and the threat it poses to online anonymity.  I love this paragraph from the Tor essay, and agree with it wholeheartedly:

Anonymity is a defense against the tyranny of the majority. There are many, many valid uses of anonymity tools, such as Tor. The belief that anonymous tools exist only for the edges of societies is narrow-minded. The tools exist and are used by all. Much like the Internet, the tools can be used for good or bad. The negative uses of such tools typically generate huge headlines, but not the positive uses. Raising the profile of the positive uses of anonymity tools, such as Tor, is one of our challenges.

Amen, brother.