Too funny. Not quite as good as the Diebold one Tim posted last week, but this is still great…
FCC Okays Nudity On TV If It’s Alyson Hannigan
Just wanted to let everyone know that two new contributors, Bret Swanson and Ryan Radia, will be joining us here at the TLF.
Bret recently joined PFF to start a new program on trade, globalization & technology policy issues. It’s called the Center for Global Innovation. He’s fighting back against the foolish push to close off our borders to free trade or to limit the flow of technology and e-commerce globally. He’s also working on a big new book about the role of China in the new global economy.
Ryan Radia, a researcher with the Competitive Enterprise Institute, is also joining us. At CEI, Ryan does great work on technology policy along with our own Cord Blomquist. Ryan is also going to be helping us out with TLF podcast production in coming months.
We’re glad to have Bret and Ryan join us and look forward to their contributions to our little “virtual think tank” here at the TLF!
An inconvenient fact (for opponents of network management):
A survey by the Japan Internet Providers Association shows 40% of Japanese ISPs perform network management, according to Yomiuri Shimbun, and the trend is growing.
Of the 276 respondents, 69 companies said they restricted information flow through their lines. A total of 106 companies, including those that rent lines from infrastructure owners, impose such restrictions. Twenty-nine companies said they were planning to take similar measures.
I’ve finally finished a draft of the network neutrality paper I’ve been blogging about for the last few months. One of the things I learned after the publication of my previous Cato paper is that, especially when you’re writing about a technical subject, you’ll inevitably make some errors that will only be caught when a non-trivial number of other people have the chance to read it. So if any TLF readers are willing to review a pre-release draft of the paper and give me their comments, I would appreciate the feedback. Please email me at leex1008@umn.edu and I’ll send you a copy. Thanks!
Tomorrow is the 2008 Politics Online Conference, and I’m prepping for the event by guest-blogging over at the Institute for Politics, Democracy and the Internet. The panel I’ll be moderating is called “Building a Broadband Strategy for America,” and you can read more about it at the Politics Online site.
I’ve blogged about this panel on this site previously, so I won’t recount that, other than to repeat that I’ll be joined by FCC Commissioner Jonathan Adelstein, by Professor Tim Wu, who coined the term “Net Neutrality,” and by Eric Werner, a senior official at the National Telecommunications and Information Administration of the Commerce Department.
On the IPDI blog, I address how BroadbandCensus.com plays into the National Broadband Strategy debate:
As a technology reporter, I’ve been writing about the battles over broadband for nearly a decade here in Washington. There is one fact about which nearly everyone seems to be in agreement: if America wants better broadband, America needs better broadband data. That’s why I’ve recently started a new venture to collect this broadband data, and to make the data available for all on the Web at BroadbandCensus.com.
Read the rest of Want Better Broadband in America? Take the BroadbandCensus.com!
I would be remiss if I didn’t mention the launch of the End Software Patents coalition, headed by Ben Klemens. Ars has a good write-up. Software patents are among the biggest threats to innovation in the software industry. And indeed, they’ve become a nuisance far beyond Silicon Valley, because every company uses software and is therefore likely violating software patents. It’s good to see the movement for ending them continue to grow.
In the previous installment of my ongoing “Media Metrics” series, I highlighted the radical metamorphosis taking place in the market for audible information and entertainment. I showed that this previously stable sector now finds itself in a state of seemingly constant upheaval, especially thanks to the blistering pace of technological change we are witnessing today.
In this sixth installment of the Media Metrics series, we will see how a similar transformation has taken place in the video marketplace over the past three decades. Again, using the analytical framework I presented in the first installment, we will see that today we have more choice, competition, and diversity in every part of the video value chain. [You might also be interested in reviewing the third installment in this series dealing with advertising competition and the fourth installment dealing with changing media fortunes.]
Kenneth Goldstein of Winnipeg, Canada-based Communications Management, Inc., has put together a set of enlightening television value chains comparing the state of the marketplace in 1975 to the present.
Larry Lessig is a gifted writer, and he does a good job of finding and telling stories that illustrate the points he’s trying to make. I found Free Culture extremely compelling for just this reason; he does a great job of illustrating a fairly subtle but pervasive problem by finding representative examples and weaving together a narrative about a broader problem. He demonstrates that copyright law has become so pervasive that people’s freedom is restricted in a thousand subtle ways by its over-broad application.
He takes a similar approach in Code, but the effort falls flat for me. Here, too, he gives a bunch of examples in which “code is law”: the owners of technological systems are able to exert fine-grained control over their users’ online activities. But the systems he describes to illustrate this principle have something important in common: they are proprietary platforms whose users have chosen to use them voluntarily. He talks about virtual reality, MOOs and MUDs, AOL, and various web-based fora. He does a good job of explaining that the different rules of these virtual spaces, their “architecture,” have a profound impact on their character. The rules governing an online space interact in complex ways with its participants to produce a wide variety of online spaces with distinct characters.
Lessig seems to want to generalize from these individual communities to the “community” of the Internet as a whole. He wants to say that if code is law on individual online communications platforms, then code must be law on the Internet in general. But this doesn’t follow at all. The online fora that Lessig describes are very different from the Internet in general. The Internet is a neutral, impersonal platform that supports a huge variety of different applications and content. The Internet as a whole is not a community in any meaningful sense. So it doesn’t make sense to generalize from individual online communities, which are often specifically organized to facilitate control, to the Internet in general which was designed with the explicit goal of decentralizing control to the endpoints.
Also, the cohesiveness and relative ease of control one finds in individual online communities exists precisely because users tend to use any given online service voluntarily. Users face pressure to abide by the generally accepted rules of the community, and users who feel a given community’s rules aren’t a good fit will generally switch to a new one rather than make trouble. In other words, code is law in individual Internet communities precisely because there exists a broader Internet in which code is not law. When an ISP tries to control its users’ online activities, users are likely to react very differently. As we’ve seen in the Comcast kerfuffle, users do not react in a docile fashion to ISPs that attempt to control their online behavior, and at best such efforts produce only limited and short-term control.
I’m re-reading Larry Lessig’s Code and Other Laws of Cyberspace. I last read it about four years ago, long enough that I’d forgotten a lot of the specific claims Lessig made. One thing that clearly has not come to pass is his prediction that we would develop a “general architecture of trust” that would “permit the authentication of a digital certificate that verifies facts about you—your identity, citizenship, sex, age, or the authority you hold.” Lessig thought that “online commerce will not fully develop until such an architecture is established,” and that way back in 1999, we could “see enough to be confident that it is already developing.”
Needless to say, this never happened, and it now looks unlikely that it ever will. The closest we came was Microsoft’s Passport, which was pretty much a flop. We have instead evolved a system in which people have dozens of lightweight online identities for the different websites they visit, many of which involve little more than setting a cookie on one’s browser. The kind of universal, monolithic ID system that would allow any website to quickly and transparently learn who you are seems much less likely today than it apparently seemed to Lessig in 1999.
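To put the point concretely, here is a rough sketch of that lightweight, cookie-based identity pattern. It’s illustrative Python using only the standard library; the cookie name and token scheme are my own, not any particular site’s.

```python
# A minimal sketch of lightweight, cookie-based identity: the site mints a
# random token and asks the browser to store it. That token, not any verified
# fact about the person, is the "identity." Names here are illustrative only.
import secrets
from http.cookies import SimpleCookie

def identify_visitor(cookie_header: str) -> tuple[str, str | None]:
    """Return (visitor_id, Set-Cookie header to send, or None if already known)."""
    jar = SimpleCookie(cookie_header)
    if "visitor_id" in jar:
        # Returning browser: the token is all the site knows about "who" this is.
        return jar["visitor_id"].value, None
    # New browser: mint a random pseudonymous token and ask the browser to keep it.
    token = secrets.token_urlsafe(16)
    out = SimpleCookie()
    out["visitor_id"] = token
    out["visitor_id"]["httponly"] = True
    return token, out.output(header="Set-Cookie:")

# First visit: no cookie, so the site issues one.
vid, set_cookie = identify_visitor("")
print(set_cookie)  # e.g. Set-Cookie: visitor_id=...; HttpOnly
# Later visit: the browser sends the token back, and that is the whole identity.
vid2, _ = identify_visitor(f"visitor_id={vid}")
assert vid2 == vid
```

The “identity” here verifies nothing about who you are beyond “same browser as last time,” which is exactly why it is cheap, disposable, and nothing like a universal certificate.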
Of course, this would have been obvious to Lessig if he’d had the chance to read Jim Harper’s Identity Crisis. Jim explained that the security of an identifier is a function not only of the sophistication of its security techniques, but also of the payoff for breaking it. A single, monolithic identifier is a bad idea because it becomes an irresistible target for the bad guys. It’s also insufficiently flexible: security rules robust enough for online banking are overkill for casual web surfing. What I want, instead, is a range of identifiers of varying levels of security, tailored to the sensitivity of the systems to which they control access.
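As a toy illustration of what that tiered approach might look like in practice (the tiers and requirements below are hypothetical, not drawn from Jim’s book):

```python
# Hypothetical mapping from how sensitive a system is to how much identity
# assurance it should demand. The tiers and requirements are illustrative only.
AUTH_POLICY = {
    "casual web forum": {"identifier": "cookie token", "password": False, "second_factor": False},
    "web mail":         {"identifier": "username",     "password": True,  "second_factor": False},
    "online banking":   {"identifier": "username",     "password": True,  "second_factor": True},
}

def required_assurance(system: str) -> dict:
    """Look up how much proof of identity a given class of system should require."""
    return AUTH_POLICY[system]

print(required_assurance("online banking"))
# {'identifier': 'username', 'password': True, 'second_factor': True}
```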
Online security isn’t much about technology at all. For example, the most important safeguard against online credit card fraud isn’t SSL. It’s the fact that someone trying to buy stuff with a stolen credit card has to give a delivery location, which the police can use to apprehend him. Our goal isn’t and shouldn’t be maximal security in every transaction. Rather, it’s to add security only up to the point where the marginal cost of additional precautions starts to outweigh the resulting reduction in fraud. If the size of a transaction is reasonably low, and most people are honest, quite minimalist security precautions may be sufficient to safeguard it. That appears to be what’s happened so far, and Lessig’s prediction to the contrary is starting to look rather dated.
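Here is the marginal logic in toy form; the measures and dollar figures are made up purely to illustrate the stopping rule.

```python
# Keep adding security measures only while each one prevents more fraud than it
# costs, and stop once that is no longer true. All figures are hypothetical.

# (measure, annual cost, annual fraud prevented)
measures = [
    ("password login",                1_000, 50_000),
    ("address verification",          5_000, 20_000),
    ("two-factor codes",              8_000, 12_000),
    ("manual review of every order", 80_000,  5_000),
]

adopted = []
for name, cost, fraud_prevented in measures:
    if fraud_prevented > cost:   # marginal benefit still exceeds marginal cost
        adopted.append(name)
    else:
        break                    # further measures cost more than they save

print("Worth adopting:", adopted)
# The first three pass; reviewing every order by hand does not.
```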