
David Margolick has penned a lengthy piece for Portfolio.com about the AutoAdmit case, which has important ramifications for the future of Section 230 and online speech in general. Very brief background: AutoAdmit is a discussion board for students looking to enter, or just discuss, law schools. Some threads on the site have included ugly — insanely ugly — insults about some women.  A couple of those women sued to reveal the identities of their attackers and hold them liable for supposedly wronging them.  The case has been slowly moving through the courts ever since. Again, read Margolick’s article for all the details.  The important point here is that the women could not sue AutoAdmit directly for defamation or harassment because Section 230 of the Communications Decency Act of 1996 immunizes websites from liability for the actions of their users.  Consequently, those looking to sue must go after the actual individuals behind the comments which (supposedly) caused the harm in question.

I am a big defender of Section 230 and have argued that it has been the cornerstone of Internet freedom. Keeping online intermediaries free from burdensome policing requirements and liability threats has created the vibrant marketplace of expression and commerce that we enjoy today. If not for Sec. 230, we would likely live in a very different world.

Sec. 230 has come under attack, however, from those who believe online intermediaries should “do more” to address various concerns, including cyber-bullying, defamation, and other problems.  For those of us who believe passionately in the importance of Sec. 230, the better approach is to preserve immunity for intermediaries while encouraging other responses: more voluntary policing and self-regulation by intermediaries; increased public pressure on sites that turn a blind eye to such behavior; more efforts to establish “community policing” by users so that they can report or counter abusive language; and so on.

Of course, those efforts will never be foolproof, and a handful of bad apples will still be able to cause a lot of grief for some users on certain discussion boards, blogs, and so on.  In those extreme cases where legal action is necessary, it would be optimal if every effort were exhausted to go after the actual end-user who is causing the problem before tossing Sec. 230 and current online immunity norms to the wind in an effort to force the intermediaries to police speech.  After all, how do the intermediaries know what is defamatory?  Why should they be forced to sit in judgment of such things?  If, under threat of lawsuit, they are petitioned by countless users to remove content or comments that those individuals find objectionable, the result will be a massive chilling effect on online free speech, since those intermediaries would likely play it safe most of the time and just take everything down. Continue reading →

What would it take to create a more secure Internet?  That’s what John Markoff explores in his latest New York Times article, “Do We Need a New Internet?”  Echoing some of the same fears Jonathan Zittrain articulates in his new book The Future of the Internet, Markoff wonders if online viruses and other forms of malware have gotten so out of control that extreme measures may be necessary to save the Net.  Compared to when cyber-security attacks first started growing over 20 years ago, Markoff argues that:

[T]hings have gotten much, much worse. Bad enough that there is a growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over.

Like many others, Markoff fingers anonymity as one potential culprit:

The Internet’s current design virtually guarantees anonymity to its users. (As a New Yorker cartoon noted some years ago, “On the Internet, nobody knows that you’re a dog.”) But that anonymity is now the most vexing challenge for law enforcement. An Internet attacker can route a connection through many countries to hide his location, which may be from an account in an Internet cafe purchased with a stolen credit card. “As soon as you start dealing with the public Internet, the whole notion of trust becomes a quagmire,” said Stefan Savage, an expert on computer security at the University of California, San Diego.

Consequently, Markoff suggests that:

A more secure network is one that would almost certainly offer less anonymity and privacy. That is likely to be the great tradeoff for the designers of the next Internet. One idea, for example, would be to require the equivalent of drivers’ licenses to permit someone to connect to a public computer network. But that runs against the deeply held libertarian ethos of the Internet.

Indeed, not only does it run counter to the ethos of the Net, but as Markoff rightly notes, “Proving identity is likely to remain remarkably difficult in a world where it is trivial to take over someone’s computer from half a world away and operate it as your own. As long as that remains true, building a completely trustable system will remain virtually impossible.”  I’ve spent a lot of time writing about that fact here and won’t belabor the point other than to say that efforts to eliminate anonymity for the entire Internet would prove extraordinarily intrusive and destructive — of both the Internet’s current architecture and the rights of its users.  There’s just something about a “show-us-your-papers,” national ID card-esque system of online identification that creeps most of us out. That’s why I spend so much time fighting age verification mandates for social networking sites and other websites; it’s the first step down a very dangerous road.

But what if we could apply such solutions in a narrower sense?  That is, could we create more secure communities within the overarching Internet superstructure that might provide greater security?  Markoff starts thinking along those lines when he suggests… Continue reading →

Statue at FTC Headquarters: “Man Controlling Trade” (We’re rooting for the horse!)

Adam Thierer and I have just released a new PFF paper entitled “Targeted Online Advertising: What’s the Harm & Where Are We Heading?” (PDF) about the FTC’s new “Self-Regulatory Principles for Online Behavioral Advertising.”  Adam lampooned some of the attitudes at play in this debate in a great rant yesterday.

But we give the FTC credit for resisting calls to abandon self-regulation, and for its thoughtful consideration of the danger of stifling advertising, the economic engine that has supported a flowering of creative expression and innovation in online content and services.  That said, we continue to have our doubts about the FTC’s approach, however well-intentioned:

  1. Where is this approach heading?  Will a good faith effort to suggest best practices eventually morph into outright government regulation of the online advertising marketplace?
  2. What, concretely, is the harm we’re trying to address?  We have asked this question several times before and have yet to see a compelling answer.
  3. What will creeping “co-regulation” mean for the future of “free” Internet services?  Is advertising, the mother’s milk of the Internet, about to be choked off by onerous privacy mandates?

We stand at an important crossroads in the debate over the online marketplace and the future of a “free and open” Internet. Many of those who celebrate that goal focus on concepts like “net neutrality” at the distribution layer, but what really keeps the Internet so “free and open” is the economic engine of online advertising at the applications and content layers. If misguided government regulation chokes off the Internet’s growth or evolution, we would be killing the goose that laid the golden eggs.

The dangers of regulation to the health of the Internet are real, but the ease with which government could disrupt the economic motor of the Internet (advertising) is not widely understood, and therein lies the true danger in this debate.  The advocates of regulation pay lip service to the importance of advertising in funding online content and services but don’t seem to understand that this quid pro quo is a fragile one: tipping the balance, even slightly, could have major consequences for continued online creativity and innovation.

Continue reading →

Eric Goldman is the man.  His “Technology & Marketing Law Blog” is must reading for cyberlaw geeks, packed with indispensable updates and insights about breaking developments in the world of Internet law.

Anyway, he’s just published his “2008 Cyberlaw Year-in-Review,” which provides a comprehensive overview of the major developments and cases from the past year. This is the sort of compendium that I used to have to spend big bucks to get from DC law firms.  And Eric just gives it away as a public resource.  God bless him.

Just wanted to draw everyone’s attention to a couple of great podcasts about online safety issues that include comments from members of the Internet Safety Technical Task Force (ISTTF). As I mentioned a few weeks ago, the ISTTF project and final report represent a major milestone in the discussion about online safety in America, and I was honored to serve as a member of this task force.

This in-depth “Radio Berkman” podcast featuring ISTTF director John Palfrey and co-director Dena Sacco is a really excellent (but lengthy!) overview of the ISTTF’s work. Here’s a shorter podcast that Prof. Palfrey did with Larry Magid of CNet. And I also recommend this excellent NPR “On the Media” podcast featuring my friend Stephen Balkam of the Family Online Safety Institute (FOSI).

For those interested, down below you will find a running list I have been keeping of coverage of the ISTTF. (I will try to keep updating this list here).

Continue reading →

As a means of introducing myself to TLF readers, this is an article that I wrote for the PFF blog in September that has not been previously mentioned on the TLF. Most of my other PFF blog posts have been cross-posted by Adam Thierer or Berin Szoka, but I’ve taken ownership of those posts so they appear on my TLF author page.

This is the first in a series of articles that will focus directly on technology instead of technology policy. With an average age of 57, most members of Congress were at least 30 when the IBM PC was introduced in 1981. So it is not surprising that lawmakers have difficulty with cutting-edge technology. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed, but no insult to the reader’s intelligence is intended.

This article focuses on cookies–not the cookies you eat, but the cookies associated with browsing the World Wide Web. There has been public concern over the privacy implications of cookies since they were first developed. But to understand them, you must know a bit of history.

According to Tim Berners-Lee, the creator of the World Wide Web, “[g]etting people to put data on the Web often was a question of getting them to change perspective, from thinking of the user’s access to it not as interaction with, say, an online library system, but as navigation th[r]ough a set of virtual pages in some abstract space. In this concept, users could bookmark any place and return to it, and could make links into any place from another document. This would give a feeling of persistence, of an ongoing existence, to each page.”[1. Tim Berners-Lee, Weaving The Web: The Original Design and Ultimate Destiny of the World Wide Web. p. 37. Harper Business (2000).] The Web has changed quite a bit since the early 1990s.

Today, websites are much more dynamic and interactive, with every page being customized for each user. Such customization could include automatically selecting the appropriate language for the user based on where they’re located, displaying only content that has been added since the last time the user visited the site, remembering a user who wants to stay logged into a site from a particular computer, or keeping track of items in a virtual shopping cart. These features are simply not possible without the ability for a website to distinguish one user from another and to remember a user as they navigate from one page to another. Today, in the Web 2.0 era, instead of Web pages having persistence (as Berners-Lee described), we have dynamic pages and “user-persistence.”

This paper describes the various methods websites can use to enable user-persistence and how this affects user privacy. But the first thing the reader must realize is that the Web was not initially designed to be interactive; indeed, as the quote above shows, the goal was the exact opposite. Yet interactivity is critical to many of the things we all take for granted about web content and services today.
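The basic mechanics of cookie-based user-persistence can be sketched with Python’s standard library. The `Set-Cookie` and `Cookie` headers are real HTTP mechanisms; the `session_id` name and the simplified request/response flow below are illustrative assumptions, not any particular site’s implementation:

```python
from http.cookies import SimpleCookie
import uuid

# --- Server, first visit: the browser sends no Cookie header, so the
# server mints a random identifier and asks the browser to store it
# via a Set-Cookie response header.
session_id = uuid.uuid4().hex
response_cookie = SimpleCookie()
response_cookie["session_id"] = session_id
response_cookie["session_id"]["path"] = "/"
set_cookie_header = response_cookie["session_id"].OutputString()
# e.g. "session_id=3f2a...; Path=/"

# --- Browser, every later request to the same site: the stored value
# is echoed back in a Cookie request header.
cookie_request_header = f"session_id={session_id}"

# --- Server, subsequent visit: parsing the Cookie header lets the
# server recognize the same user across otherwise stateless requests.
parsed = SimpleCookie()
parsed.load(cookie_request_header)
returning_user = parsed["session_id"].value == session_id
```

The key point is that the browser, not the server, holds the identifier; the server merely recognizes it when it comes back, which is what makes logins, shopping carts, and per-user customization possible over a stateless protocol.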

Continue reading →

I can’t believe we’re actually asking whether Obama—the candidate who promised to bring the Federal government (and perhaps everyone else) into the Web 2.0 era whether they like it or not—will have a “personal computer.”

The “webiness” of Obama’s predecessors is just embarrassing:   

Clinton famously sent only two e-mails while he was president, one to test whether he could push the “send” button and one to John Glenn, sent while the former Ohio senator was aboard the space shuttle… During his presidency, George W. Bush didn’t have a personal log-in to the White House Internet server, nor did he have a personal whitehouse.gov e-mail address. (He gave up his private e-mail account, G94B@aol.com, just before his first inauguration.) When he did go online, there were some things he couldn’t access. During Bush’s tenure, the White House’s IT department blocked sites like Facebook, YouTube, Twitter, and most of MySpace. The ability to comment on blogs was blocked, as was certain content that was deemed offensive. According to David Almacy, who served as Bush’s director for Internet and e-communications from 2005-07, only two people had access to the iTunes store during that period: Almacy, who had to upload speeches to the site, and the president’s personal aide, so that he could download songs for Bush’s iPod.

Pipes and tubes, pipes and tubes, my friends…  

If Obama decides not to implement whatever legal or technical changes would be required for him to do something so simple as having a computer on his desk, I suppose we’ll know that he’s not really all that interested—at least on a personal level—in all his rhetoric about the power of the Internet to make government more transparent and accountable.  Let’s hope that doesn’t happen.

Digital video recorders (DVRs) may turn out to be the “last gasp” of cable, satellite, and other traditional multichannel subscription video providers.  If users can get the same basic functionality (on-demand viewing of the shows they want) over the Internet for free, or by paying for each show rather than a hefty monthly subscription, “Who Needs a DVR?” as Nick Wingfield asks at the WSJ:

Among a more narrow band of viewers -– 18- to 34-year-olds -– SRG found that 70% have watched TV online in the past. In contrast, only 36% of that group had watched a show on a TiVo or some other DVR at any time in the past. That last figure is a fairly remarkable statistic. Remember that DVRs have the advantage of playing video back on a device where the vast majority of television consumption has traditionally occurred –- that is, the TV set. Although it’s also possible to watch shows over the Internet on a TV set through a device like Apple TV and Microsoft’s Xbox 360, most people watch online TV shows through their computers — which have inherent disadvantages, like smaller screens and, in most cases, no remote controls.

Indeed, if users are going to buy a piece of hardware, why buy a DVR when they can buy a Roku box or a game console like the Xbox 360 that will put Internet-delivered programming on their “television” (a term that increasingly just means the biggest LCD in the house, or the one that faces a couch instead of an office chair) — and save money?

This is precisely the point Adam Thierer and I have been hammering away at in this ongoing series.  The availability of TV through the Internet and the ease with which consumers can display that content on a device, and at a time, of their choosing are quickly breaking down the old “gatekeeper” or “bottleneck” power of cable.  Let’s see how long it takes Congress and the FCC to realize that the system of cable regulation created in the analog 1990s no longer makes sense in this truly digital age.

My problem with what Nick Carr is saying about Wikipedia here — as well as in his book The Big Switch — is that he always seems to assume that Wikipedia constitutes the totality of most searches for information online. I suppose it does for some people, but I have a hard time accepting the argument that everyone’s search for enlightenment ends there, even if Wikipedia does rank high in many search results today.

For me, Wikipedia is just a launch pad; a great starting point in the search for truth. I take much of what I read on Wikipedia with a large grain of salt, however, because I know some entries are more trustworthy than others, and any entry could change at any moment. But that’s true of much of what one finds online!  If one adopts a sort of caveat emptor attitude toward Wikipedia, and then uses it to seek out truth from alternative sources found in each entry, or from other searches, then where is the harm?  Only if one could show that the search for truth ends with Wikipedia would I be as concerned as Carr and other Internet pessimists and Wikipedia critics (like Lee Siegel and Andrew Keen). But I just don’t believe that is the case.

Moreover, it is impossible for me to believe that we have fewer authoritative sources of information at our disposal today than we did in the past.   Continue reading →

I used to have a (semi-crazy) uncle who typically began conversations with lame jokes or bad riddles. This sounds like one he might have used had he lived long enough: What do Thomas Jefferson, a moose, and cyberspace have in common?

The answer to that question can be found in a new book, In Search of Jefferson’s Moose: Notes on the State of Cyberspace, by David G. Post, a Professor of Law at Temple University. Post, who teaches IP and cyberspace law at Temple, is widely regarded as one of the intellectual fathers of the “Internet exceptionalist” school of thinking about cyberlaw.  Basically, Post sees this place we call “cyberspace” as something truly new, unique, and potentially worthy of some special consideration, or even somewhat different ground rules than we apply in meatspace. More on that in a bit.

[ Full disclosure: Post’s work was quite influential on my own thinking during the late 1990s, so much so that when I joined the Cato Institute in 2000, one of the first things I did was invite David to become an adjunct scholar with Cato. He graciously accepted and remains a Cato adjunct scholar today. Incidentally, Cato is hosting a book forum for him on February 4th that I encourage you to attend or watch online. Anyway, it’s always difficult to be perfectly objective when you know and admire someone, but I will try to do so here.]

Post’s book is essentially an extended love letter — to both cyberspace and Jefferson. Problem is, as Post even admits at the end, it’s tough to know which subject this book is supposed to teach us more about. The book loses focus at times — especially in the first 100 pages — as Post meanders between historical tidbits of Jefferson’s life and thinking and what it all means for cyberspace. But the early focus is on TJ.  Thus, those who pick up the book expecting to be immediately immersed in cyber-policy discussions may be a bit disappointed at first.  As a fellow Jefferson fanatic, however, I found all this history terrifically entertaining, whether it was the story of Jefferson’s Plow and his other agricultural inventions and insights, TJ’s unique interest in science (including cryptography), or that big moose of his.

OK, so what’s the deal with the moose? When TJ was serving as a minister to France in the late 1780s, at considerable expense to himself, he had the complete skeleton, skin, and horns of a massive American moose shipped to the lobby of his Paris hotel. Basically, Jefferson wanted to make a bold statement to his French hosts about this New World he came from and wake them up to the fact that some very exciting things were happening over there that they should be paying attention to. That’s one hell of a way to make a statement!

Continue reading →