Adam Thierer & I have just released a detailed examination (PDF) of brewing efforts to expand the Children’s Online Privacy Protection Act of 1998 to cover adolescents and potentially all social networking sites—an approach we call “COPPA 2.0.”

As Adam explained on Larry Magid’s CNET podcast, COPPA mandates certain online privacy protections for children under 13, most importantly that websites obtain the “verifiable consent” of a child’s parent before collecting personal information about that child or giving that child access to interactive functionality that might allow the child to share their personal information with others. The law was intended primarily to “enhance parental involvement in a child’s online activities” as a means of protecting the online privacy and safety of children.

Yet advocates of expanding COPPA—or “COPPA 2.0”—see COPPA’s verifiable parental consent framework as a means of imposing broad regulatory mandates in the name of online child safety and concerns about social networking, cyber-harassment, etc. Two COPPA 2.0 bills are currently pending in New Jersey and Illinois. The accelerated review of COPPA to be conducted by the FTC next year (five years ahead of schedule) is likely to bring to Washington serious talk of expanding COPPA—even though Congress clearly rejected covering adolescents ages 13-16 when COPPA was first proposed back in 1998.

We’ll discuss some of the key points of our paper in a series of blog posts, but here are the top nine reasons for rejecting COPPA 2.0. Such an approach would:

  • Burden the free speech rights of adults by imposing age verification mandates on many sites used by adults, thus restricting anonymous speech and essentially converging—in terms of practical consequences—with the unconstitutional Child Online Protection Act (COPA), another 1998 law sometimes confused with COPPA;
  • Burden the free speech rights of adolescents to speak freely on—or gather information from—legal and socially beneficial websites;
  • Hamper routine and socially beneficial communication between adolescents and adults;
  • Reduce, rather than enhance, the privacy of adolescents, parents and other adults because of the massive volume of personal information that would have to be collected about users for authentication purposes (likely including credit card data);

Continue reading →

Facebook has been at the center of a controversy involving its moderation policies and The Pirate Bay, a popular BitTorrent tracker that was found guilty of copyright infringement by a Swedish court last month. Since early April, Facebook has enforced a “site-wide” ban on links to The Pirate Bay—including those in private messages.

This practice may run afoul of federal wiretapping statutes that bar service providers from “intercepting” private messages, according to an article that appeared on Wired Threat Level last week. Wired quotes Kevin Bankston, a senior attorney for the Electronic Frontier Foundation, who explains that Facebook’s filtering raises “serious questions about whether Facebook is in compliance with federal wiretapping law.”

It’s important to draw a distinction between the traditional notion of “wiretapping” and Facebook’s “interception” of user messages, which doesn’t involve any human intervention. Regardless of how the courts may interpret ancient laws like the 1986 Electronic Communications Privacy Act, an automated computer system flagging and deleting certain strings from user messages simply isn’t comparable to a third party secretly listening in on a private phone conversation.

Besides, Facebook makes clear to its users from the get-go that their messages and postings are subject to a set of rules (which Facebook lays out in plain English). If Facebook believes a message or posting is against the rules, it can block or remove it. This is not an unreasonable rule; many online discussion forums have enforced similar policies since the Web’s early days. Such filtering is possible only if sites can “examine” messages to identify misconduct.
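The kind of automated filtering described above is conceptually simple. As a rough illustration (this is not Facebook's actual system; the blocklist, regex, and function name here are all invented for the example), a site might scan outgoing messages for links to banned domains before delivering them:

```python
import re

# Hypothetical blocklist for illustration only -- not Facebook's real list.
BLOCKED_DOMAINS = {"thepiratebay.org"}

# Matches http:// or https:// URLs and captures the host portion.
URL_PATTERN = re.compile(r"https?://(?:www\.)?([^/\s]+)", re.IGNORECASE)

def is_message_allowed(message: str) -> bool:
    """Return False if the message links to any blocked domain."""
    for match in URL_PATTERN.finditer(message):
        domain = match.group(1).lower()
        if domain in BLOCKED_DOMAINS:
            return False
    return True

print(is_message_allowed("Check out http://thepiratebay.org/latest"))  # False
print(is_message_allowed("See you at the movies tonight!"))            # True
```

No human ever reads the message; the system mechanically pattern-matches a string and blocks it, which is the distinction the paragraph above draws against a person secretly listening in on a phone call.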

Continue reading →

Today, the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) announced the members of the new Online Safety and Technology Working Group (OSTWG).  I am honored to be among those chosen to participate in this new task force and I look forward to continuing the work started last year with the Harvard Berkman Center’s Internet Safety Technical Task Force (ISTTF), which I also served on.   I was very proud of the work done by the ISTTF and the impressive final report that Prof. John Palfrey crafted to reflect our findings.  I am eager to investigate these issues further and take a look at the latest research and technologies that can help us better understand how to protect our kids online while also protecting the free speech and privacy rights of Netizens.

The new NTIA working group, which was established under the “Protecting Children in the 21st Century Act,” will report to the Assistant Secretary of Commerce for Communications and Information on industry-implemented online child safety tools and efforts. Within a year of convening its first meeting, the group will submit a report of its findings and make recommendations on how to increase online safety measures.

Below the fold I have listed the complete roster of OSTWG task force members.  I’m very much looking forward to working with this outstanding group.  And I’m happy to report that my TLF blogging colleague Braden Cox will be joining me on this task force!

Continue reading →

I’ve been quite depressed to witness Bruce Schneier’s ongoing conversion from opponent of government intervention in the high-tech economy (at least on encryption) to vociferous proponent (at least in terms of privacy regulation).  Anyway, his latest cheerleading piece for government privacy regulation in The Wall Street Journal includes lots of fear-mongering about private website data collection for, God forbid, purposes of trying to better target advertising and market us products we might actually want.

Schneier uses the term “deceptive” several times in the piece to refer to privacy policies that don’t make it explicitly clear that some of the information you leave on a site, or that is collected preemptively by them, will be used to craft more targeted marketing efforts.  Like many other would-be privacy regulators, Schneier seemingly wants companies to fly blimps over your desk as you surf the Net with big signs that basically say: ‘Hey stupid, your info may be used to market you stuff.’  It’s hard to be against more disclosure, of course — and most sites spell out what they do with data in their privacy policies — but it never seems to be good enough for most privacy advocates, who paint consumers out to be mindless sheep who cannot be trusted to make wise decisions for themselves.  Sorry, but I just don’t buy it.

Continue reading →

On this episode of “Tech Policy Weekly,” Technology Liberation Front contributors Ryan Radia and Berin Szoka join me for a discussion of the flare-up over Facebook’s recent changes to the data retention provisions of its Terms of Use agreement and whether there are any serious privacy issues in play here—or if this is all much ado about nothing. [Ryan blogged about it here, and I did here.]

Earlier this month, Facebook announced changes to the way it handled or retained user data on its site after a user quits Facebook, raising questions about who actually owns that data and whether any privacy issues were raised by the company’s new policy. Following some intense scrutiny in the blogosphere, Facebook decided this week to revert to its old terms of service until it could figure out a new approach to data management and ownership.

You can begin listening by downloading the MP3 file here or by just clicking the play button below.  Or subscribe to our podcast (iTunes, other).

[display_podcast]

Facebook sparked a major user uprising when it amended its terms of service earlier this month to grant the social networking site greater licensing rights over user-submitted content. The implications of Facebook’s amended Terms of Use were originally uncovered by The Consumerist this past Sunday in a story entitled, “Facebook’s New Terms Of Service: ‘We Can Do Anything We Want With Your Content. Forever.'” The title pretty much sums up what the controversy was all about: under Facebook’s amended Terms of Use, even after a user deletes his Facebook account, Facebook would retain its license to distribute nearly all types of user-submitted content including photos and videos.

Predictably, news of Facebook’s expanded licensing rights made many users angry, with several Facebook groups against Terms of Use modifications popping up, attracting thousands of members overnight. As is often the case with juicy reports like this one, news of the Facebook fiasco spread throughout the blogosphere rapidly, eventually making its way to major tech sites and even the main page of CNN.com. By yesterday afternoon, a snapshot of Mark Zuckerberg‘s face was plastered on Fox News Channel, next to an excerpt of an entry he posted to Facebook’s blog in defense of the social networking site’s new terms.

Facebook’s explanation of its new terms seemed reasonable enough: even after a user quits Facebook, material that user has posted on friends’ walls and other messages the user has sent to others may remain available. Facebook also noted that its perpetual license only allowed the site to use material in accordance with departed users’ privacy settings (presumably at the time of their departure). Under the new terms, therefore, Facebook would still be required to respect albums marked as private–and ensure they stay that way.

But the seemingly stark contrast between Facebook’s attempts to justify the changes to its terms of use and, well, the actual language of the terms themselves left many observers dissatisfied. In theory, if a user who had a Facebook photo album open to her entire network were to delete her account, Facebook would retain a license to make those photos available to members of her network in perpetuity. And depending on how you parse the amended terms, Facebook could even use your profile pic in ads for the social network long after you terminated your Facebook account.

Continue reading →

Much as with the Beacon incident before it, I have mixed feelings about this latest kerfuffle over Facebook’s changes to its privacy policy.

On one hand, I just don’t see what the big deal is. People act like Facebook is taking away all their “rights” or possessions, which is just silly. They were just clarifying how information would be used. In one sense, I feel like saying ‘Chill out. And if you don’t like Facebook’s policies, go use some other social networking site for God’s sake!’

On the other hand, I appreciate the fact that some people are far more sensitive about these things and are seeking to collectively pressure Facebook to change its approach to information use and ownership, and I’m fine with that. In fact, like the Beacon hullabaloo, it’s an example of what Berin Szoka and I have argued is the power of voluntary persuasion and social pressure to remedy privacy concerns before we call on government to adopt coercive, top-down, ham-handed, one-size-fits-all regulatory solutions. As we noted in our recent paper about the looming threat of online advertising regulation:

there are many indirect pressures and reputational incentives that provide an important check on the behavior of firms and the privacy policies they craft.  Just as the Internet increases the ways advertisers can reach audiences, it increases the power audiences have to influence advertisers.  For example, when Facebook introduced its Beacon program in 2007, which shared users’ online purchases with their friends without sufficient warning about how the program worked and the ability to opt-out of the program, the response was swift and effective:  Users “collectively raised their voices” and “the privacy pendulum [swung] back into equilibrium” [according to the Interactive Advertising Bureau.]  Within two weeks of the Beacon program being first deployed, Facebook had created an opt-out procedure.

Continue reading →

(HT The 463) Forget the sex offenders on MySpace, Connecticut Attorney General Richard Blumenthal (and C|Net reporter Elinor Mills) should be investigating reincarnation on Facebook!!

[Image: elvis-on-facebook]

Terrorism too!

[Image: athf-on-facebook]

Seriously, they appear to have been completely taken in by a joke MySpace page.

The Internet Safety Technical Task Force (ISTTF), which was formed a year ago to study online safety concerns and technologies, today issued its final report to the U.S. Attorneys General who authorized its creation. It was a great honor for me to serve as a member of the ISTTF and I believe this Task Force and its report represent a major step forward in the discussion about online child safety in this country.

The ISTTF was very ably chaired by John Palfrey, co-director of Harvard University’s Berkman Center for Internet & Society, and I just want to express my profound thanks here to John and his team at Harvard for doing a great job herding cats and overseeing a very challenging process. I encourage everyone to examine the full ISTTF report and all the submissions, presentations, and academic literature that we collected. [It’s all here.] It was a comprehensive undertaking that left no stone unturned.

Importantly, the ISTTF convened (1) a Research Advisory Board (RAB), which brought together some of the best and brightest academic researchers in the field of child safety and child development and (2) a Technical Advisory Board (TAB), which included some of America’s leading technologists, who reviewed child safety technologies submitted to the ISTTF. I strongly recommend you closely examine the RAB literature review and TAB assessment of technologies because those reports provide very detailed assessments of the issues. They both represent amazing achievements in their respective arenas.

There are a couple of key takeaways from the ISTTF’s research and final 278-page report that I want to highlight here. Most importantly, like past blue-ribbon commissions that have studied this issue, the ISTTF has generally concluded there is no silver-bullet technical solution to online child safety concerns. The better way forward is a “layered approach” to online child protection. Here’s how we put it on page 6 of the final report:

The Task Force remains optimistic about the development of technologies to enhance protections for minors online and to support institutions and individuals involved in protecting minors, but cautions against overreliance on technology in isolation or on a single technological approach. Technology can play a helpful role, but there is no one technological solution or specific combination of technological solutions to the problem of online safety for minors. Instead, a combination of technologies, in concert with parental oversight, education, social services, law enforcement, and sound policies by social network sites and service providers may assist in addressing specific problems that minors face online. All stakeholders must continue to work in a cooperative and collaborative manner, sharing information and ideas to achieve the common goal of making the Internet as safe as possible for minors.

Continue reading →

Last month, I noted that UCLA Law School professor Doug Lichtman has a wonderful new monthly podcast called the “Intellectual Property Colloquium.” This month’s show features two giants in the field of tech policy—George Washington Law Professor Daniel Solove and Santa Clara Law Professor Eric Goldman—discussing online privacy, defamation, and intermediary liability. More specifically, in separate conversations, Solove and Goldman both consider the scope of Section 230 of the Communications Decency Act of 1996, which shields Internet intermediaries from liability for the speech and expression of their users. Sec. 230 is the subject of hot debate these days and Solove and Goldman provide two very different perspectives about the law and its impact.

Goldman calls Sec. 230 “pure cyberspace exceptionalism” in the sense that it breaks from traditional tort norms governing intermediary liability. But he argues that this new online version of intermediary liability (which is extremely limited in scope) encourages more robust speech and expression than the older, offline version of liability (which was far more strict). I completely agree with Eric Goldman, but I respect the arguments that Lichtman and Solove raise about the privacy and defamation problems raised by the purist approach that Goldman and I favor.

Goldman also does a nice job dissecting the Roommates.com and Craigslist.com cases. And Lichtman brings up the JuicyCampus.com case during the conclusion. These are important cases for the future of Sec. 230 and online liability. Incidentally, there’s also an interesting conversation between Lichtman and Solove (around the 32:00 mark) about an issue that Alex Harris and Tim Lee have been raising here about the nature of online contracts and the perils of messy EULAs / Terms of Service (TOS).

These are two absolutely terrific conversations. Very in-depth and very highly recommended. Listen here.

[Note: I recently reviewed Daniel Solove’s important new book, Understanding Privacy, here.]