Working in any field of public policy is a bit like living in a haunted house: You spend most of your day dodging bogeymen, ghosts, phantasms, phantoms and specters of imagined harms, frauds, invasions and various conspiracies supposedly perpetrated by evil companies against helpless consumers, justice, God, Gaia, small woodland creatures and every sort of underserved, disadvantaged and/or underprivileged group of man, animal, vegetable and mineral imaginable.
But Internet policy—particularly online privacy—tends to be haunted by such groundless imaginings far more than most other areas of policy, largely because the Internet manifests itself in ways that are far more real and immediate to ordinary users. For example, as outraged as any of us might feel about the Gulf oil spill, how many of us have the slightest clue what’s really involved (beyond what we’ve learned watching TV anchors stumble through a vocabulary they don’t understand)?
By contrast, huge numbers of Americans interact daily with web services like those provided by Google, Microsoft, Yahoo, Twitter and Facebook. That doesn’t mean we necessarily understand how these technologies work. Indeed, quite the contrary! As Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” But we often think we know how these technological marvels work, and we certainly sound much more informed when we spout off (pun intended) about them than we do about, say, “top kills” on the ocean floor. In short, we know just enough about web services to be dangerous when we ground strong policy positions in our unsophisticated understanding of how things really work online.
There are few better examples of this than the constantly repeated bugaboo that “Facebook sells your data to advertisers!” or “Facebook only wants you to share more information with more people for advertising purposes!” These myths bear no relation to how advertising on social networking sites actually works, as Facebook COO Sheryl Sandberg explains beautifully in a short tutorial video. Here’s the key portion: Continue reading →
I spend a lot of my time as an Internet policy analyst railing against elitist suggestions that “ordinary” users are just too dumb to take care of themselves online, no matter how effectively technology empowers them to make decisions for themselves about the content they and their children consume, what data they allow to be shared about themselves on social networking sites or while browsing, and so on. Indeed, Adam Thierer and I wrote a lengthy paper, What Unites Advocates of Speech Controls & Privacy Regulation?, attacking such elitism when it is enforced by paternalistic laws that assume everyone has the same values and that only the wise philosopher-kings of technology policy can possibly protect us all from our own stupidity.
But of course there are plenty of stupid people in the world, and they often do very stupid things—like walking along the side of a highway, with just a few feet between a noise barrier and passing cars, because “Google Maps told you to do so!” That’s essentially what Lauren Rosenberg claims in her very stupid lawsuit against Google, after she was hit by a passing car while following directions from the beta walking-directions tool in Google Maps—and despite the warning Google provided. Danny Sullivan tells the full story at Search Engine Land, complete with photos that should have caused any reasonably prudent person to think, “Hey, wait a minute, maybe that warning label I saw telling me the suggested route might lack sidewalks or pedestrian paths was actually there for a reason!”
Rosenberg seeks several hundred thousand dollars in damages from Harwood (the driver who hit her) and Google, asserting that Google was negligent and failed to adequately warn her. The key policy issue this case raises is the same one that runs through many, many aspects of Internet policy: How much disclosure is enough? As the photos in Danny’s post clearly show, Google did warn Rosenberg; so the real danger in this case is that the courts (or, down the road, lawmakers) could impose ever-higher disclosure standards, requiring websites to display far more obnoxious warning labels than they would provide on their own. This reminds me of my all-time favorite warning label (on a collapsible baby stroller): “REMOVE BABY BEFORE FOLDING!” (A contest for similarly inane real-life warnings can be found here.) Continue reading →
NY venture capitalist Fred Wilson notes eight advantages of using the iPhone’s Safari browser over iPhone apps to access content. Fred’s arguments seem pretty sound to me and help to illustrate the point I was trying to make a few months ago in a heated exchange over Adam’s post on Apple’s App Store, Porn & “Censorship”: Although Apple restricts pornographic apps, it does not restrict what iPhone (or iPad or iPod Touch) users can access in their browsers. (And it’s not censorship, anyway, because censorship is what governments do!)
As I noted in that exchange, the main practical advantage apps hold over the browser right now seems to be the ability to play videos from websites that require Flash—which is especially useful for porn! Apple has rejected using Flash on the iPhone on technical grounds, in favor of HTML5, which will allow websites to display video without Flash—including on mobile devices. But once HTML5 is implemented (large-scale adoption is expected in 2012), this primary advantage of apps over mobile Safari will disappear: Users will be able to view porn in their browsers without needing to rely on apps—and Apple’s control over apps based on their content will no longer matter so much, if at all.
Of course, it may take several more years for HTML5 to really become the standard, but what matters is that all Apple products, including mobile Safari, already support HTML5. So it’s just a question of when porn sites move from Flash to HTML5. That shift seems to be underway, with major porn publishers already starting the transition. The main stumbling block seems to be HTML5 support from the other browser makers. But Internet Explorer 9 supports HTML5 and is expected out early in 2011, with a beta version due out this August. Mozilla’s Firefox 4.0 (formerly 3.7) also promises HTML5 support and is due out this November. Since porn publishers have always been on the cutting edge of implementing new web technologies, I’d bet we’ll start seeing many porn sites move to HTML5 by this Christmas. And by Christmas 2011, as we all sit around the fire with Grandma sipping eggnog and enjoying our favorite adult websites on our overpriced-but-elegant Apple products loading in HTML5 in the Safari browser, we’ll all look back and wonder why anyone made such a big deal about Apple restricting porn apps.
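In case the mechanics sound mysterious, the switch is trivial from a publisher’s perspective: the page simply asks the browser whether it can play HTML5 video and falls back to a Flash player only if it can’t. Here’s a minimal sketch of that check in TypeScript using the standard browser DOM API; the MIME type and the Flash fallback markup are my own illustrative assumptions, not any particular site’s code.

```typescript
// Minimal sketch (illustration only): choose between an HTML5 <video>
// element and a hypothetical Flash fallback.

function supportsHtml5Video(mimeType: string): boolean {
  const probe = document.createElement("video");
  // canPlayType returns "", "maybe", or "probably"
  return typeof probe.canPlayType === "function" &&
         probe.canPlayType(mimeType) !== "";
}

function embedVideo(container: HTMLElement, src: string): void {
  if (supportsHtml5Video("video/mp4")) {
    const video = document.createElement("video");
    video.src = src;
    video.controls = true;
    container.appendChild(video);
  } else {
    // Hypothetical Flash fallback; "player.swf" is a placeholder filename.
    container.innerHTML =
      `<object type="application/x-shockwave-flash" data="player.swf">` +
      `<param name="flashvars" value="file=${src}" /></object>`;
  }
}
```

Once every major browser answers “probably,” the Flash branch simply stops mattering, which is the whole point.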
Oh, and if you get tired of waiting, get an Android phone! Anyway, here are my comments on Adam’s February post: Continue reading →
“Don’t turn COPPA into a sweeping age verification mandate for the Internet!” That was essentially the core message of the joint comments (below) that Adam Thierer and I filed today with the Center for Democracy & Technology and the Electronic Frontier Foundation on the FTC’s Implementation Review of the rules implementing the Children’s Online Privacy Protection Act of 1998 (which requires verifiable parental consent for kids under 13 to use most interactive sites and services if those sites are “directed to” them, or if the site has “actual knowledge” that it might be collecting personal information from such kids or allowing them to share such information through the site).
Specifically, we counsel the Commission against expanding COPPA beyond its original, limited purposes and scope, or calling on Congress to enact an expansion. In a techno-functional sense, COPPA is already “expansive,” since it is device- and technology-neutral, applying to essentially any site or service that uses the Internet. That flexibility should allow the FTC to apply the statute in a changing landscape without further legislative changes. But we explain why COPPA is necessarily narrow in its age scope and in the “directed to” and “actual knowledge” concepts that actually trigger COPPA’s requirements—and why changing any one of these three critical parameters would inevitably lead to unconstitutional restrictions on the speech rights of adults, minors, and site operators, while actually reducing online privacy without enhancing children’s online safety.
We call instead for the agency (i) to use the breadth and flexibility already given to it by Congress in the COPPA statute to enforce the statute in a manner consistent with the rapidly changing technical landscape and (ii) to supplement enforcement of that existing law with increased educational efforts and promotion of parental empowerment solutions.
Adam and I certainly have our differences with CDT and EFF on some issues, but this is not one of them! I’m deeply proud to join these organizations in pointing out the unintended consequences of expanding regulation in an area where all too many people stop thinking carefully about its effects because, they seem to think, “We can never do enough for the children!” As we point out in our comments, the trade-offs here aren’t just between “The Children” and anyone’s narrow economic interests, but run far, far deeper. Adam and I did our best to succinctly capture the true, complex cluster of issues at stake in the title of the paper we released last summer about COPPA expansion: “COPPA 2.0: The New Battle over Privacy, Age Verification, Online Safety & Free Speech.”
The stakes here for our digital future could hardly be higher, yet they could hardly be more subtle. Continue reading →
Great piece by ZDNet’s Ed Bott on How a decade of antitrust oversight has changed your PC. Here are his four categories of costs:
- Thanks for all the crapware, Judge
- Competition among browsers? It took a long, long time
- You’re less safe online.
- You want software with that OS? Go download it.
He explains his points brilliantly, so it’s well worth reading the article. On point #3, Bott notes:
Microsoft Security Essentials is available to any Windows PC as a free download, but it’s still not available as part of Windows itself. The Windows 7 Action Center will warn you if you don’t have antivirus software installed, but clicking the Find a Program Online button takes you to this page, where Microsoft’s free offering is one of 23 options, most of which are paid products….
I think the mere threat of an antitrust complaint from a big opponent like Symantec or McAfee has been enough to make Microsoft shy away from doing what is clearly in its customers’ best interests. Although Microsoft Security Essentials is free, it’s not included with Windows. And ironically, even though Microsoft’s offering is free and gets excellent reviews, you’re unlikely to find it on a new PC. Why? Because those competitors who sell antivirus software actually pay PC makers to preload their products, banking, literally, on the fact that a significant percentage of them will pay for an annual subscription.
It’s worth pointing out that there are three possible costs to consumers here: Continue reading →
A diverse group of technology companies including broadband, video and wireless providers as well as Google, Microsoft and hardware giants like Intel and Cisco today launched the Broadband Internet Technical Advisory Group (BITAG or TAG) to provide exactly the kind of self-regulatory forum for dealing with concerns about network management practices that we at PFF have long called for—most recently in Adam Thierer and Mike Wendy’s paper, “The Constructive Alternative to Net Neutrality Regulation and Title II Reclassification Wars.” But rather than applauding BITAG, the regulatory radicals at Free Press insisted that:
this or any other voluntary effort is not a substitute for the government setting basic rules of the road for the Internet.
There must be a separate FCC rulemaking process, which can take the recommendations of this or any other voluntary advisory group into account, but rubber-stamping those recommendations would ignore the agency’s mandate to create public policy in the public interest. Allowing industry to set its own rules is like allowing BP to regulate its drilling. The Comcast BitTorrent case shows that without government oversight, Internet Service Providers will engage in what are already deemed by engineers to be bad practices
Free Press certainly wouldn’t have the influence they do if they weren’t so good at picking metaphors. But what does the oil spill really teach us about regulation? The Wall Street Journal notes the growing outrage on the political Left against President Obama from those who are “furious and frustrated that the President hasn’t demanded the heads of BP executives on pikes.” But the Journal points out the central irony of the situation:
The [so-called] liberals’ fury at the President is almost as astounding as their outrage over the discovery that oil companies and their regulators might have grown too cozy. In economic literature, this behavior is known as “regulatory capture,” and the current political irony is that this is a long-time conservative critique of the regulatory state….
In the better economic textbooks, regulatory capture is described as a “government failure,” as opposed to a market failure. It refers to the fact that individuals or companies with the highest interest or stake in a policy outcome will be able to focus their energies on politicians and bureaucracies to get the outcome they prefer.
Continue reading →
On April 29, I testified before the Senate Commerce Committee’s Consumer Protection Subcommittee on Examining Children’s Privacy: New Technologies and the Children’s Online Privacy Protection Act (COPPA). Today, I filed 23 pages of responses to questions for the Congressional Record from Subcommittee Chairman Mark Pryor (D-AR), touching on many of the concerns and issues Adam Thierer and I developed in our May 2009 paper, COPPA 2.0: The New Battle over Privacy, Age Verification, Online Safety & Free Speech.
At the April hearing, Senators asked whether COPPA could be improved. Today, as in my April oral and written testimony, I again urged lawmakers to “tread carefully” because COPPA, as implemented, basically works. I explained why COPPA’s technological neutrality and flexibility should allow the FTC to keep pace with technological convergence and change without the need for legislative changes. But expanding the statute beyond its limited purposes, especially to cover adolescents under 18, could raise serious constitutional questions about the First Amendment rights of adults, older teens, and site and service operators. It could also have unintended consequences for the health of online content and services without necessarily making children significantly safer or more private online.
The Committee’s follow-up questions also inquired about COPPA’s implementation, the subject of today’s FTC Roundtable. I noted that COPPA implementation has gone reasonably well, meeting its primary goal of enhancing parental involvement in children’s online activities. But that success has come at a price: the costs of obtaining verifiable parental consent and otherwise complying with COPPA have discouraged some site and service operators from allowing children on their sites or offering child-oriented content, and have raised costs for the child-oriented sites that remain. The FTC could do more to lower compliance costs for website operators, thus achieving COPPA’s goals at a lower price to parents and kids in foregone content and services.
Finally, I raised concerns about the FTC’s seeming invitation for changes to the COPPA statute itself. As a general matter, regulatory agencies should not be in the business of re-assessing the adequacy of their own powers, since the natural impulse of all bureaucracy is to grow. Though the agency has done a yeoman’s job of implementing COPPA, ultimately it is the responsibility of Congress, not the FTC, to make decisions about modifying the statute. Continue reading →
Today, Facebook announced significant improvements to its privacy management tools. As explained in the new Privacy Guide, this upgrade allows users to exercise greater and easier choice over sharing of their information on the site and through the site to third party applications and external websites.
By giving users powerful new tools to further protect their privacy, Facebook has employed a potent weapon for dealing with marketplace apprehensions: self-regulation. Government intervention stands little chance of acting as swiftly or as effectively on such matters. Rather than short-circuiting the self-regulatory process, we should trust that users are capable of choosing for themselves if given the right tools, and that companies like Facebook will respond to reputational pressure to develop, and constantly improve, those tools. That approach is far more likely to move us toward the ideal of user empowerment than is heavy-handed government regulation, which would override marketplace experimentation and have many unintended consequences for free online sites and services like Facebook.
Today’s announcement represents a major leap forward for privacy controls, but of course the company will have to keep innovating in this area as it does in others. In particular, I hope Facebook and other social networking services like MySpace, Buzz, LinkedIn and Flickr will all work on the next logical step forward: building application programming interfaces (APIs) that allow third-party tools to tap into each site’s unique privacy settings, so that users can have a single “dashboard” for controlling how they share data across platforms. Continue reading →
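No such cross-platform privacy API exists today, so treat the following purely as a sketch of the idea: a common TypeScript interface that a third-party dashboard tool could code against if each network chose to expose its settings this way. Every type, field and method name below is hypothetical.

```typescript
// Hypothetical shape of a privacy API a social network could expose so a
// third-party "privacy dashboard" can read and update a user's settings.
// Nothing here corresponds to any real site's API.

type Audience = "only_me" | "friends" | "friends_of_friends" | "everyone";

interface PrivacySetting {
  key: string;          // e.g. "photos", "status_updates", "contact_info"
  audience: Audience;
  sharedWithApps: boolean;
}

interface PrivacyProvider {
  siteName: string;     // e.g. "Facebook", "MySpace", "LinkedIn"
  listSettings(userToken: string): Promise<PrivacySetting[]>;
  updateSetting(userToken: string, setting: PrivacySetting): Promise<void>;
}

// A dashboard would aggregate providers and apply one choice everywhere.
async function setEverythingTo(
  providers: PrivacyProvider[],
  userTokens: Map<string, string>,
  audience: Audience
): Promise<void> {
  for (const provider of providers) {
    const token = userTokens.get(provider.siteName);
    if (!token) continue;
    const settings = await provider.listSettings(token);
    for (const setting of settings) {
      await provider.updateSetting(token, { ...setting, audience });
    }
  }
}
```

The design point is simply that once each site exposes its settings in a common shape, a user’s choice can be applied across every platform with one click.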
We had a great discussion yesterday about the technical underpinnings of the ongoing privacy policy debate in light of the discussion draft of privacy legislation recently released by Chairman Rick Boucher (see PFF’s initial comments here and here). I moderated a free-wheeling discussion among a terrific panel consisting of:
Here’s the audio (video to come!)
Ari got us started with an intro to the Boucher bill, and Shane offered an overview of the technical mechanics of online advertising and why it requires data about what users do online. Lorrie & Ari then talked about concerns about data collection, leading into a discussion of the challenges and opportunities for empowering privacy-sensitive consumers to manage their online privacy without breaking the advertising business model that sustains most Internet content and services. In particular, we had a lengthy discussion of the need for computer-readable privacy disclosures like P3P (pioneered by Lorrie & Ari) and the CLEAR standard developed by Yahoo! and others, both as a vital vehicle for self-regulation and as an essential ingredient in any regulatory system that requires notice of the data collection practices of every tracking element on a page. Continue reading →
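To make “computer-readable privacy disclosures” a bit more concrete: under P3P, a site publishes a policy reference file at the well-known location /w3c/p3p.xml and may also send a “compact policy” of standardized tokens in a P3P response header, either of which software can check automatically against a user’s preferences. Here’s a rough TypeScript sketch of a tool probing for both; the reporting is a placeholder, not a real P3P (or CLEAR) evaluator.

```typescript
// Sketch: probe a site for machine-readable P3P privacy disclosures.
// A real tool (like a P3P-aware browser) would parse the policy XML and
// compare the declared practices against the user's preferences; this
// just reports whether the disclosures are published at all.

async function checkP3P(origin: string): Promise<void> {
  // Compact policies are standardized tokens sent in the P3P response header.
  const head = await fetch(origin, { method: "HEAD" });
  const compact = head.headers.get("P3P");
  console.log(compact
    ? `Compact policy advertised: ${compact}`
    : "No compact P3P policy header found.");

  // Full policies are referenced from the well-known file /w3c/p3p.xml.
  const ref = await fetch(new URL("/w3c/p3p.xml", origin).toString());
  console.log(ref.ok
    ? "Policy reference file found at /w3c/p3p.xml."
    : "No P3P policy reference file published.");
}

checkP3P("https://www.example.com").catch(console.error);
```

The point of such machine-readable notices is exactly what we discussed on the panel: they let users (or their software) act on privacy practices automatically, instead of wading through thousands of words of legalese.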
Leo Laporte claimed today on Twitter that Facebook had censored Texas radio station KNOI Real Talk 99.7 by banning it from Facebook “for talking about privacy issues and linking to my show and Diaspora [a Facebook competitor].” Since Leo has a Twitter audience of 193,884 followers and an even larger number of listeners to his This Week In Tech (TWIT) podcast, this charge of censorship (allegedly involving another station, KRBR, too) will doubtless attract a great deal of attention, and help to lay the groundwork for imposing “neutrality” regulations on social networking sites—namely, Facebook.
Problem is: it’s just another false alarm in a long series of unfounded and/or grossly exaggerated claims. Facebook spokesman Andrew Noyes responded:
The pages for KNOI and KRBR were disabled because one of our automated systems for detecting abuse identified improper actions on the account of the individual who also serves as the sole administrator of the Pages. The automated system is designed to keep spammers and potential harassers from abusing Facebook and is triggered when a user sends too many messages or seeks to friend too many people who ignore their requests. In this case, the user sent a large number of friend requests that were rejected. As a result, his account was disabled, and in consequence, the Pages for which he is the sole administrator were also disabled. The suggestion that our automated system has been programmed to censor those who criticize us is absurd.
Absurd, yes, but when the dust has settled, how many people will remember this technical explanation, when the compelling headline is “Facebook Censors Critics!”? There is a strong parallel here to arguments for net neutrality regulations, which always boil down to claims that Internet service providers will abuse their “gatekeeper” or “bottleneck” power to censor speech they don’t like or squelch competitive threats. Here are just a few of the silly anecdotes that are constantly bandied about in these debates as a sort of “string citation” of the need for regulatory intervention: Continue reading →
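One technical footnote: the automated check Facebook describes is mundane anti-spam engineering, not editorial judgment. Here is a toy sketch in TypeScript of the kind of rate-limit heuristic involved; the thresholds and field names are invented for illustration and have nothing to do with Facebook’s actual system.

```typescript
// Toy sketch of an abuse heuristic like the one Facebook describes: flag
// accounts that blast out friend requests that are mostly ignored or
// rejected. All thresholds and names are invented for illustration only.

interface AccountActivity {
  requestsSent: number;      // friend requests sent in the last week
  requestsAccepted: number;  // how many of those were accepted
  messagesSent: number;      // messages sent in the last week
}

function shouldDisable(activity: AccountActivity): boolean {
  const MAX_MESSAGES = 500;     // hypothetical weekly ceiling
  const MAX_REQUESTS = 200;     // hypothetical weekly ceiling
  const MIN_ACCEPT_RATE = 0.2;  // hypothetical floor

  if (activity.messagesSent > MAX_MESSAGES) return true;
  if (activity.requestsSent > MAX_REQUESTS) {
    const acceptRate = activity.requestsAccepted / activity.requestsSent;
    if (acceptRate < MIN_ACCEPT_RATE) return true;  // mostly ignored requests
  }
  return false;
}
```

Notice that nothing in a rule like this looks at what the account actually says, which is why “the system censored a critic” is the wrong inference to draw.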