California’s continuing effort to make the Internet its own digital fiefdom advanced this week when Gov. Jerry Brown signed legislation creating an online “Eraser Button” just for minors. The law isn’t quite as sweeping as the seriously misguided “right to be forgotten” notion I’ve critiqued here (1, 2, 3, 4) and elsewhere (5, 6) before. In any event, the new California law will:
require the operator of an Internet Web site, online service, online application, or mobile application to permit a minor, who is a registered user of the operator’s Internet Web site, online service, online application, or mobile application, to remove, or to request and obtain removal of, content or information posted on the operator’s Internet Web site, service, or application by the minor, unless the content or information was posted by a 3rd party, any other provision of state or federal law requires the operator or 3rd party to maintain the content or information, or the operator anonymizes the content or information. The bill would require the operator to provide notice to a minor that the minor may remove the content or information, as specified.
As always, the very best of intentions motivate this measure. There’s no doubt that some digital footprints left online by minors could come back to haunt them in the future, and concern for their future reputation and privacy is the primary motivation for the law. Alas, noble-minded laws like these often have unintended consequences and raise thorny constitutional issues. I’d be hard-pressed to do a better job of itemizing those potential problems than Eric Goldman, of Santa Clara University School of Law, and Stephen Balkam, Founder and CEO of the Family Online Safety Institute, have done in recent essays on the issue.
Goldman’s latest essay in Forbes argues that “California’s New ‘Online Eraser’ Law Should Be Erased” and meticulously documents the many problems with the law. “The law is riddled with ambiguities,” Goldman argues, including the fact that:
First, it may not be clear when a website/app is “directed” to teens rather than adults. The federal law protecting kids’ privacy (Children’s Online Privacy Protection Act, or COPPA) only applies to pre-teens, so this will be a new legal analysis for most websites and apps.
Second, the law is unclear about when the minor can exercise the removal right. Must the choice be made while the user is still a minor, or can a centenarian decide to remove posts that are over 8 decades old? I think the more natural reading of the statute is that the removal right only applies while the user is still a minor. If that’s right, the law would counterproductively require kids to make an “adult” decision (what content do they want to stand behind for the rest of their lives) when they are still kids.
Third, the removal right doesn’t apply if the kids were paid or received “other consideration” for their content. What does “other consideration” mean in this context? If the marketing and distribution inherently provided by a user-generated content (UGC) website is enough, the law will almost never apply. Perhaps we’ll see websites/apps offering nominal compensation to users to bypass the law.
Goldman also questions why California should even have the right to regulate the Internet in this fashion. In his opinion, “states categorically lack authority to regulate the Internet because the Internet is a borderless electronic network, and websites/apps typically cannot make their electronic packets honor state borders.” I’ve been moving in that direction myself for the past decade, since patchwork Internet policies, regardless of the issue, can really muck up the free flow of both speech and commerce. I teased out my own concerns about this in my January essay on “The Perils of Parochial Privacy Policies,” where I argued that a world of “50 state Internet Bureaus isn’t likely to help the digital economy or serve the long-term interests of consumers.”

Sadly, some privacy advocates seem to be cheering on this sort of parochial regulation anyway without thinking through those consequences. They are probably just happy to have another privacy law on the books, but as I always try to point out, not just in this context but also in debates over online child safety, cybersecurity, and digital copyright protection, the ends rarely justify the means. I just don’t understand why more people who care about true Internet freedom aren’t railing against these stepped-up state efforts (especially the flurry of California activity) and calling them out for the threat they are.
In an essay over on LinkedIn entitled, “Let’s Delete The ‘Eraser Button,'” Stephen Balkam points out another mystery about the new California law: “It’s unclear why this law was even proposed when there exists a range of robust reporting mechanism across the Internet landscape.” Indeed, in this particular case it seems like much of the law is redundant and unnecessary. “What this bill should have been about is education and awareness, about taking responsibility for our actions and using the tools that already exist across the social media landscape,” Balkam says. “Here are three key actions that can already be taken:
Delete – you can take down or delete postings, comments and photos that you have put up on Facebook, Twitter, YouTube and most of the other platforms.
Report – anyone can report abusive comments or inappropriate content by others about you or other people and, in many cases, have them removed.
Request – you can ask that you be untagged from a photo or that a posting or photo be removed that has been uploaded by someone else.
In addition there are in-line privacy settings on many of the leading social media sites, so that you or your teen can choose who sees what.”
Balkam is exactly right. The tools are already there; it’s the education and awareness that are lacking. As I have pointed out countless times here before, there is no need for preemptive regulatory approaches when less-restrictive and potentially equally effective remedies already exist. We just need to do a better job informing users about the existence of those tools and methods and then explain how to take advantage of them. Just adding more layers of law — especially parochial regulation — is not going to make that happen magically. Worse yet, in the process, such laws open the barn door to far more creative and meddlesome forms of state-based Internet regulation that should concern us all.
And now for the really interesting question that I have no answer to: Will anyone step up and challenge this law in court?