California Eraser Button Passes

September 26, 2013

California’s continuing effort to make the Internet its own digital fiefdom advanced this week when Gov. Jerry Brown signed legislation that creates an online “Eraser Button” just for minors. The law isn’t quite as sweeping as the seriously misguided “right to be forgotten” notion I’ve critiqued here (1, 2, 3, 4) and elsewhere (5, 6) before. In any event, the new California law will:

require the operator of an Internet Web site, online service, online application, or mobile application to permit a minor, who is a registered user of the operator’s Internet Web site, online service, online application, or mobile application, to remove, or to request and obtain removal of, content or information posted on the operator’s Internet Web site, service, or application by the minor, unless the content or information was posted by a 3rd party, any other provision of state or federal law requires the operator or 3rd party to maintain the content or information, or the operator anonymizes the content or information. The bill would require the operator to provide notice to a minor that the minor may remove the content or information, as specified.

As always, the very best of intentions motivate this proposal. There’s no doubt that some digital footprints left online by minors could come back to haunt them in the future, and that concern for their future reputation and privacy is the primary motivation for the measure. Alas, noble-minded laws like these often lead to many unintended consequences, and even some thorny constitutional issues. I’d be hard-pressed to do a better job of itemizing those potential problems than Eric Goldman, of Santa Clara University School of Law, and Stephen Balkam, Founder and CEO of the Family Online Safety Institute, have done in recent essays on the issue.

Goldman’s latest essay in Forbes argues that “California’s New ‘Online Eraser’ Law Should Be Erased” and meticulously documents the many problems with the law. “The law is riddled with ambiguities,” Goldman argues, including the fact that:

First, it may not be clear when a website/app is “directed” to teens rather than adults. The federal law protecting kids’ privacy (Children’s Online Privacy Protection Act, or COPPA) only applies to pre-teens, so this will be a new legal analysis for most websites and apps.

Second, the law is unclear about when the minor can exercise the removal right. Must the choice be made while the user is still a minor, or can a centenarian decide to remove posts that are over 8 decades old? I think the more natural reading of the statute is that the removal right only applies while the user is still a minor. If that’s right, the law would counterproductively require kids to make an “adult” decision (what content do they want to stand behind for the rest of their lives) when they are still kids.

Third, the removal right doesn’t apply if the kids were paid or received “other consideration” for their content. What does “other consideration” mean in this context? If the marketing and distribution inherently provided by a user-generated content (UGC) website is enough, the law will almost never apply. Perhaps we’ll see websites/apps offering nominal compensation to users to bypass the law.

Goldman also notes that it is unclear why California should even have the right to regulate the Internet in this fashion. It is his opinion that “states categorically lack authority to regulate the Internet because the Internet is a borderless electronic network, and websites/apps typically cannot make their electronic packets honor state borders.” I’ve been moving in that direction for the past decade myself, since patchwork policies for the Internet — regardless of the issue — can really muck up the free flow of both speech and commerce. I teased out my own concerns about this in my January essay on “The Perils of Parochial Privacy Policies” and argued that a world of “50 state Internet Bureaus isn’t likely to help the digital economy or serve the long-term interests of consumers.” Sadly, some privacy advocates seem to be cheering on this sort of parochial regulation anyway without thinking through those consequences. They are probably just happy to have another privacy law on the books, but as I always try to point out, not just in this context but also in debates over online child safety, cybersecurity, and digital copyright protection, the ends rarely justify the means. I just don’t understand why more people who care about true Internet freedom aren’t railing against these stepped-up state efforts (especially the flurry of California activity) and calling them out for the threat they are.

In an essay over on LinkedIn entitled, “Let’s Delete The ‘Eraser Button,’” Stephen Balkam points out another mystery about the new California law: “It’s unclear why this law was even proposed when there exists a range of robust reporting mechanism across the Internet landscape.” Indeed, in this particular case it seems like much of the law is redundant and unnecessary. “What this bill should have been about is education and awareness, about taking responsibility for our actions and using the tools that already exist across the social media landscape,” Balkam says. “Here are three key actions that can already be taken:

Delete – you can take down or delete postings, comments and photos that you have put up on Facebook, Twitter, YouTube and most of the other platforms.

Report – anyone can report abusive comments or inappropriate content by others about you or other people and, in many cases, have them removed.

Request – you can ask that you be untagged from a photo or that a posting or photo be removed that has been uploaded by someone else.

In addition there are in-line privacy settings on many of the leading social media sites, so that you or your teen can choose who sees what.”

Balkam is exactly right. The tools are already there; it’s the education and awareness that are lacking. As I have pointed out countless times here before, there is no need for preemptive regulatory approaches when less-restrictive and potentially equally effective remedies already exist. We just need to do a better job informing users about the existence of those tools and methods and then explain how to take advantage of them. Just adding more layers of law — especially parochial regulation — is not going to make that happen magically. Worse yet, in the process, such laws open the barn door to far more creative and meddlesome forms of state-based Internet regulation that should concern us all.

And now for the really interesting question that I have no answer to: Will anyone step up and challenge this law in court?

  • Born6’5

    Mr. Thierer,

    Once again, I am a student in a philosophy class focusing on technology and its progression in the modern world. I will be commenting on your ideas posted in your blog.

    I only have a few points to bring into question for you, the first being the discussion surrounding the clarification of teen or young teen, young adult or teen adult. This kind of language in a bill is not comforting. I believe that when language like this is used in a bill there is room for interpretation, which I believe will be a problem in the court system. People will be coming in all day trying to get a certain picture of themselves off of the internet, clogging up our already full judicial system. The mention you made about the social networks having to add these types of new laws into their agreement contracts with their users would help with this problem significantly. All the same, I believe there would still be too much of a problem with the overflow of cases. Do you believe our legal system would suffer, or is this just a necessary protection for the oncoming years of new technology?

    The argument made about kids making “adult” decisions was interesting as well. As a society we like to say that the internet is growing so fast that many people don’t know how to use it correctly and therefore end up getting themselves in trouble. I think that the amount of responsibility a kid takes on when going on the internet is enough that he or she should have the right to get information taken off of the internet if it affects them negatively. There should be no age limit when it comes to legality and the internet. If parents or schools are letting them on computers with internet access, there are bound to be a few who make mistakes; they’re kids. Do you believe that “teens” can’t make these kinds of decisions for themselves? What happens to the ages of 18 and 21? Do they only have significance in our physical world and not our progressing technological one?

    Thank you very much for your time

  • BBald

    Hello,

    I am in a philosophy class that focuses on using technology ethically and would like to make a few comments on your blog.

    I found your blog post here quite interesting, as I just heard that the law had passed while listening to the radio on my way to work this morning. The announcer who was talking about it was all for the law, but obviously ignorant of the fact, which you state at the end of the blog, that you can already delete posts/comments from many of the social networking platforms, report posts and comments, and request that info be removed when you cannot remove it yourself, thus making the law needless.

    I found Goldman’s statement that “the law would counterproductively require kids to make an ‘adult’ decision (what content do they want to stand behind for the rest of their lives) when they are still kids” very interesting. In real life (the life offline) there is no “undo/delete” button, so you have to live with the actions or statements you make no matter what age you are. The difference between online and offline is that people tend to forget your actions over time if you do not continue doing them. This allows you to “correct” a mistake by dealing with the consequences and letting time pass. At some point only a few people, if that, will remember the mistake. On the Internet, however, that “mistake” has the possibility of never being “forgotten” and hence being pulled up by a search engine. Personally I don’t have a problem with that, except that people have a tendency to take the “new info” they just found out about a person, which occurred years before, as current information, when in actuality it’s in the past and so has no bearing on who the person is now.

    I think it is impossible to restrict a person or a people to a moral or ethical code using laws. You must teach them morals and ethics, and so I too believe that ignorance of how to use technology is the bigger problem and that educating people on how to use technology is by far the better solution.
