Online Child Safety – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

Again, We Should Not Ban All Teens from Social Media
https://techliberation.com/2022/07/05/again-we-should-not-ban-all-teens-from-social-media/
Wed, 06 Jul 2022 00:16:49 +0000

A growing number of conservatives are calling for Big Government censorship of social media speech platforms. Censorship proposals are to conservatives what price controls are to radical leftists: completely outlandish, unworkable, and usually unconstitutional fantasies of controlling things that are ultimately much harder to control than they realize. And the costs of even trying to impose and enforce such extremist controls are always enormous.

Earlier this year, The Wall Street Journal ran a response I wrote to a proposal set forth by columnist Peggy Noonan in which she proposed banning everyone under 18 from all social-media sites (“We Can Protect Children and Keep the Internet Free,” Apr. 15). I expanded upon that letter in an essay here entitled, “Should All Kids Under 18 Be Banned from Social Media?” National Review recently published an article penned by Christine Rosen in which she also proposes to “Ban Kids from Social Media.” And just this week, Zach Whiting of the Texas Public Policy Foundation published an essay on “Why Texas Should Ban Social Media for Minors.”

I’ll offer a few more thoughts here in addition to what I’ve already said elsewhere. First, here is my response to the Rosen essay. National Review gave me 250 words to respond to her proposal:

While admitting that “law is a blunt instrument for solving complicated social problems,” Christine Rosen (“Keep Them Offline,” June 27) nonetheless downplays the radicalness of her proposal to make all teenagers criminals for accessing the primary media platforms of their generation. She wants us to believe that allowing teens to use social media is the equivalent of letting them operate a vehicle, smoke tobacco, or drink alcohol. This is false equivalence. Being on a social-media site is not the same as operating two tons of steel and glass at speed or using mind-altering substances. Teens certainly face challenges and risks in any new media environment, but to believe that complex social pathologies did not exist before the Internet is folly. Echoing the same “lost generation” claims made by past critics who panicked over comic books and video games, Rosen asks, “Can we afford to lose another generation of children?” and suggests that only sweeping nanny-state controls can save the day. This cycle is apparently endless: Those “lost generations” grow up fine, only to claim it’s the next generation that is doomed! Rosen casually dismisses free-speech concerns associated with mass-media criminalization, saying that her plan “would not require censorship.” Nothing could be further from the truth. Rosen’s prohibitionist proposal would deny teens the many routine and mostly beneficial interactions they have with their peers online every day. While she belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to be a better response than the repressive regulatory regime she would have Big Government impose on society.

I have a few more things to say beyond these brief comments.

First, as I alluded to in my short response to Rosen, we’ve heard similar “lost generation” stories before. Rosen might as well be channeling the ghost of Dr. Fredric Wertham (author of Seduction of the Innocent), who in the 1950s declared comic books a public health menace and lobbied lawmakers to restrict teen access to them, insisting such comics were “the cause of a psychological mutilation of children.” The same sort of “lost generation” predictions were commonplace in countless anti-video game screeds of the 1990s. Critics were writing books with titles like Stop Teaching Our Kids to Kill and referring to video games as “murder simulators.” Ironically, just as the video game panic was heating up, juvenile crime rates were plummeting. But that didn’t stop the pundits and policymakers from suggesting that an entire generation of so-called “vidiots” were headed for disaster. (See my 2019 short history: “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics”).

It is consistently astonishing to me how, as I noted in a 2012 essay, “We Always Sell the Next Generation Short.” There seems to be a never-ending cycle of generational mistrust. “There has probably never been a generation since the Paleolithic that did not deplore the fecklessness of the next and worship a golden memory of the past,” notes Matt Ridley, author of The Rational Optimist.

For example, in 1948, the poet T. S. Eliot declared: “We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.” We’ve heard parents (and policymakers) make similar claims about every generation since then.

What’s going on here? Why does this cycle of generational pessimism and mistrust persist? In a 1992 journal article, the late journalism professor Margaret A. Blanchard offered this explanation:

“[P]arents and grandparents who lead the efforts to cleanse today’s society seem to forget that they survived alleged attacks on their morals by different media when they were children. Each generation’s adults either lose faith in the ability of their young people to do the same or they become convinced that the dangers facing the new generation are much more substantial than the ones they faced as children.”

In a 2009 book on culture, my colleague Tyler Cowen also noted how, “Parents, who are entrusted with human lives of their own making, bring their dearest feelings, years of time, and many thousands of dollars to their childrearing efforts.” Unsurprisingly, therefore, “they will react with extreme vigor against forces that counteract such an important part of their life program.” This explains why “the very same individuals tend to adopt cultural optimism when they are young, and cultural pessimism once they have children,” Cowen says.

Building on Blanchard and Cowen’s observations, I have argued that the simplest explanation for this phenomenon is that many parents and cultural critics have passed through their “adventure window.” The willingness of humans to try new things and experiment with new forms of culture—our “adventure window”—fades rapidly after certain key points in life, as we gradually settle into our ways. As the English satirist Douglas Adams once humorously noted: “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

There is no doubt social media can create or exacerbate certain social pathologies among youth. But pro-censorship conservatives want to take the easy way out with a Big Government media ban for the ages.

Ultimately, it’s a solution that will not be effective. Raising children and mentoring youth is certainly the hardest task we face as adults because simple solutions rarely exist for complex human challenges. The issues kids face are often particularly hard for parents and other adults to grapple with because we frequently fail to fully understand the unique issues each generation faces, and we almost certainly fail to fully grasp the nature of each new medium that youth embrace. Simplistic solutions–even proposals for outright bans–will not work or solve serious problems.

An outright government ban on online platforms or digital devices is likely never going to happen due to First Amendment constraints. But even ignoring the jurisprudential barriers, bans won’t work for a reason that these conservatives never bother considering: Many parents will help their kids get access to those technologies and evade restrictions on their use. Countless parents already do so in violation of COPPA rules, and not just because they worry that their kid won’t have access to what some other kids have. Rather, many parents (like me) want both to make sure they can more easily communicate with their kids and to ensure that their kids can enjoy those technologies and use them to explore the world.

These conservatives might think a parent like me is a monster for allowing my (now grown) children to get on social media when they were teens. I wasn’t blind to the challenges, but recognized that sticking one’s head in the sand or hoping for divine intervention from the Nanny State was impractical and unwise. The hardest conversations I ever had with my kids were about the ugliness they sometimes experienced online, but those conversations were also countered by the many joys that I knew online interactions brought them. Shall I tell you about everything my son learned online before 13 about building model rockets or soapbox derby cars? Or the countless sites my daughter visited gathering ideas for her arts and crafts projects when, before the age of 13, she started hand-painting and selling jean jackets (eventually prompting her to pursue an art school degree)? Again, as I noted in my National Review response, Rosen’s prohibitionist proposal would deny teens these experiences and the countless other routine and entirely beneficial interactions that they have with their peers online every day.

There is simply no substitute for talking to your kids in the most open, understanding, and loving fashion possible. My #1 priority with my own children was not foreclosing all the new digital media platforms and devices at their disposal. That was going to be almost impossible. Other approaches are needed.

Yes, of course, the world can be an ugly place. I mean, have you ever watched the nightly news on television? It’s damn ugly. Shouldn’t we block youth access to it when scenes of war and violence are shown? Newspapers are full of ugliness, too. Should a kid be allowed to see the front page of the paper when it discusses or shows the aftermath of school shootings, acts of terrorism, or even just natural disasters? I could go on, but you get the point. And you could try to claim that somehow today’s social media environment is significantly worse for kids than the mass media of old, but you cannot prove it.

Of course you’ll have anecdotes, and many of them will again point to complex social pathologies. But I have entire shelves full of books on my office wall that made similar claims about the effects of books, the telephone, radio and television, comics, cable TV, every musical medium ever, video games, and advertising efforts across all these mediums. Hundreds upon hundreds of studies were done over the past half century about the effects of depictions of violence in movies, television, and video games. And endless court battles ensued.

In the end, nothing came of it because the literature was inconclusive and frequently contradictory. After many years of panicking about youth and media violence, in 2020, the American Psychological Association issued a new statement slowly reversing course on misguided past statements about video games and acts of real-world violence. The APA’s old statement said that evidence “confirms [the] link between playing violent video games and aggression.” But the APA has come around and now says that, “there is insufficient scientific evidence to support a causal link between violent video games and violent behavior.” More specifically, the APA now says: “Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.”

This is exactly what we should expect to find true for youth and social media. Most of the serious scholars in the field already note that studies and findings about youth and social media must be carefully evaluated and that many other factors need to be considered whenever evaluating claims about complex social phenomena.

While Rosen belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to represent the best first-order response when compared to the repressive regulatory regime she would impose on society.

Finally, I want to reiterate what I said in my brief National Review response about the enormous challenges associated with mass criminalization of speech platforms. Rosen seems to imagine that all the costs and controversies will lie on the supply side of social media: just call for a ban, and then magically all kids disappear from social media and the big evil tech capitalists eat all the costs and hassles. Nonsense. It’s the demand side of criminalization efforts where the most serious costs lie. What do you really think kids are going to do if Uncle Sam suddenly does ban everyone under 18 from going on a “social media site,” whatever that very broad term entails? This will become another sad chapter in the history of Big Government prohibitionist efforts that fail miserably, but not before declaring mass groups of people criminals–this time including everyone under 18–and then trying to throw the book at them when they seek to evade those repressive controls. There are better ways to address these problems than with such extremist proposals.


Additional Reading from Adam Thierer on Media & Content Regulation:

Should All Kids Under 18 Be Banned from Social Media?
https://techliberation.com/2022/04/18/should-all-kids-under-18-be-banned-from-social-media/
Mon, 18 Apr 2022 15:00:00 +0000

This weekend, The Wall Street Journal ran my short letter to the editor entitled, “We Can Protect Children and Keep the Internet Free.” My letter was a response to columnist Peggy Noonan’s April 9 op-ed, “Can Anyone Tame Big Tech?” in which she proposed banning everyone under 18 from all social-media sites. She specifically singled out TikTok, YouTube, and Instagram and argued, “You’re not allowed to drink at 14 or drive at 12; you can’t vote at 15. Isn’t there a public interest here?”

I briefly explained why Noonan’s proposal is neither practical nor sensible, noting how it:

would turn every kid into an instant criminal for seeking access to information and culture on the dominant medium of their generation. I wonder how she would have felt about adults proposing to ban all kids from listening to TV or radio during her youth. Let’s work to empower parents to help them guide their children’s digital experiences. Better online-safety and media-literacy efforts can prepare kids for a hyperconnected future. We can find workable solutions that wouldn’t usher in unprecedented government control of speech.

Let me elaborate just a bit because this was the focus of much of my writing a decade ago, including my book, Parental Controls & Online Child Protection: A Survey of Tools & Methods, which spanned several editions. Online child safety is a matter I take seriously, and the concerns that Noonan raised in her op-ed have been heard repeatedly since the earliest days of the Internet. Regulatory efforts were tried almost immediately. They focused on restricting underage access to objectionable online content (as well as video games), but were quickly challenged and struck down as unconstitutionally overbroad restrictions on free speech under the First Amendment.

But practically speaking, most of these efforts were never going to work anyway. There was almost no way to bottle up all the content flowing in the modern information ecosystem without highly repressive regulation, and it was going to be nearly impossible to keep kids off the Internet altogether when it was the dominant communications and entertainment medium of their generation. The first instinct of every moral panic wave–from the waltz to comic books to rock or rap music to video games–has often been to take the easy way out by proposing sweeping bans on all access by kids to the content or platforms of their generation. It never works.

Nor should it. There is a huge amount of entirely beneficial speech, content, and communications that kids would be denied by such sweeping bans, which would make such a ban highly counterproductive. But, again, such efforts usually were not practically enforceable because kids are often better at the cat-and-mouse game than adults give them credit for. Moreover, imposing age limitations on speech or content is far more difficult than imposing age-related bans on specific tangible products, like tobacco or other dangerous physical goods.

Acknowledging these realities, most sensible people quickly move on from extreme proposals like flat bans on all kids using the popular media platforms and systems of the day. Over the past half century in the U.S., this has led to a flowering of more decentralized governance approaches to kids and media that I have referred to as the “3E approach.” That stands for empowerment (of parents), education (of youth), and enforcement (of existing laws). The 3E approach includes a variety of mechanisms, including: self-regulatory codes, private content rating systems, a wide variety of different parental control technologies, and much more.

Over the past two decades, many multistakeholder initiatives and blue-ribbon commissions were created to address online safety issues in a holistic fashion. I summarized their conclusions in my 2009 report, “Five Online Safety Task Forces Agree: Education, Empowerment & Self-Regulation Are the Answer.” The crucial takeaway from all these task forces and commissions is that no silver-bullet solutions exist to hard problems. Child safety demands a vigilant but adaptive approach, rooted in a variety of best practices, educational approaches, and technological empowerment solutions to address various safety concerns. Digital literacy is particularly crucial to building wiser, more resilient kids and adults, who can work together to find constructive approaches to hard problems.

Importantly, our task here is never done. This is an ongoing and evolving process. Issues like underage access to pornography or violent content have been with us for a very long time and will never be completely “solved.” We must constantly work to improve existing online safety mechanisms while also devising new solutions for our rapidly evolving information ecosystem. Nothing should be off the table except the one solution that Noonan suggested in her essay. Just proposing outright bans on kids on social media or any other new media platform (VR will be next) is an unworkable and illogical response that we should dismiss fairly quickly. No matter how well-intentioned such proposals may be, moral panic-induced prohibitions on kids and media ultimately are not going to help them learn to live better, safer, and more enriching lives in the new media ecosystems of today or the future. We can do better.


Adam Thierer on Permissionless Innovation
https://techliberation.com/2014/05/13/thierer/
Tue, 13 May 2014 10:00:30 +0000

Adam Thierer, senior research fellow with the Technology Policy Program at the Mercatus Center at George Mason University, discusses his latest book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Thierer discusses which types of policies promote technological discoveries as well as those that stifle the freedom to innovate. He also takes a look at new technologies — such as driverless cars, drones, big data, smartphone apps, and Google Glass — and how the American public will adapt to them.


Problematic “Do Not Track Kids” Bill Reintroduced
https://techliberation.com/2013/11/14/problematic-do-not-track-kids-bill-reintroduced/
Thu, 14 Nov 2013 20:27:58 +0000

Sen. Edward J. Markey (D-Mass.) and Rep. Joe Barton (R-Texas) have reintroduced their “Do Not Track Kids Act,” which, according to this press release, “amends the historic Children’s Online Privacy Protection Act of 1998 (COPPA), will extend, enhance and update the provisions relating to the collection, use and disclosure of children’s personal information and establishes new protections for personal information of children and teens.” I quickly scanned the new bill and it looks very similar to their previous bill of the same name that they introduced in 2011 and which I wrote about here and then critiqued at much greater length in a subsequent Mercatus Center working paper (“Kids, Privacy, Free Speech & the Internet: Finding The Right Balance”).

Since not much appears to have changed, I would just encourage you to check out my old working paper for a discussion of why this legislation raises a variety of technical and constitutional issues. But I remain perplexed by how supporters of this bill think they can devise age-stratified online privacy protections without requiring full-blown age verification for all Internet users. And once you go down that path, as I note in my paper, you open up a huge Pandora’s Box of problems that we have already grappled with for many years now. As I noted in my paper, the real irony here is that the “problem with these efforts is that expanding COPPA would require the collection of more personal information about kids and parents. For age verification to be effective at the scale of the Internet, the collection of massive amounts of additional data is necessary.”

But that’s hardly the only problem. How about the free speech rights of teens? They do have some, after all, but this bill could create new limitations on their ability to freely surf the Internet, gather information, and communicate with others.

In the end, I don’t expect this bill to pass; it’s mostly just political grandstanding “for the children.” But it’s a real shame that smart people waste their time with counter-productive and constitutionally suspect measures such as these instead of focusing their energy on more constructive educational efforts and awareness-building approaches to online safety and privacy concerns. Again, read my paper for more details on that alternative approach to these issues.

California Eraser Button Passes
https://techliberation.com/2013/09/26/california-eraser-button-passes/
Thu, 26 Sep 2013 14:32:09 +0000

California’s effort to make the Internet its own digital fiefdom continued this week when Gov. Jerry Brown signed legislation that creates an online “Eraser Button” just for minors. The law isn’t quite as sweeping as the seriously misguided “right to be forgotten” notion I’ve critiqued here (1, 2, 3, 4) and elsewhere (5, 6) before. In any event, the new California law will:

require the operator of an Internet Web site, online service, online application, or mobile application to permit a minor, who is a registered user of the operator’s Internet Web site, online service, online application, or mobile application, to remove, or to request and obtain removal of, content or information posted on the operator’s Internet Web site, service, or application by the minor, unless the content or information was posted by a 3rd party, any other provision of state or federal law requires the operator or 3rd party to maintain the content or information, or the operator anonymizes the content or information. The bill would require the operator to provide notice to a minor that the minor may remove the content or information, as specified.

As always, the very best of intentions motivate this proposal. There’s no doubt that some digital footprints left online by minors could come back to haunt them in the future, and that concern for their future reputation and privacy is the primary motivation for the measure. Alas, noble-minded laws like these often lead to many unintended consequences, and even some thorny constitutional issues. I’d be hard-pressed to do a better job of itemizing those potential problems than Eric Goldman, of Santa Clara University School of Law, and Stephen Balkam, Founder and CEO of the Family Online Safety Institute, have done in recent essays on the issue.

Goldman’s latest essay in Forbes argues that “California’s New ‘Online Eraser’ Law Should Be Erased” and meticulously documents the many problems with the law. “The law is riddled with ambiguities,” Goldman argues, including the fact that:

First, it may not be clear when a website/app is “directed” to teens rather than adults. The federal law protecting kids’ privacy (Children’s Online Privacy Protection Act, or COPPA) only applies to pre-teens, so this will be a new legal analysis for most websites and apps. Second, the law is unclear about when the minor can exercise the removal right. Must the choice be made while the user is still a minor, or can a centenarian decide to remove posts that are over 8 decades old? I think the more natural reading of the statute is that the removal right only applies while the user is still a minor. If that’s right, the law would counterproductively require kids to make an “adult” decision (what content do they want to stand behind for the rest of their lives) when they are still kids. Third, the removal right doesn’t apply if the kids were paid or received “other consideration” for their content. What does “other consideration” mean in this context? If the marketing and distribution inherently provided by a user-generated content (UGC) website is enough, the law will almost never apply. Perhaps we’ll see websites/apps offering nominal compensation to users to bypass the law.

Goldman also notes that it is unclear why California should even have the right to regulate the Internet in this fashion. It is his opinion that “states categorically lack authority to regulate the Internet because the Internet is a borderless electronic network, and websites/apps typically cannot make their electronic packets honor state borders.” I’ve been moving in that direction for the past decade myself since patchwork policies for the Internet — regardless of the issue — can really muck up the free flow of both speech and commerce. I teased out my own concerns about this in my January essay on “The Perils of Parochial Privacy Policies” and argued that a world of “50 state Internet Bureaus isn’t likely to help the digital economy or serve the long-term interests of consumers.” Sadly, some privacy advocates seem to be cheering on this sort of parochial regulation anyway without thinking through those consequences. They are probably just happy to have another privacy law on the books, but as I always try to point out, not just in this context but also in debates over online child safety, cybersecurity, and digital copyright protection, the ends rarely justify the means. I just don’t understand why more people who care about true Internet freedom aren’t railing against these stepped-up state efforts (especially the flurry of California activity) and calling it out for the threat that it is.

In an essay over on LinkedIn entitled, “Let’s Delete The ‘Eraser Button,'” Stephen Balkam points out another mystery about the new California law: “It’s unclear why this law was even proposed when there exists a range of robust reporting mechanism across the Internet landscape.” Indeed, in this particular case it seems like much of the law is redundant and unnecessary. “What this bill should have been about is education and awareness, about taking responsibility for our actions and using the tools that already exist across the social media landscape,” Balkam says. “Here are three key actions that can already be taken:

Delete – you can take down or delete postings, comments and photos that you have put up on Facebook, Twitter, YouTube and most of the other platforms.

Report – anyone can report abusive comments or inappropriate content by others about you or other people and, in many cases, have them removed.

Request – you can ask that you be untagged from a photo or that a posting or photo be removed that has been uploaded by someone else.

In addition there are in-line privacy settings on many of the leading social media sites, so that you or your teen can choose who sees what.”

Balkam is exactly right. The tools are already there; it’s the education and awareness that are lacking. As I have pointed out countless times here before, there is no need for preemptive regulatory approaches when less-restrictive and potentially equally effective remedies already exist. We just need to do a better job informing users about the existence of those tools and methods and then explain how to take advantage of them. Just adding more layers of law — especially parochial regulation — is not going to make that happen magically. Worse yet, in the process, such laws open the barn door to far more creative and meddlesome forms of state-based Internet regulation that should concern us all.

And now for the really interesting question that I have no answer to: Will anyone step up and challenge this law in court?

video: Education Beats Silver-Bullet Solutions for Privacy & Online Safety
https://techliberation.com/2013/07/21/video-education-beats-silver-bullet-solutions-for-privacy-online-safety/
Sun, 21 Jul 2013 17:16:55 +0000

Last month, it was my great pleasure to serve as a “provocateur” at the IAPP’s (Int’l Assoc. of Privacy Professionals) annual “Navigate” conference. The event brought together a diverse audience and set of speakers from across the globe to discuss how to deal with the various privacy concerns associated with current and emerging technologies.

My remarks focused on a theme I have developed here for years: There are no simple, silver-bullet solutions to complex problems such as online safety, security, and privacy. Instead, only a “layered” approach incorporating many different solutions–education, media literacy, digital citizenship, evolving societal norms, self-regulation, and targeted enforcement of existing legal standards–can really help us solve these problems. Even then, new challenges will present themselves as technology continues to evolve and evade traditional controls, solutions, or norms. It’s a never-ending game, and that’s why education must be our first-order solution. It better prepares us for an uncertain future. (I explained this approach in far more detail in this law review article.)

Anyway, if you’re interested in an 11-minute video of me saying all that, here ya go. Also, down below I have listed several of the recent essays, papers, and law review articles I have done on this issue.

Some of My Recent Essays on Privacy & Data Collection

Testimony / Filings:

Law Review Articles:

Blog posts:

]]>
https://techliberation.com/2013/07/21/video-education-beats-silver-bullet-solutions-for-privacy-online-safety/feed/ 0 45248
What Are We Going to Do after COPPA Fails? https://techliberation.com/2013/07/08/what-are-we-going-to-do-after-coppa-fails/ https://techliberation.com/2013/07/08/what-are-we-going-to-do-after-coppa-fails/#respond Tue, 09 Jul 2013 00:39:34 +0000 http://techliberation.com/?p=45114

This afternoon, Berin Szoka asked me to participate in a TechFreedom conference on "COPPA: Past, Present & Future of Children's Privacy & Media." [CSPAN video is here.] It was an in-depth, three-hour, two-panel discussion of the Federal Trade Commission's recent revisions to the rules issued under the 1998 Children's Online Privacy Protection Act (COPPA).

While most of the other panelists were focused on the devilish details about how COPPA works in practice (or at least should work in practice), I decided to ask a more provocative question to really shake up the discussion: What are we going to do when COPPA fails?

My notes for the event follow down below. I didn’t have time to put them into a smooth narrative, so please pardon the bullet points.

COPPA will fail in the long-run for two reasons:

(1)    With COPPA, the FTC is engaged in a technological arms race that it cannot win.

  • COPPA was formulated for a Web 1.0 world of static websites with limited interactivity. In that environment it worked reasonably well, although it certainly imposed costs on site developers and affected market structure.
  • As we moved into a Web 2.0 world of interactive social media in the mid to late 2000s, however, the rule has been strained by new marketplace realities. COPPA's drafters never really envisioned sites like Facebook, Twitter, etc.
  • In our current environment—let's call it the Web 2.5 world—we have added mobile geolocation and social discovery to the mix, and that is straining COPPA to the breaking point.
  • But we are about to enter the Web 3.0 world of the "Internet of Things": a sensor-based world in which communication technology will literally be woven into the clothes we wear and all the devices we use.
    • Cisco has estimated that by 2020, 37 billion devices will be linked together and communicating.
    • It will be almost impossible for COPPA to keep up with the explosion of these technologies because everything in our lives and our children’s lives will be interconnected, communicating, and collecting data.
    • Information will be ubiquitously collected simply by nature of the technology itself.
    • The entire Web 3.0 world will be one of comprehensive passive information collection.
    • So, notions like "collection," "directed at children," and "personal information" will become impossible to enforce absent a flat-out ban on the technologies themselves.

(2)    COPPA will also fail because of the simple reality that the more complicated and costly this regulatory regime becomes, the more likely it is that both kids and parents will ignore it or seek to actively evade it.

  • The actual monetary cost of any online service may obviously be one thing parents and kids seek to avoid.
  • But the bigger cost is the mental hassle associated with delayed gratification.
    • When people demand certain services, they want them now. And they will get them even when law gets in the way. And sometimes they value the utility / functionality that those services provide more than they value privacy.
      • A 2011 Harvard-Berkeley study pointed out that evasion is already rampant and that many parents are facilitating that result by encouraging their kids to lie about their ages online.
      • This problem will only increase in the Internet of Things era as kids and parents come to expect all their devices to be communicating at all times and retaining data for them.

So, what are we going to do about it? How do we prepare for the post-COPPA world that's coming?

  • We shouldn’t just throw up our hands in defeat.
  • But we must accept the technological and practical challenges associated with regulation and seek out alternative approaches.
  • The best solution, therefore, is education, media literacy, and digital citizenship.
    • We need to do a much better job educating both kids and adults about sensible online interactions.
    • We need to talk to our kids and each other about being more savvy, sensible, respectful, and resilient media consumers and digital citizens.
    • In encouraging our kids and fellow Netizens to be good “digital citizens,” we must stress smarter online hygiene (sensible personal data use) and better “Netiquette” (proper behavior toward others), which can further both online safety and digital privacy goals.
    • More generally, as part of these digital literacy and citizenship efforts, we must do more to explain the potential perils of over-sharing information about ourselves and others while simultaneously encouraging consumers to delete unnecessary online information occasionally and cover their digital footprints in other ways.
    • These education and literacy efforts are also important because they help us adapt to new technological changes by employing a variety of coping mechanisms or new social norms. These efforts and lessons should start at a young age and continue on well into adulthood through other means, such as awareness campaigns and public service announcements.

Additional Reading:

]]>
https://techliberation.com/2013/07/08/what-are-we-going-to-do-after-coppa-fails/feed/ 0 45114
Mr. Bitcoin goes to Washington https://techliberation.com/2013/06/13/mr-bitcoin-goes-to-washington/ https://techliberation.com/2013/06/13/mr-bitcoin-goes-to-washington/#comments Thu, 13 Jun 2013 20:58:31 +0000 http://techliberation.com/?p=44967

Today I had the great pleasure of moderating a panel discussion at a conference on the “Virtual Economy” hosted by Thomson Reuters and the International Center for Missing and Exploited Children. On my panel were representatives from the Bitcoin Foundation, the Tor Project, and the DOJ, and we had a lively discussion about how these technologies can potentially be used by criminals and what these open source communities might be able to do to mitigate that risk.

The bottom line message that came out of the panel (and indeed every panel) is that the Tor and Bitcoin communities do not like to see the technologies they develop put to evil uses, and that they are more than willing to work with policymakers and law enforcement to the extent that they can. On the flip side, the message to regulators was that they need to be more open, inclusive, and transparent in their decision making if they expect cooperation from these communities.

I was therefore interested in the keynote remarks delivered by Jennifer Shasky Calvery, the Director of the Treasury Department’s Financial Crimes Enforcement Network. In particular, she addressed the fact that since there have been several enforcement actions against virtual currency exchangers and providers, the traditional banking sector has been wary of doing business with companies in the virtual currency space. She said:

I do want to address the issue of virtual currency administrators and exchangers maintaining access to the banking system in light of the recent action against Liberty Reserve. Again, keep in mind the combined actions by the Department of Justice and FinCEN took down a $6 billion money laundering operation, the biggest in U.S. history.

We can understand the concerns that these actions may create a broad-brush reaction from banks. Banks need to assess their risk tolerance and the risks any particular client might pose. That's their obligation and that's what we expect them to do.

And this goes back to my earlier points about corporate responsibility and why it is in the best interest of virtual currency administrators and exchangers to comply with their regulatory responsibilities. Banks are more likely to associate themselves with registered, compliant, transparent businesses. And our guidance should help virtual currency administrators and providers become compliant, well-established businesses that banks will regard as desirable and profitable customers.

While it’s true that FinCEN’s March guidance provides clarity for many actors in the Bitcoin space, it is nevertheless very ambiguous about other actors. For example, is a Bitcoin miner who sells for dollars the bitcoins he mines subject to regulation? If I buy those bitcoins, hold them for a time as an investment, and then resell them for dollars, am I subject to regulation? In neither case are bitcoins acquired to purchase goods or services (the only use-case clearly not regulated according to the guidance). And even if one is clearly subject to the regulations, say as an exchanger, it takes millions of dollars and potentially years of work to comply with state licensing and other requirements. My concern is that banks will not do business with Bitcoin start-ups not because they pose any real criminal risk, but because there is too much regulatory uncertainty.

My sincere hope is that banks do not interpret Ms. Shasky Calvery’s comments as validation of their risk-aversion. Banks and other financial institutions should be careful about who they do business with, and they certainly should not do business with criminals, but it would be a shame if they felt they couldn’t do business with an innovative new kind of start-up simply because that start-up has not been (and may never be) adequately defined by a regulator. Unfortunately, I fear banks may take the comments to suggest just that, putting start-ups in limbo.

Entrepreneurs may want to comply with regulation in order to get banking services, and they may do everything they think they have to in order to comply, but the banks may nevertheless not want to take the risk given that the FinCEN guidance is so ambiguous. I asked Ms. Shasky Calvery if there was a way entrepreneurs could seek clarification on the guidance, and she said they could call FinCEN’s toll-free regulatory helpline at (800) 949–2732. That may not be very satisfying to some, but it’s a start. And I hope that any clarification that emerges from conversations with FinCEN are made public by the agency so that others can learn from it.

All in all, I think today we saw the first tentative steps toward a deeper conversation between Bitcoin entrepreneurs and users on the one hand, and regulators and law enforcement on the other. That’s a good thing. But I hope regulators understand that it’s not just the regulations they promulgate that have consequences for regulated entities, it’s also the uncertainty they can create through inaction.

Ms. Shasky Calvery also said:

Some in the press speculated that our guidance was an attempt to clamp down on virtual currency providers. I will not deny that there are some troublesome providers out there. But, that is balanced by a recognition of the innovation these virtual currencies provide, and the financial inclusion that they might offer society. A whole host of emerging technologies in the financial sector have proven their capacity to empower customers, encourage the development of innovative financial products, and expand access to financial services. And we want these advances to continue.

That is a welcome sentiment, but those advances can only continue if there are clear rules made in consultation with regulated parties and the general public. Hopefully FinCEN will revisit its guidance now that the conversation has begun, and as other regulators consider new rules, they will hopefully engage the Bitcoin community early in order to avoid ambiguity and uncertainty.

]]>
https://techliberation.com/2013/06/13/mr-bitcoin-goes-to-washington/feed/ 28 44967
CFAA and Prosecutorial Indiscretion https://techliberation.com/2013/04/05/cfaa-and-prosecutorial-indiscretion/ https://techliberation.com/2013/04/05/cfaa-and-prosecutorial-indiscretion/#comments Fri, 05 Apr 2013 20:32:50 +0000 http://techliberation.com/?p=44447

With renewed interest in the failings of the Computer Fraud and Abuse Act and the role of prosecutorial discretion in its application in light of the tragic outcome in the Aaron Swartz case, I went back to what I wrote about the law in 2009.

Back then, the victim of both the poorly drafted amendments to the CFAA that expanded its scope from government to private computer networks and the politically motivated zeal of federal prosecutors reaching for something—anything—with which to punish otherwise legal but disfavored behavior was Lori Drew, a far less sympathetic defendant.

But the dangers lurking in the CFAA were just as visible in 2009 as they are today.  Those who have recently picked up the banner calling for reform of the law might ask themselves where they were back then, and why the ultimately unsuccessful Drew prosecution didn’t raise their hackles at the time.

The law was just as bad in 2009, and just as dangerously twisted by the government.  Indeed, the Drew case, as I wrote at the time, gave all the notice anyone needed of what was to come later.

Here’s the section of The Laws of Disruption from 2009 discussing CFAA:

What did Lori Drew do?

The late-forties suburban St. Louis mother was apparently unhappy about the “mean” behavior of Megan Meier, a thirteen-year-old former friend of Drew’s daughter Sarah. The Drews, along with Ashley Grills, the eighteen-year-old employee of Lori Drew’s home business, hatched a plan. They created a fake MySpace profile for a bare-chested sixteen-year-old boy named “Josh,” who would befriend Megan and encourage her to gossip about other girls. Then they would take printouts to Megan’s mother to show her what the girl was up to.

Not only was the idea stupid, it wasn’t even original—Sarah and Megan, back when they were friends, had done the same thing, creating a profile for a boy who didn’t exist as a way to talk to other boys. This time, however, the plan went awry. Megan became deeply infatuated with Josh. She pressed for his phone number. She wanted to meet him in person. The women behind his account looked for a way out.

According to Grills, “We decided to be mean to her so she would leave him alone . . . and we could get rid of the page.” After deliberating on the easiest way to end an ill-conceived hoax that was going very wrong, Grills sent an instant message to Meier: “The world would be a better place without you.”

The consequences were tragic. Meier, who was being treated for depression, took the suggestion all too literally. After an argument with her parents, who had closely monitored the relationship with Josh from the beginning, Meier went to her room and hanged herself.

Media accounts of the teen’s suicide and the subsequent revelation of who was behind “Josh” created a froth of outrage and hand-wringing. Commentators invented and then proclaimed an epidemic of “cyberbullying.”

When it became clear that the mother of one of Meier’s former friends was involved, Drew herself was subjected to death threats and vandalism. A fake MySpace page for her husband was created. On cable news and the blogosphere, Drew was instantly convicted and sentenced to hell. (“Call me vindictive,” a typical blog entry read, “but i hope that someone kills the woman who is responsible.”)

In the midst of the media storm, state attorneys in Missouri announced there would be no prosecution of Drew for the simple reason that no criminal law had been broken. Federal prosecutors weren’t so sure. They found a 1986 law, the Computer Fraud and Abuse Act, that set stiff penalties for breaking into and damaging computers.

Drew was charged under the novel theory that since the MySpace terms of service agreement prohibits posting false information in one’s profile, the creation of Josh violated Drew’s contract. Hence, she “accessed” MySpace computers without “authorization.” The creation of Josh, in other words, was a kind of hacking. The victim was not Meier (who with her parents’ permission had also violated the TOS, which requires users to be at least fourteen years old). The victim was MySpace.

Although the jury ultimately refused to convict Drew on the felony charge, they did convict her of the lesser crime of unauthorized access. Valentina Kunasz, the jury’s foreperson, made no apologies for the conviction. “It was so very childish; so very pathetic,” she told reporters after the trial. “She could have done quite a few things to stop it, and she chose not to. And I think she got kind of a rise out of doing this to another person and that bothers me, it really irks me.” Drew faces up to three years in prison and $300,000 in fines.

Legal scholars were generally in agreement that the prosecution was deeply flawed and will very likely be set aside or reversed on appeal. (N.B.  Later, it was.) First, there were gaping holes in the government’s case. For one thing, it was Grills, and not Drew, who set up the Josh account and therefore agreed to the TOS (Grills, testifying for the prosecution in exchange for immunity, admitted she never read the TOS). Drew herself was only occasionally involved in the hoax.

By a weird twist of irony, one of the few times she communicated with Meier it turned out she was talking to Meier’s mother, who told Josh he ought to be looking for friends his own age. The fateful message was sent by Grills without Drew’s knowledge, and wasn’t even sent through MySpace.

As a matter of public policy, the prosecution is even more disturbing. Even assuming Drew was bound by the TOS, these contracts are notoriously long and intentionally unreadable. Most of us, even lawyers, don’t read them.

Yet following the logic of the Drew prosecution, anyone who misrepresents some of their personal details on an online dating service has committed a federal crime. Anyone who gives a nonworking telephone number when signing up for a Web site has committed a federal crime.

Indeed, after the verdict, one social network researcher was pained to admit, “We’ve been telling our kids to lie about ID information for a long time now.”

The computer fraud law began as a protection against hackers targeting government computers. The law has never before been used in connection with the violation, willful or otherwise, of private terms of service. There’s no reason to believe Congress intended to criminalize cyberbullying in 1986 or any other time.

Supporters of the conviction argue that the real problem here was a hole in the law—the lack of a statute outlawing whatever it was Lori Drew had done.  But the decision of lawmakers not to criminalize a behavior is no reason to correct the problem in a way that undermines the very idea of law. People are often cruel to each other. Other children, adults, and even parents can and do humiliate children in the real world. No laws, in all but extreme cases, are being broken.

It’s difficult to see how this case differs in any respect other than the use of a computer and the tragic outcome.

If the conviction stands, it effectively gives every federal prosecutor a blank check to charge anyone they want with criminal behavior, subject only to their discretion of whether and when to use that power.

Some commentators, pleased with the result if not the process, argued that there was no cause for alarm. Prosecutors, they said, will only use this power in extreme cases.

The Drew prosecution suggests precisely the opposite. For elected prosecutors, the real temptation is to exercise discretion not when the law would otherwise let a heinous crime slip through the cracks but when passions are high and the facts (at least the version presented by the media) are the most lurid—when, in other words, an angry mob demands it.

]]>
https://techliberation.com/2013/04/05/cfaa-and-prosecutorial-indiscretion/feed/ 6 44447
On the Pursuit of Happiness… and Privacy https://techliberation.com/2013/03/31/on-the-pursuit-of-happiness-and-privacy/ https://techliberation.com/2013/03/31/on-the-pursuit-of-happiness-and-privacy/#comments Sun, 31 Mar 2013 19:14:31 +0000 http://techliberation.com/?p=44261

Defining “privacy” is a legal and philosophical nightmare. Few concepts engender more definitional controversies and catfights. As someone who is passionate about his own personal privacy — but also highly skeptical of top-down governmental attempts to regulate and/or protect it — I continue to be captivated by the intellectual wrangling that has taken place over the definition of privacy. Here are some thoughts from a wide variety of scholars that make it clear just how frustrating this endeavor can be:

  • "Perhaps the most striking thing about the right to privacy is that nobody seems to have any very clear idea what it is." – Judith Jarvis Thomson, "The Right to Privacy," in Philosophical Dimensions of Privacy: An Anthology 272, 272 (Ferdinand David Schoeman ed., 1984).
  • Privacy is "exasperatingly vague and evanescent." – Arthur Miller, The Assault on Privacy: Computers, Data Banks, and Dossiers 25 (1971).
  • "[T]he concept of privacy is infected with pernicious ambiguities." – Hyman Gross, The Concept of Privacy, 42 N.Y.U. L. Rev. 34, 35 (1967).
  • "Attempts to define the concept of 'privacy' have generally not met with any success." – Colin Bennett, Regulating Privacy: Data Protection and Public Policy in Europe and the United States 25 (1992).
  • "When it comes to privacy, there are many inductive rules, but very few universally accepted axioms." – David Brin, The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? 77 (1998).
  • "Privacy is a value so complex, so entangled in competing and contradictory dimensions, so engorged with various and distinct meanings, that I sometimes despair whether it can be usefully addressed at all." – Robert C. Post, Three Concepts of Privacy, 89 Geo. L.J. 2087, 2087 (2001).
  • "[Privacy] can mean almost anything to anybody." – Fred H. Cate & Robert Litan, Constitutional Issues in Information Privacy, 9 Mich. Telecomm. & Tech. L. Rev. 35, 37 (2002).
  • Privacy has long been a "conceptual jungle" and a "concept in disarray," and "[t]he attempt to locate the 'essential' or 'core' characteristics of privacy has led to failure." – Daniel J. Solove, Understanding Privacy 196, 8 (2008).
  • "Privacy has really ceased to be helpful as a term to guide policy in the United States." – Woodrow Hartzog, quoted in Cord Jefferson, "Spies Like Us: We're All Big Brother Now," Gizmodo, Sept. 27, 2012.
  • "[F]or most consumers and policymakers, privacy is not a rational topic. It's a visceral subject, one on which logical arguments are largely wasted." – Larry Downes, A Rational Response to the Privacy "Crisis," Cato Institute Policy Analysis No. 716 (Jan. 7, 2013), at 6.

In my new Harvard Journal of Law & Public Policy article, "The Pursuit of Privacy in a World Where Information Control is Failing," I build on these insights to argue that:

  1. precisely because privacy has always been a highly subjective philosophical concept;
  2. and is also a constantly morphing notion that evolves as societal attitudes adjust to new cultural and technological realities;
  3. America may never be able to achieve a coherent fixed definition of the term or determine when it constitutes a formal right outside of some narrow contexts.

That doesn't mean privacy isn't profoundly important to many of us, but privacy is, first and foremost, an exercise of personal determination and personal responsibility. To some extent, we have to make our own privacy in this world. In this sense, we can liken it to our right to pursue happiness. Here's how I put it in Part I of my Harvard JLPP article:

Even if agreement over the scope of privacy rights proves elusive, however, everyone would likely agree that citizens have the right to pursue privacy. In this sense, we might think about the pursuit of privacy the same way we think about the pursuit of happiness. Recall the memorable line from America’s Declaration of Independence: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” Consider the importance of that qualifying phrase—“and the pursuit of”—before the mention of the normative value of happiness. America’s Founders obviously felt happiness was an important value, but they did not elevate it to a formal positive right alongside life, liberty, physical property, or even freedom of speech.

This framework provides a useful way of thinking about privacy. Even if we cannot agree whether we have a right to privacy, or what the scope of any particular privacy right should be, the right to pursue it should be as uncontroversial as the right to pursue happiness. In fact, pursuing privacy is probably an important element of achieving happiness for most citizens. Almost everyone needs some time and space to be free with their own thoughts or to control personal information or secrets that they value. But that does not make it any easier to define the nature of privacy as a formal legal right, or any easier to enforce it, even if a satisfactory conception of privacy could be crafted to suit every context.

The most stable and widely accepted privacy rights in the United States have long been those that are tethered to unambiguous tangible or physical rights, such as rights in body and property, especially the sanctity of the home. Moreover, these rights have been focused on limiting the power of state actors, not private parties. By contrast, privacy claims premised on intangible or psychological harms have found far less support, and those claims have been particularly limited for private actors relative to the government. All this will likely remain the case for online privacy. Importantly, if privacy is enshrined as a positive right even in narrowly drawn contexts, it imposes obligations on the government to secure that right. These obligations create corresponding commitments and costs that must be taken into account since government regulation always entails tradeoffs.

Therefore, even as America struggles to reach political consensus over the scope of privacy rights in the information age, it makes sense to find methods and mechanisms—most of which will lie outside of the law—that can help citizens cope with social and technological changes that affect their privacy. Part III will outline some of the ways citizens can pursue and achieve greater personal privacy.

I fully realize that this way of thinking about privacy leaves many challenging questions at the margin, and I also understand how it will be unsatisfactory to those who view privacy as a "dignity right" that trumps all other values and considerations. But, to reiterate, what I am suggesting here is that we will likely never be able to achieve a coherent fixed definition of the term or determine when it constitutes a formal right outside of some narrow contexts (such as for sensitive health or financial information, where the potential harms of collection, sharing, and use are more tangible). The primary reason for this is that privacy primarily comes down to assertions about "harms" that are mostly psychological in character. But precisely because such asserted harms (1) lack a tangible, physical, or monetary nature and (2) can come into conflict with other liberty rights (especially the right to freely gather information and speak about it; i.e., First Amendment rights), it is more difficult to classify psychological "harms" as harms at all.

I feel the same way about concepts like "safety" and "security." Who among us doubts these values and goals are important? As the father of two young digital natives, I am living a constant struggle to mentor my kids and ensure they have safe and healthy online interactions. But that doesn't mean I think anyone in this world — including my own children — has an amorphous "right to safety." What they do have is a right not to be harmed by others in their online interactions. Where things become sticky, however, is when some child safety advocates adopt an extremely expansive view of what constitutes "harm" in this context and suggest that hearing a single dirty word or seeing a fleeting dirty image somehow irrevocably "harms" their mental well-being and development, or perhaps just their personal morality. (I have written about this here in dozens of essays through the years, such as this one on "The Problem of Proportionality in Debates about Online Privacy and Child Safety," as well as in longer papers, such as my recent law review article on "Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.")

While I appreciate the diverse beliefs and values that drive sensitivities about potentially objectionable online content, it is an entirely different matter when one claims "rights" and actionable "harms" in this context. It means that politics will essentially answer what are fundamentally deeply personal "eye of the beholder" questions. It is better, I believe, to educate and empower citizens about safe and sensible online interactions and then let them determine what works best for them. Again, whether we are talking about safety or privacy, this model relies upon a certain amount of personal (and parental) responsibility.

To be sure, real harms exist and, at times, law will need to be brought in to right certain wrongs. For example, in the online safety context I favor strong penalties for anyone attempting predatory behavior or extreme forms of incessant harassment. In the privacy context, we’ll still need laws to deal with identity/data theft and certain uses of highly sensitive health and financial information. Outside of those narrow contexts, however, it is better to let people define their own online experiences free of top-down, one-size-fits-all regulatory enactments that attempt to make those determinations for all of us. To reiterate, we all have the right to pursue the objectives we care about–safety, privacy, or just happiness more generally–according to our own value systems. But we should be careful about elevating such amorphous concepts to the level of “rights” and then expecting the State to enforce one set of values and choices on a diverse citizenry.

The Pursuit of Privacy in a World Where Information Control is Failing

]]>
https://techliberation.com/2013/03/31/on-the-pursuit-of-happiness-and-privacy/feed/ 1 44261
Sean Flaim on the private enforcement of copyrights https://techliberation.com/2013/03/26/sean-flaim/ https://techliberation.com/2013/03/26/sean-flaim/#comments Tue, 26 Mar 2013 10:00:16 +0000 http://techliberation.com/?p=44345

Sean Flaim, an attorney focusing on antitrust, intellectual property, cyberlaw, and privacy, discusses his new paper “Copyright Conspiracy: How the New Copyright Alert System May Violate the Sherman Act,” recently published in the New York University Journal of Intellectual Property and Entertainment Law.

Flaim describes content owners' early attempts to enforce copyright through lawsuits as a "public relations nightmare" that humanized piracy and created outrage over large fines imposed on casual downloaders. According to Flaim, the Copyright Alert System is a more nuanced approach by the content industry to crack down on copyright infringement online, which arose in response to a government failure to update copyright law to reflect the nature of modern information exchange.

Flaim explains the six stages of the Copyright Alert System in action, noting his own suspicions about the program's stated intent as an education tool for repeat violators of copyright law online. In addition to antitrust concerns, Flaim worries that an appropriate cost-benefit analysis has not been applied to this private regulation system and, ultimately, that private companies are being granted a government-like power to punish individuals for breaking the law.

Download

Related Links

]]>
https://techliberation.com/2013/03/26/sean-flaim/feed/ 2 44345
Thoughts on Latest FTC COPPA Rule Revisions & Online Child Safety / Privacy https://techliberation.com/2012/08/09/thoughts-on-latest-ftc-coppa-rule-revisions-online-child-safety-privacy/ https://techliberation.com/2012/08/09/thoughts-on-latest-ftc-coppa-rule-revisions-online-child-safety-privacy/#comments Thu, 09 Aug 2012 19:00:10 +0000 http://techliberation.com/?p=41996

It was my honor today to be a panelist at a Hill event on “Apps, Ads, Kids & COPPA: Implications of the FTC’s Additional Proposed Revisions,” which was co-sponsored by the Family Online Safety Institute and the Association for Competitive Technology. It was a free-wheeling discussion, but I prepared some talking points for the event that I thought I would share here for anyone interested in my views about the Federal Trade Commission’s latest proposed revisions to the Children’s Online Privacy Protection Act (COPPA).

________

The Commission deserves credit for very wisely ignoring calls by some to extend the coverage of COPPA’s regulatory provisions from children under 13 all the way up to teens under 18.

  • That would have been a constitutional and technical enforcement nightmare. But the FTC realized that long ago and abandoned any thought of doing so. That is a huge win, since we won’t be revisiting the COPA age-verification wars.
  • That being said, with each tweak or expansion of COPPA, the FTC opens the door a bit wider to a discussion of some sort of age verification or age stratification scheme for the Internet.
  • And we know from recent AG activity (recall the old MySpace age-verification battle) and Hill activity (i.e., the Markey-Barton bill) that there remains an appetite for doing something more to age-segment Internet populations.

But challenging compliance issues remain with expanded COPPA regulations.

  • How do third parties accurately determine whether a site where they place a cookie or serve an ad is “directed at children” or “likely to attract an audience that includes a disproportionately large percentage of children under age 13”?
  • Let’s be clear about what is happening here: the redefinition of terms the agency is undertaking will result in an expansion of liability via regulatory relabeling.
  • There certainly is an incremental benefit associated with tweaks to the COPPA rule that strengthen its privacy protections, but it is equally true that there are corresponding incremental costs.

With each tweak or expansion of COPPA, the FTC potentially increases regulatory compliance costs, which could affect market structure, innovation, and consumer options and prices.

  • The FTC estimates that approximately 85-90% of operators potentially subject to the COPPA rule qualify as small entities, up from a prior estimate of 80%.
  • The “Rule may entail some added cost burden to operators, including those that qualify as small entities.” (p. 28) Specifically, “operators will each spend approximately 60 hours” complying with the disclosure requirements of the rule (p. 32), although the agency offers little explanation of how it arrived at that number and, despite hearing from several commenters that compliance hours were being underestimated, the FTC says it won’t revise the estimate upward.
  • Regardless, the agency at least acknowledges that a real burden exists and, if these burdens do expand because of the latest revisions to the rule, then competition and innovation could suffer.
  • We should want to foster an online ecosystem where small entrepreneurs can thrive and compete against giants like Disney and Viacom
  • Those giants can absorb expanded regulatory compliance costs, but not everyone else can, especially the little guys.
  • Which means fewer options for both parents and kids
  • Or it could also mean that we start seeing prices appear where none currently exist.

Still not clear to me what the actual harm is here that we are trying to address, nor is it clear to me how these new rules really do much on the ground to make kids safer online.

  • Parental notification is not the end of the online safety story.
  • Indeed, when it comes to online safety, it is not what happens before kids get in the door that counts, it’s what happens after kids get inside that really matters.

The Constructive Alternative: Education, Self-Regulation, Codes of Conduct & Best Practices

  • When sites create digital communities and invite kids in, I think we can all agree that we want them to be well-lit online neighborhoods where kids can interact safely.
  • A major recent report on parental attitudes about COPPA revealed that what the vast majority of parents want—and this certainly includes me—is helpful tips and advice about what sort of sites and services are appropriate for their kids at a particular age.
  • And parents also want some assurances that those online communities take some simple, common-sense steps to keep their digital worlds and applications safe.
  • This is why the ongoing dialog about best practices for these sites is so important. Specifically, what is most needed are:
    • Smart ground rules for acceptable behavior;
    • Clear standards for what will not be tolerated; and,
    • Limitations on certain types of functionality and data collection.
  • Ex: Everloop’s “3 Cs of Conduct”
    • “BE COOL: Everloop is a safe, fun place for everyone…so no swearing, cheating, bullying or general bad behavior allowed. If you do any of that, we might have to boot you from the loop.”
    • “BE CLEAN: Everloop is not about drugs, alcohol, sex, race or any inappropriate stuff like that. We will block offensive posts.”
    • “BE CONFIDENTIAL: Play safe on Everloop — don’t share your real name, address, phone number, email or passwords with anybody.”
  • Ex: Club Penguin = limits functionality within a well-protected walled garden; with outstanding moderation
  • But let’s be clear: Even with those sorts of sensible ground rules and best practices in place, a lot of kid-oriented sites and apps are still going to collect some data and serve up some ads.
  • I know many of you have heard me say it a million times before and are probably getting a little tired of it, but I am going to go ahead and say it again (and with passion): There really is no free lunch! Trade-offs are inescapable in these matters.
  • Perhaps in a perfect world we’d have:
    • An infinite number of highly innovative sites
    • That never collected any data or served any ads
    • But yet were still free of charge to parents and kids
  • But that is pure fantasy-land talk.
  • Yet what I fear most about the constant expansion of the COPPA regulatory regime is that some people get caught up in that sort of fairytale and ask us to pretend that no such trade-offs exist. In other words, some seem to believe that we can have something for nothing.
  • Before we go further with more extensive Internet regulation, therefore, I hope we think hard about those trade-offs and about the more constructive steps we might take to encourage education, self-regulation, and best practices for sites that cater to kids, rather than getting caught up in a technopanic about the supposed threat of kids seeing a few ads and having a little data collected about them.
  • Because, in most cases, those fears are being greatly overblown while the wondrous benefits we currently enjoy thanks to advertising are being greatly discounted or ignored.
]]>
https://techliberation.com/2012/08/09/thoughts-on-latest-ftc-coppa-rule-revisions-online-child-safety-privacy/feed/ 6 41996
Stefan Krappitz on Troll Culture https://techliberation.com/2012/08/07/stefan-krappitz-on-troll-culture/ Tue, 07 Aug 2012 18:30:41 +0000 http://techliberation.com/?p=41889

Stefan Krappitz, author of the book Troll Culture: A Comprehensive Guide, discusses the phenomenon of internet trolling. For Krappitz, trolling is disrupting people for personal amusement. Krappitz argues that trolling is largely a positive phenomenon: while it can become very negative in some cases, for the most part it is simply an amusing practice no different from playing practical jokes. Krappitz believes that trolling has been around since before the age of the internet; he notes that the behavior of Socrates is reminiscent of trolling, because Socrates pretended to be a student and then used his questioning to mock people who did not know what they were talking about. Krappitz also discusses anonymity and how it both contributes to and detracts from trolling, as well as where the line lies between good trolling and cyber-bullying.

Download

Related Links

]]>
41889
Journalists, Technopanics & the Risk Response Continuum https://techliberation.com/2012/07/15/journalists-technopanics-the-risk-response-continuum/ https://techliberation.com/2012/07/15/journalists-technopanics-the-risk-response-continuum/#comments Mon, 16 Jul 2012 01:26:23 +0000 http://techliberation.com/?p=41704

[Based on forthcoming article in the Minnesota Journal of Law, Science & Technology, Vol. 14 Issue 1, Winter 2013, http://mjlst.umn.edu]

I hope everyone caught these recent articles by two of my favorite journalists, Kashmir Hill (“Do We Overestimate The Internet’s Danger For Kids?”) and Larry Magid (“Putting Techno-Panics into Perspective.”) In these and other essays, Hill and Magid do a nice job discussing how society responds to new Internet risks while also explaining how those risks are often blown out of proportion to begin with.

Both Hill and Magid are rarities among technology journalists in that they spend more time debunking fears than inflating them. Whether it’s online safety, cybersecurity, or digital privacy, we all too often see journalists distorting or ignoring how humans find constructive ways to cope with technological change. Why do journalists fail to make that point? I suppose it is because bad news sells, even when there isn’t much to substantiate it.

I’ve spent a lot of time thinking about “moral panics” and “technopanics” in recent years (here’s a compendium of roughly two dozen essays I’ve penned on the topic) and earlier this year I brought all my work together in an 80-page paper entitled, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.”

In that paper, I identified several reasons why pessimistic, fear-mongering attitudes often dominate discussions about the Internet and information technology. I began by noting that the biggest problem is that for a variety of reasons, humans are poor judges of risks to themselves or those close to them. But I identified other explanations for why human beings are predisposed toward pessimism and are risk-averse, including:

  • Generational Differences
  • Hyper-Nostalgia, Pessimistic Bias, and Soft Ludditism
  • Bad News Sells: The Role of the Media, Advocates, and the Listener
  • The Role of Special Interests and Industry Infighting
  • Elitist Attitudes among Academics and Intellectuals
  • The Role of “Third-Person-Effect Hypothesis”

You can read my paper for fuller descriptions of each point. But let me return to my primary concern here regarding the role that the media plays in the process. The reason journalists inflate fears seems clear enough: In an increasingly crowded and cacophonous modern media environment, there’s a strong incentive for them to use fear to grab attention. But why are we, the public, such eager listeners, so willing to lap up bad news even when it is overhyped, exaggerated, or misreported?

“Negativity bias” certainly must be part of the answer. Michael Shermer, author of The Believing Brain, notes that psychologists have identified “negativity bias” as “the tendency to pay closer attention and give more weight to negative events, beliefs, and information than to positive.” Negativity bias, which is closely related to the phenomenon of “pessimistic bias,” is frequently on display in debates over online child safety, digital privacy, and cybersecurity.

But even with negativity bias at work, what I still cannot explain is why so many of these inflated fears persist when we have centuries of experience and empirical results proving that humans are able, again and again, to adapt to technological change. We are highly resilient, adaptable mammals. We learn to cope.

In my paper, I try to develop a model for how humans deal with new technological risks. I identify four general groups of responses and place them along a “risk response continuum”:

  1. Prohibition: Prohibition attempts to eliminate potential risk through suppression of technology, product or service bans, information controls, or outright censorship.
  2. Anticipatory Regulation: Anticipatory regulation controls potential risk through preemptive, precautionary safeguards, including administrative regulation, government ownership or licensing controls, or restrictive defaults. Anticipatory regulation can lead to prohibition, although that tends to be rare, at least in the United States.
  3. Resiliency: Resiliency addresses risk through education, awareness building, transparency and labeling, and empowerment steps and tools.
  4. Adaptation: Adaptation involves learning to live with risk through trial-and-error experimentation, experience, coping mechanisms, and social norms. Adaptation strategies often begin with, or evolve out of, resiliency-based efforts.

For reasons I outline in the paper, I believe that it almost always makes more sense to use bottom-up resiliency and adaptation solutions instead of top-down anticipatory regulation or prohibition strategies. And, more often than not, that’s what we eventually opt for as a society, at least when it comes to information technology. Sure, you can find plenty of examples of knee-jerk prohibition and regulatory strategies being proposed initially as a response to an emerging technology. In the long-run, however–and sometimes even in the short-run–we usually migrate down the risk response continuum and settle into resiliency and adaptation solutions. Sometimes we adopt those approaches because we come to understand they are more sensible or less costly. Other times we get there only after several failed experiments with prohibition and regulation strategies.

I know I am being a bit too black and white here. Sometimes we utilize hybrid approaches–a bit of anticipatory regulation with a bit of resiliency, for example. We use such an approach for both privacy and security matters. But I have argued in my work that the sheer velocity of change in the information age makes it less and less likely that anticipatory regulation strategies–and certainly prohibition strategies–will work in the long haul. In fact, they often break down very rapidly, making it all the more essential that we begin thinking seriously about resiliency strategies as soon as we are confronted with new technological risks. Adaptation isn’t usually the correct strategy right out of the gates, however. Just saying “learn to live with it” or “get over it” won’t work as a short-term strategy, even if that’s exactly what will happen over the long term. But resiliency strategies often help us get to adaptation strategies and solutions more quickly.

Anyway, back to journalists and fear. It strikes me that sharp journalists like Hill and Magid just get everything I’m saying here and weave these insights into all their reporting. But why do so few others? Again, I suppose it is because the incentives are screwy, such that even those reporters who know better will sometimes use fear-based tactics to sell copy. But I am still surprised by how often even respected mainstream media establishments play this game.

In any event, those other reporters need to give humans a bit more credit and acknowledge that (a) we often learn to cope with technological risks quite rapidly and (b) sometimes those risks are greatly inflated to begin with.

]]>
https://techliberation.com/2012/07/15/journalists-technopanics-the-risk-response-continuum/feed/ 1 41704
New Study on the Unintended Consequences of COPPA https://techliberation.com/2011/11/01/new-study-on-the-unintended-consequences-of-coppa/ https://techliberation.com/2011/11/01/new-study-on-the-unintended-consequences-of-coppa/#comments Tue, 01 Nov 2011 14:10:09 +0000 http://techliberation.com/?p=38885

I highly recommend this important new study on “Why Parents Help Their Children Lie to Facebook about Age: Unintended Consequences of the Children’s Online Privacy Protection Act” by danah boyd of New York University, Eszter Hargittai from Northwestern University, Jason Schultz from University of California, Berkeley, and John Palfrey from Harvard University. COPPA is a complicated and somewhat open-ended law and regulatory regime. COPPA requires that commercial operators of websites and services obtain “verifiable parental consent” before collecting, disclosing, or using “personal information” (name, contact information) of children under the age of 13 if either their website or service (or “portion thereof”) is “directed at children” or they have actual knowledge that they are collecting personal information from a child.

The new study, which surveyed over 1,000 parents of children between the ages of 10 and 14, reveals that, despite the best of intentions, COPPA is having many unintended costs and consequences:

Although many sites restrict access to children, our data show that many parents knowingly allow their children to lie about their age — in fact, often help them to do so — in order to gain access to age–restricted sites in violation of those sites’ ToS. This is especially true for general–audience social media sites and communication services such as Facebook, Gmail, and Skype, which allow children to connect with peers, classmates, and family members for educational, social, or familial reasons.

The authors conclude that “COPPA inadvertently undermines parents’ ability to make choices and protect their children’s data” and that their results “have significant implications for policy–makers, particularly in light of ongoing discussions surrounding COPPA and other age–based privacy laws.” Indeed, this paper could really shake up the debate over online kids’ privacy regulation. I will have more analysis of the paper in my weekly Forbes column this weekend.

Additional reading for COPPA background and current controversies: Berin Szoka & Adam Thierer, “COPPA 2.0: The New Battle over Privacy, Age Verification, Online Safety & Free Speech,” (May 21, 2009); and Adam Thierer, “Kids, Privacy, Free Speech & the Internet: Finding the Right Balance,” (August 12, 2011).

]]>
https://techliberation.com/2011/11/01/new-study-on-the-unintended-consequences-of-coppa/feed/ 2 38885
TechFreedom/FOSI COPPA (Livecasted) Panel in DC October 12 https://techliberation.com/2011/10/07/techfreedomfosi-coppa-livecasted-panel-in-dc-october-12/ https://techliberation.com/2011/10/07/techfreedomfosi-coppa-livecasted-panel-in-dc-october-12/#respond Fri, 07 Oct 2011 15:56:51 +0000 http://techliberation.com/?p=38632

TechFreedom, in association with the Family Online Safety Institute (FOSI), will host a lunch panel with a number of leading experts to discuss the FTC’s recently-proposed revisions to the Children’s Online Privacy Protection Act (COPPA). Opening remarks will be delivered by the Federal Trade Commission’s Phyllis Marcus, a Senior Staff Attorney at the Division of Advertising Practices. Afterwards, the panel will discuss the FTC’s proposals and what they mean for children, parents, Internet companies and innovation.

FOSI CEO Stephen Balkam will serve as master of ceremonies. The panel will be moderated by Berin Szoka, President of TechFreedom, and will include:

The event will take place at the Top of the Hill Banquet and Conference Center at the Reserve Officers Association (One Constitution Ave NE, Washington DC 20002) on Wednesday, October 12 from 12:30 to 2:30pm, and include a complimentary lunch. Space is limited so please click here to register.

In addition, you can let everyone else know you’ll be coming or watching the livestream (page will be updated when event begins) by joining the Facebook event page.

You can also keep up with the event by following the Twitter discussion at the #COPPA hashtag.

]]>
https://techliberation.com/2011/10/07/techfreedomfosi-coppa-livecasted-panel-in-dc-october-12/feed/ 0 38632
What Censoring Violent Content Looks Like https://techliberation.com/2011/10/06/what-censoring-violent-content-looks-like/ https://techliberation.com/2011/10/06/what-censoring-violent-content-looks-like/#respond Thu, 06 Oct 2011 14:49:23 +0000 http://techliberation.com/?p=38585

A year ago, I filed a joint amicus brief with the Electronic Frontier Foundation urging the Supreme Court to strike down California’s paternalistic law, which rested on the dangerous premise that videogame depictions of violence constituted “obscenity” unprotected by the First Amendment. Fortunately, we won. Thus, the First Amendment protects all media, while parents have a variety of tools available to them to limit what content their kids can consume or games they can play.

But in case you’re wondering what the world might look like had the decision gone the other way, check out the contrast between the US version of Maroon 5’s hit song “Misery” and the UK version. First, here’s the (raucous and sexy) US version:

Now, here’s the UK version, where the sexually suggestive parts remain (kids love that stuff) but all the “violent” parts have been replaced with, or covered by, ridiculous cartoon images. Really, it’s just too funny. The best part is where the knife she uses to stab the gaps between his fingers on the table has been replaced with a cartoon ice cream cone. Don’t try that at home, kids—you’ll make a chocolatey mess!

In case you were wondering, the US version has nearly 47 million views while the UK version has a paltry half million. Gee, I wonder why… (Actually, I suspect that most of the UK version’s viewers watched it because it’s so hilariously stupid.)

Parents can easily turn on YouTube’s SafeSearch tool to block many objectionable videos—and lock it so kids can’t turn it off. (CBS made a great video explaining how to do this for parents.)

Given the enormous scale of videos on YouTube, SafeSearch isn’t perfect: It wouldn’t block this particular video, probably because the video doesn’t trigger any obvious keywords like “porn” and not enough users have complained about it to bring it to the attention of Google’s human review team. But it’s easy to find a wide variety of tools that will restrict kids’ access to specific domains, such as YouTube. This allows parents to supervise their kids’ use of those sites.

Now, if you don’t think Google’s blocking enough, it’s easy to flag a video as inappropriate by clicking on the flag below the video, which expands a dialogue box, like so:

Like all web tools, parental controls are always evolving. In the next update, I’d love to see Google allow parents to restrict their kids’ use of YouTube to certain playlists, either set up by the parents themselves or by, say, third party groups dedicated to screening content for parents. That would empower parents to configure YouTube as they see fit and trust that their kids can use the site wisely, without parents having to watch the whole time or rely on a necessarily imperfect (but still pretty darn good) tool like SafeSearch.

As a constitutional matter, the important point here is the one Adam Thierer always makes: parental control tools need not be perfect to be preferable to government regulation. That’s an (accurate) paraphrase of the Supreme Court’s clear 2000 decision on this subject in U.S. v. Playboy:

 It is no response that voluntary blocking requires a consumer to take action, or may be inconvenient, or may not go perfectly every time. A court should not assume a plausible, less restrictive alternative would be ineffective; and a court should not presume parents, given full information, will fail to act.

That’s freedom for you! As the Court went on to add:

Technology expands the capacity to choose; and it denies the potential of this revolution if we assume the Government is best positioned to make these choices for us.

Parental controls aren’t perfect but they’re getting better all the time. Would you rather live in the UK’s world of crude, cartoonish and clumsy censorship?

]]>
https://techliberation.com/2011/10/06/what-censoring-violent-content-looks-like/feed/ 0 38585
Cops Abuse Cyberstalking Law, Target Anonymous Speech https://techliberation.com/2011/08/26/cops-abuse-cyberstalking-law-target-anonymous-speech/ https://techliberation.com/2011/08/26/cops-abuse-cyberstalking-law-target-anonymous-speech/#comments Fri, 26 Aug 2011 19:08:24 +0000 http://techliberation.com/?p=38151

Hot-tempered police officers, pushover judges, and vague laws make for a dangerous combination. In July, a controversy erupted in Renton, Washington (a Seattle suburb) when the town’s police department launched a legal assault on an anonymous YouTube user for merely uploading a few sarcastic videos poking fun at the department’s scandals.

In an op-ed in The Seattle Times, Nicole Ciandella and I explain what happened in Renton and discuss the saga’s implications for constitutional rights in the digital age:

According to Washington state law, a person is guilty of criminal “cyberstalking” if he makes an electronic communication using lewd or indecent language with the intent to embarrass another person. In other words, a Washingtonian who creates a raunchy email message, blog post or Web video to embarrass a foe isn’t just playing dirty; he’s technically breaking the law. One YouTube user recently learned this lesson the hard way.

Last month, the scandal-ridden Renton Police Department launched a criminal cyberstalking investigation against a YouTube user known only as “MrFuddlesticks.” The user had uploaded a series of lewd, animated videos poking fun at recent allegations of wrongdoing by Renton police officers. In one video, a character talks about his civilian superior’s lack of law-enforcement experience; in another, characters discuss the impropriety of a police officer who slept with a murder suspect.

Even though none of MrFuddlesticks’ videos mention the city of Renton or any police officers by name, Renton police managed to convince a county judge to issue a warrant to compel Google, YouTube’s parent company, to disclose identifying information about MrFuddlesticks’ accounts, including credit-card details and even contents of Gmail messages.

You can read the rest of the essay here. (For more on the controversy, see Jacob Sullum at Reason’s Hit & Run; also see Mike Masnick at Techdirt. For an exploration of the case’s constitutional implications, see Eugene Volokh at The Volokh Conspiracy.)

Here on the TLF, we’ve repeatedly cautioned lawmakers about the dangers of criminalizing cyberstalking (1, 2, 3, 4). Back in 2006, CNET’s Declan McCullagh explained why all Internet users should be worried about vague, overbroad cyberstalking laws. As the troubling actions of Renton’s finest illustrate, the potential for such laws to be abused is very real. Let’s hope lawmakers in Washington and in the numerous other states with cyberstalking laws on the books take a hard look at their laws.

]]>
https://techliberation.com/2011/08/26/cops-abuse-cyberstalking-law-target-anonymous-speech/feed/ 7 38151
New Paper on Online Child Safety, Kids’ Privacy & Internet Free Speech https://techliberation.com/2011/08/18/new-paper-on-online-child-safety-kids-privacy-internet-free-speech/ https://techliberation.com/2011/08/18/new-paper-on-online-child-safety-kids-privacy-internet-free-speech/#respond Thu, 18 Aug 2011 13:53:49 +0000 http://techliberation.com/?p=38111

My latest Mercatus Center white paper is entitled “Kids, Privacy, Free Speech & the Internet: Finding The Right Balance.” From the intro:

Concerns about children’s privacy are an important part of [the ongoing privacy debate]. The Children’s Online Privacy Protection Act of 1998 (COPPA) already mandates certain online-privacy protections for children under the age of 13. The goal of COPPA was to enhance parents’ involvement in their children’s online activities and better safeguard kids’ personal information online. The FTC is currently considering an expansion of COPPA, and lawmakers in the House of Representatives introduced legislation that would expand COPPA and apply additional FIPPS regulations to teenagers. Some state-based measures also propose expanding COPPA. While well-intentioned, efforts to expand privacy regulation along these lines would cause a number of unintended consequences of both a legal and economic nature. In particular, expanding COPPA raises thorny issues about online free speech and anonymity. Ironically, it might also require that more information about individuals be collected to enforce the law’s parental-consent provisions. There are better ways to protect the privacy of children online than imposing burdensome new regulatory mandates on the Internet and online consumers. Education, empowerment, and targeted enforcement of unfair and deceptive practice policies represent the better way forward.

The paper can be downloaded on SSRN, Scribd, or directly from the Mercatus website at the link above.

]]>
https://techliberation.com/2011/08/18/new-paper-on-online-child-safety-kids-privacy-internet-free-speech/feed/ 0 38111
Sen. Klobuchar’s Child Safety Hearing: Bring Your Own Salt! https://techliberation.com/2011/02/28/sen-klobuchars-child-safety-hearing-bring-your-own-salt/ https://techliberation.com/2011/02/28/sen-klobuchars-child-safety-hearing-bring-your-own-salt/#respond Mon, 28 Feb 2011 17:03:33 +0000 http://techliberation.com/?p=35378

The Senate Judiciary Committee will hold a hearing on March 2 entitled “Helping Law Enforcement Find Missing Children.” While this is just about the most popular topic for a hearing one could imagine, and I’m as much in favor of finding missing children as anyone, I’m a little concerned to see Sen. Klobuchar presiding over a hearing that could lead to new proposals for Internet regulation. As a former prosecutor, it certainly makes sense for her to have taken over Judiciary’s Subcommittee on Administrative Oversight and the Courts. But she’s engaged in blatant fear-mongering about online child safety in the past, so I think it’s fair to say that anyone listening to this hearing should take it with at least a grain of salt—especially if the hearing calls for new mandates for internet intermediaries to address a supposed “crisis.”

Last summer, as I noted, the Senator sent an angry letter to Facebook demanding the site require “a prominent safety button or link on the profile pages of users under the age of 18” that included the following:

Recent research has shown that one in four American teenagers have been victims of a cyber predator.

The letter didn’t actually cite anything, so it’s not clear what research she was relying on, as I noted:

The 25% statistic is particularly incendiary, suggesting a nationwide cyber-predation crisis—perhaps leading the public to believe 8 or 9 million teens have been lured into sexual encounters offline. Perhaps the Senator considers every cyber-bully a cyber predator—which might get to the 25% number. But there are two serious problems with that moral equivalence. First, to equate child predation with peer bullying is to engage in a dangerous game of defining deviancy down. Predation and bullying are radically different things. The first (sexual abuse) is a clear and heinous crime that can lead to long-term psychological damage. The second might be a crime in certain circumstances, but generally not. And it is even less likely to be a crime when it occurs among young peers, which research shows constitutes the vast majority of cases. As Adam Thierer and I noted in our Congressional testimony last year, there are legitimate concerns about cyberbullying, but it’s something best dealt with by parents and schools rather than prosecutors (like Klobuchar in her pre-Senate career).

I went on to cite summaries of the statistics on actual child predation rates—not even close to Sen. Klobuchar’s figure. If she had made these unsubstantiated claims in an academic paper, she would have been roundly criticized by her peers in the “reality-based community.” Yet in Congress, a willingness to sensationalize seems to have little consequence—other than a promotion to a larger bully pulpit from which to harangue. With her experience, she could be an excellent Chairman and leader on these issues. I only hope it starts with a commitment to accuracy, lest unsubstantiated concerns about child safety lead to bad policy-making while real and substantiated concerns are under-emphasized.

]]>
https://techliberation.com/2011/02/28/sen-klobuchars-child-safety-hearing-bring-your-own-salt/feed/ 0 35378
FTC to probe Apple for in-app purchases? https://techliberation.com/2011/02/22/ftc-to-probe-apple-for-in-app-purchases/ https://techliberation.com/2011/02/22/ftc-to-probe-apple-for-in-app-purchases/#comments Tue, 22 Feb 2011 22:40:51 +0000 http://techliberation.com/?p=35228

The Washington Post’s Cecilia Kang reports that the FTC will probe Apple over its in-app purchase marketing practices. According to Kang,

FTC Chairman Jon Leibowitz wrote in a letter to Rep. Ed Markey (D-Mass.) that the practice of “in-app purchases” for certain applications on Apple iPhones, iPads and iPods raised concerns that consumers may not fully understand the ramifications of those charges. The Washington Post wrote about hefty charges amassed by children using Apple device games that public interest groups said should not be included in software geared for children. Some parents said their children didn’t understand the difference between real and pretend purchases for items such as $99 barrels of Smurfberries on the Capcom Interactive game Smurfs Village.

I’ll skip the question of whether it’s the proper role of the federal government to be a surrogate parent to children given iPhones by their real parents. Instead I’ll simply say that I don’t know how much easier we can expect Apple to make it for parents to supervise their children.

  • Passwords: All purchases on iOS devices require the user to enter a password before they can be completed. Don’t give your child the password and you don’t have to worry about charges.

  • Allowances: If you do want to allow your child to make purchases but want to set some limits, Apple makes it easy to create an iTunes allowance account that lets a parent specify an amount added to a child’s account each month. Once the child uses up that amount, he can’t spend any more.

What more do we want Apple to do?

]]>
https://techliberation.com/2011/02/22/ftc-to-probe-apple-for-in-app-purchases/feed/ 2 35228
Some Sense on Sexting https://techliberation.com/2011/02/10/some-sense-on-sexting/ https://techliberation.com/2011/02/10/some-sense-on-sexting/#comments Thu, 10 Feb 2011 17:34:21 +0000 http://techliberation.com/?p=34970

Bucking a trend seen in other states, Texas lawmakers are taking steps to separate teen “sexting,” the sending and receiving of sexually explicit photos via cell phone or email, from child pornography.

A bill proposed by State Sen. Kirk Watson of Austin, and backed by Texas Attorney General Greg Abbott, would classify sexting as a Class C misdemeanor for first-time violators under 18. Under current law, sexting can be prosecuted as a felony carrying penalties of two to 10 years in prison, a fine of up to $10,000, and lifelong registration as a sex offender.

The Lone Star State deserves credit for taking a sensible approach to addressing what is without doubt stupid behavior that comes with serious consequences, but is far from the predation that child pornography laws are intended to target.

As the Houston Chronicle reports, instead of sending young people to jail for sexting, the bill, SB 407, would authorize judges to sentence minors — and one of the minor’s parents — to participate in an education program about sexting’s consequences.

The new law also would allow teens to apply to the court to have the offense expunged from their records.

“This bill ensures that prosecutors — and, frankly, parents — will have a new, appropriate tool to address this issue,” [Abbott] said. “It helps Texas laws keep up with technology and our teenagers.” According to the Chronicle, Texas has never prosecuted a teen for sexting under child porn laws.

Texas joins Vermont, Illinois, Utah and Ohio among states seeking to decriminalize sexting.  These states stand in stark contrast to others where attorneys general apparently want to use the threat of lifelong sex offender designation as a bludgeon.

In northeastern Pennsylvania, a prosecutor recently threatened to file child porn charges against three teenage girls who authorities say took racy cell-phone pictures that ended up on classmates’ cell phones. In New Jersey, a 14-year-old girl was charged with distributing child pornography after she posted nude pictures of herself on MySpace. The charge brought criticism from Maureen Kanka, whose daughter Megan became the namesake of Megan’s Law after she was raped and killed by a twice-convicted sex offender.

The teen needs help, not legal trouble, Kanka told the Associated Press. “This shouldn’t fall under Megan’s Law in any way, shape or form. She should have an intervention and counseling, because the only person she exploited was herself.”

Finally, prosecuting sexting as child pornography creates problems over the long term because it defines predation down. We don’t want to give truly dangerous child predators an opportunity to credibly dismiss their sex offender status as the result of poor teenage judgment in pressing the “send” button on a cell phone. Yet if overzealous prosecutors keep this up, the cynics will be predicting that, in the future, everyone will be a registered sex offender. That’s not a very funny joke.

]]>
https://techliberation.com/2011/02/10/some-sense-on-sexting/feed/ 10 34970
Why Online Dating Criminal Background Checks Aren’t As Advertised https://techliberation.com/2010/12/21/why-online-dating-criminal-background-checks-aren%e2%80%99t-as-advertised/ https://techliberation.com/2010/12/21/why-online-dating-criminal-background-checks-aren%e2%80%99t-as-advertised/#comments Tue, 21 Dec 2010 19:26:45 +0000 http://techliberation.com/?p=33784

Recent media attention has resurrected the notion that criminal background checks for online dating sites are helpful and should even be required by law. Sunday’s front page article in the New York Times described how companies selling background checks can “unmask Mr. or Ms. Wrong.” And today’s Good Morning America featured a segment called “Online Dating: Are you Flirting with a Felon?”

I was interviewed by both the Times and Good Morning America to say that these background checks are superficial, that they create a false sense of security, and that government should never mandate them for online dating sites. First, I should say that I’m personally involved in this issue. I met my wife on Match.com. We didn’t screen each other, at least not for a criminal past. I do remember doing a simple search on her screen name, though, and for a while thinking she was someone she wasn’t.

But for fun, I did a postmortem background check on myself, just to see what my now-wife would have seen. First, I went to Intelius and spent $58 (warning: there’s a constant barrage of confusing upsells) to see criminal, civil judgment, property, name, telephone and social networking data. The result: nothing harmful, thankfully! But nothing particularly helpful, either. And the report included a family member who isn’t one, while leaving out my brother, who is. Then I went to MyMatchChecker and ordered the basic-level screening (the two most expansive products–“Getting Serious” and “All About Me”–require social security numbers, which most people are unlikely to know about each other until they’re practically married). The site made it easy to leave out relevant info, and I did, so there’s a delay on my check. But let’s assume it all comes back clean, too (ahem).

So would my wife have used the absence of a negative history to assume I was a good person? Well, she shouldn’t have. Although these criminal screenings can help in some situations, they still have serious shortcomings. They produce false negatives when criminal records don’t appear in the databases, or when records omit felony arrests that were pleaded down to misdemeanors.

And these sorts of criminal screenings are not very inclusive–at all. According to True.com, the only dating site that screens every member, its database for the District of Columbia would catch only those people sent to jail between 1987 and 2002 (in addition to registered sex offenders, whom anyone can search for free). But here’s the clincher: many counties don’t even report their criminal records to a publicly accessible central database. The last time I checked, only 4 of 102 counties in Illinois reported to a centralized database accessible to companies that perform background screenings. That’s a huge number of people excluded from the background pool.

When I went to testify in Illinois a few years ago, one member of a House Judiciary committee, an ex-FBI agent, understood the failures of screenings conducted with a name only. He differentiated criminal screenings from the more thorough and reliable background check (based on social security number, date of birth, fingerprints, employment history, etc.) and helped persuade his colleagues that a dating bill promoting screenings would do more harm than good.

Because these criminal checks are incomplete and often inaccurate, I also worry about the false positives that could exist, mistakenly leading one to believe that the other person is worse than they actually are. If my report came back with some speeding tickets—hypothetically speaking, of course—would she have met me for our first date? Well, I guess it’s too late now!

But there is good news here. There’s zero evidence that meeting people online is any more dangerous than meeting them at bars, at social functions, or through friends. Indeed, there is anecdotal evidence to suggest that the Internet makes such exchanges more transparent, rather than less.

Still, you should always be cautious when meeting people offline whom you’ve met online. As a newly married Internet policy expert, I know it’s good policy to share my wife’s feelings on this topic. To protect yourself, she says, keep in mind the 3 Ps:

  • Meet in a public space,
  • Limit the amount of personal information you give out, and
  • Phone a friend to let them know where you’ll be on your date.

-Braden Cox

]]>
https://techliberation.com/2010/12/21/why-online-dating-criminal-background-checks-aren%e2%80%99t-as-advertised/feed/ 6 33784
At FCC’s “Generation Mobile” Event, the Kids Speak Plainly & Pointedly https://techliberation.com/2010/12/14/at-fcc%e2%80%99s-%e2%80%9cgeneration-mobile%e2%80%9d-event-the-kids-speak-plainly-pointedly/ https://techliberation.com/2010/12/14/at-fcc%e2%80%99s-%e2%80%9cgeneration-mobile%e2%80%9d-event-the-kids-speak-plainly-pointedly/#respond Tue, 14 Dec 2010 22:46:27 +0000 http://techliberation.com/?p=33616

At today’s FCC “Generation Mobile” forum — chock-full of online safety experts, company reps, Jane Lynch of the TV show Glee, and even Chairman Genachowski himself — it was the kids who made the show worthwhile. On a panel about generation mobile, here are a few of the statements we heard from high school kids:

  1. “Don’t just take the phone away.”
  2. “When parents snoop too much, it’s a privacy invasion.”
  3. “We’ll listen more if you present us with concrete evidence for behavioral restrictions.”

These are the kinds of arguments tech policy advocates make, only we would have said them in our unique brand of policy speak:

  1. Don’t regulate the technology, regulate bad behavior.
  2. Privacy is important and governments/companies must respect the privacy interests of their citizens/customers.
  3. Policymakers should collect sufficient data and analysis before introducing new legislation.

Policy geek speak aside, here are some interesting facts we heard about teen use of mobile technology:

  • According to Genachowski, 80% of fatal teen driving accidents are caused by distracted driving.
  • According to Amanda Lenhart of Pew, 15% of kids have received a sext message; only 4% have sent one.
  • Also from Amanda, 62% of schools allow cell phones in the school but not in the classroom; 12% allow them anywhere.
]]>
https://techliberation.com/2010/12/14/at-fcc%e2%80%99s-%e2%80%9cgeneration-mobile%e2%80%9d-event-the-kids-speak-plainly-pointedly/feed/ 0 33616
Thoughts on Oral Arguments in Schwarzenegger v. EMA Video Game Case https://techliberation.com/2010/11/04/thoughts-on-oral-arguments-in-schwarzenegger-v-ema-video-game-case/ https://techliberation.com/2010/11/04/thoughts-on-oral-arguments-in-schwarzenegger-v-ema-video-game-case/#respond Thu, 04 Nov 2010 18:40:08 +0000 http://techliberation.com/?p=32793

I’m still digesting the transcript from Tuesday’s Supreme Court oral arguments in the important First Amendment video game case, Schwarzenegger v. EMA. [Full transcript is here.]  I thought I would post just a couple of quick thoughts here. [Reminder: here is the amicus brief that Berin Szoka and I filed in the case, and here is some analysis of the case by Larry Downes.]

On Defining “Deviant Violence”

Much of the discussion during oral arguments was preoccupied with defining the contours of the term “deviant violence.”  I was pleased to see the Justices asking some sharp questions about the interpretation of that term for regulatory purposes. In particular, I enjoyed Justice Scalia’s remarks and questions to California Deputy Attorney General Zackery Morazzini, who argued the case on behalf of the state. Scalia said:

I am not just concerned with the vagueness. I am concerned with the vagueness, but I am concerned with the First Amendment, which says Congress shall make no law abridging the freedom of speech. And it was always understood that the freedom of speech did not include obscenity. It has never been understood that the freedom of speech did not include portrayals of violence. You are asking us to create…  a whole new prohibition which the American people never — never ratified when they ratified the First Amendment.  They knew they were — you know, obscenity was — was bad, but — what’s next after violence? Drinking? Smoking? Movies that show smoking can’t be shown to children? Does — will that affect them? Of course, I suppose it will.  But is — is that — are — are we to sit day by day to decide what else will be made an exception from the First Amendment? Why — why is this particular exception okay, but the other ones that I just suggested are not okay? (p. 15-16)

Indeed, that’s what is at stake in this case: the beginning of a new class of exceptions to the First Amendment based upon concerns about children’s exposure to depictions of “excessive” or “deviant” violence. Once you open up this can of worms, the sky is likely the limit in terms of how far governments might go to regulate speech in the name of “protecting children.”

If a majority of the Justices choose to side with the State of California and open the floodgates to a new era of speech regulation, I am very much looking forward to seeing how they reconcile that with their decision last term in the controversial case of United States v. Stevens. In Stevens, the Court struck down a federal law that criminalized the creation or sale of videos showing animal cruelty. The law that the Court overturned was particularly concerned with “crush videos,” which, according to the Court, “feature the torture and killing of helpless animals and are said to appeal to persons with a specific sexual fetish.” As I pointed out in this earlier essay, it would seem rather peculiar for the Court to allow the dissemination of videos of real kittens having their heads crushed by naked women in high heels, which kids might be able to see on the Internet, but then hold here in the Schwarzenegger case that allowing a minor to buy an M-rated video game with depictions of violence is verboten. Hard to find the logic in that!

But the Court is going to have an even harder time reconciling regulation of depictions of violence with obscenity law and then delineating the boundaries of what governments can and cannot censor or control the sale of. At least with obscenity, we have one bright-line test: Is sexual penetration shown? Of course, things get pretty murky after that. Regardless, what is the equivalent test for violence in video games, movies, or television? Is it decapitation or exploding heads? What if it’s a zombie head? What if it’s just an ear that gets blown off a zombie’s head? What if you beat the zombie over the head with a baseball bat to kill him but his head never comes off? Or, as Justice Sotomayor asked, “what happens when the character gets maimed, head chopped off and immediately after it happens they spring back to life and they continue their battle. Is that covered by your act? Because they haven’t been maimed and killed forever. Just temporarily.” (p. 58)

You get the point: A lot of line-drawing is going to need to be done if the Court goes down this path.

On Juries & “Community Standards”

So, let’s drill a little deeper into the line-drawing issue and the enforcement of such regulatory ordinances. During oral arguments, there was an interesting exchange regarding how the State of California, or any other local government, might go about enforcing more speech-limiting ordinances on this front. Justice Ginsburg asked Assistant AG  Morazzini: “does California have any kind of an advisory opinion, an office that will view these videos and say, yes, this belongs in this, what did you call it, deviant violence, and this one is just violent but not deviant? Is there — is there any kind of opinion that the — that the seller can get to know which games can be sold to minors and which ones can’t?”  A terrific question and one followed up by Justice Scalia, who joked (I think): “You should consider creating such a one. You might call it the California office of censorship. It would judge each of these videos one by one.”

In response, Mr. Morazzini defaulted to the old obscenity playbook and argued that:

California’s not doing that here. The standard is quite similar to that in the sexual material realm. California is not acting as a censor. It is telling manufacturers and distributors to look at your material and to judge for yourselves whether or not the level of violent content meets the prongs of this definition. (p. 24)

Thus, Mr. Morazzini wants to dismiss the entire inquiry with the retort: “we ask juries to judge sexual material and its appropriateness for minors as well.”  But that doesn’t necessarily make such regulation any less offensive in the eyes of the First Amendment.  If the state empowers juries to censor, well, it’s still censorship. It’s just censorship with a slightly more democratic face!

Of course, in the field of First Amendment jurisprudence, this is all filed under the banner of “community standards” regulation. As Mr. Morazzini suggests, there is, indeed, a history of it in this country when it comes to obscenity law, although it’s increasingly rare. Regardless, I have argued that the time has come to think differently about the appropriateness of “community standards” regulation. Here’s how I put it in some remarks I made at the Oxford University Internet Institute last year:

It is my hope and belief that we are now in a position to more fully empower parents such that government regulation of content and communications will be increasingly unnecessary. In the past, it was thought to be too difficult for families to enforce their own “household standard” for acceptable content. Thus, many believed government needed to step in and create a baseline “community standard” for the entire citizenry.  Unfortunately, those “community standards” were quite amorphous and sometimes completely arbitrary when enforced through regulatory edicts.  Worse yet, those regulatory standards treated all households as if they had the same tastes or values—which is clearly not the case in most pluralistic societies. If it is the case that families now have the ability to effectively tailor media consumption and communications choices to their own preferences—that is, to craft their own “household standard”—then the regulatory equation can and should change.  Regulation can no longer be premised on the supposed helplessness of households to deal with content flows if families have been empowered and educated to make content determinations for themselves.  Luckily, that is the world we increasingly live in today. Parents have more tools and methods at their disposal to help them decide what constitutes acceptable media content in their homes and in the lives of their children. Going forward, our goal should be to ensure that parents or guardians have (1) the information necessary to make informed decisions and (2) the tools and methods necessary to act upon that information. Optimally, those tools and methods would give them the ability to not only block objectionable materials, but also to more easily find content they feel is appropriate for their families. In my work, I refer to this as the “household empowerment vision.”

What we have with the Schwarzenegger case is the perfect test case for which direction the Court wants to take us.  Will the Court hold on to the past and the old vision of “community standards” regulation that the State of California wants to extend?  Or will the Court recognize that that standard was really a second-best surrogate for more direct parental and household-based standards of control?  The latter position is the one more consistent with a free, diverse society.  As I argued in my old book on Parental Controls & Online Child Protection:

Decisions about acceptable media content are extraordinarily personal; no two people or families will have the same set of values, especially in a nation as diverse as ours. Consequently, it would be optimal if public policy decisions in this field took into account the extraordinary diversity of citizen and household tastes and left the ultimate decision about acceptable content to them. That’s especially the case in light of the fact that most U.S. households are made up entirely of adults.
The ideal state of affairs, therefore, would be a nation of fully empowered parents who have the ability to perfectly tailor their family’s media consumption habits to their specific values and preferences. Specifically, parents or guardians would have (1) the information necessary to make informed decisions and (2) the tools and methods necessary to act upon that information. Importantly, those tools and methods would give them the ability to not only block  objectionable materials, but also to more easily find content they feel is appropriate for their families.

On The Role of Parental Controls in First Amendment Jurisprudence

Finally, let’s talk about those parental controls for a moment and the role they play in debates over First Amendment jurisprudence. At one point during the oral arguments on Tuesday, Chief Justice Roberts interrupted video game industry lawyer Paul M. Smith of Jenner & Block to say that “any 13-year-old can bypass parental controls in about 5 minutes.” In response, Mr. Smith correctly noted that “That is one element of about five different elements” and cited several others, such as the information conveyed by the video game industry’s excellent ratings system, as well as household-level controls and restrictions and the “power of the purse” that parents can exercise when junior asks for $50 or $60 to buy one of these games.

What Mr. Smith was getting at here is that today we have access to what I have called “a mosaic of parental control tools and methods,” and what is really essential for First Amendment jurisprudence is that the Court not pin everything on just one of those tools or methods. Yes, some kids can evade parental controls, ignore household rules, steal money from Mom or Dad’s wallet to buy a game, etc. But the combination of these many layers of control constitutes what the Court has repeatedly called the “less restrictive means” of dealing with these concerns compared to the sweeping nature of government content controls.

Importantly, we should recall what the Supreme Court said about the less restrictive means test in its 2000 decision in U.S. v. Playboy Entertainment Group (2000), which echoed its earlier holding in Reno v. ACLU.  Specifically, in the Playboy case, the Court held that:

[T]argeted blocking [by parents] enables the government to support parental authority without affecting the First Amendment interests of speakers and willing listeners — listeners for whom, if the speech is unpopular or indecent, the privacy of their own homes may be the optimal place of receipt. Simply put, targeted blocking is less restrictive than banning, and the Government cannot ban speech if targeted blocking is a feasible and effective means of furthering its compelling interests.

Moreover, the Court held that:

It is no response that voluntary blocking requires a consumer to take action, or may be inconvenient, or may not go perfectly every time. A court should not assume a plausible, less restrictive alternative would be ineffective; and a court should not presume parents, given full information, will fail to act.

This is an extraordinarily high bar the Supreme Court has set for policymakers wishing to regulate modern media content. As constitutional law scholar Geoffrey R. Stone of the University of Chicago Law School has noted:

The bottom line, then, is that even in dealing with material that is “obscene for minors,” the government cannot directly regulate such material… Rather, it must focus on empowering parents and other adults to block out such material at their own discretion, by ensuring that content-neutral means exist that enable individuals to exclude constitutionally protected material they themselves want to exclude. Any more direct regulation of such material would unnecessarily impair the First Amendment rights of adults.

This is why parental control tools and methods are more important than ever before. The courts have largely foreclosed government censorship and placed responsibility over what enters the home squarely in the hands of parents. But will the Supreme Court reverse this jurisprudential trend with its decision in Schwarzenegger v. EMA? I hope not. If it does, it will undo about 15 years of really excellent case law on this front.

]]>
https://techliberation.com/2010/11/04/thoughts-on-oral-arguments-in-schwarzenegger-v-ema-video-game-case/feed/ 0 32793
Violent Video Games Head to Supreme Court https://techliberation.com/2010/11/01/violent-video-games-head-to-supreme-court/ https://techliberation.com/2010/11/01/violent-video-games-head-to-supreme-court/#comments Tue, 02 Nov 2010 02:59:04 +0000 http://techliberation.com/?p=32754

Today, the U.S. Supreme Court will hear arguments in Schwarzenegger v. EMA, a case that challenges California’s 2005 law banning the sale of “violent” video games to minors. The law has yet to take effect, as rulings by lower federal courts have found the law to be an unconstitutional violation of the First Amendment.

There’s little doubt that banning the sale of nearly any content to adults violates free speech protections, including, as the Court decided last year, video depictions of cruelty to animals.

But over the years the Court has ruled that minors do not stand equal to adults when it comes to the First Amendment. The Court has upheld restrictions on the speech of students in and out of the classroom, for example, in the interest of preserving order in public schools.

And in the famous Pacifica case, the Court upheld fines levied against a radio station for airing the famous George Carlin monologue that, not-so-ironically, satirizes the FCC for banning seven particular words from being uttered over the public airwaves.

The basis for that decision was that children could be negatively influenced by hearing such language. And children had easy access to radio and TV, while parents had no effective way to keep particular broadcasts out of the house.

In today’s argument, California’s legal arguments center largely on another case, the Supreme Court’s 1968 decision in Ginsberg. There, the Court upheld state restrictions on the sale of pornography to minors, even though the material was protected speech for adult purchasers.

In Schwarzenegger v. EMA, California is urging the Court to extend Ginsberg’s reasoning to include content that meets its definition of violent video games. The statute defines “violent video games” as those “in which the range of options available to a player includes killing, maiming, dismembering, or sexually assaulting an image of a human being, if those acts are depicted” in a manner that “[a] reasonable person, considering the game as a whole, would find appeals to a deviant or morbid interest of minors,” that is “patently offensive to prevailing standards in the community as to what is suitable for minors,” and that “causes the game, as a whole, to lack serious literary, artistic, political, or scientific value for minors.”

Ginsberg, the state argues in its brief, upheld a ban on the sale of sexual content to minors because such content is dangerous to their development. So too, the state argues here, with violent video games. (Parents and other adults, of course, could still buy the games for minors if the statute were to go into effect.)

Indeed, the state argues that such material has as much if not more of a negative impact on the development of children than does sexual material.

That, of course, is a question open to considerable debate. After the fact, the state cites a number of academic studies that find a correlation between violent video game exposure (including games, such as Super Mario Brothers, well outside the California definition) and anti-social behavior. But, as excellent reply briefs from the Entertainment Merchants Association and a joint brief from the Progress and Freedom Foundation and the Electronic Frontier Foundation point out, the methodology in these studies has been roundly criticized.

Moreover, California doesn’t seem to understand that the statistical significance of a correlation does not necessarily translate to real-world behavior—correlation is not the same as causation, no matter how strong the statistics. And even the authors of the studies most relied on by the state recognize that it isn’t clear in which direction the correlation moves—are children who play violent video games more likely to have violent thoughts because they played the game, or are pre-existing violent thoughts what attracts them to the games?

Why Video Games? Why Now?

The Court may focus on those studies in its decision, but I have a different question. Why are California and other states picking on video games, and why now? That, to me, is the more interesting problem, one that gets little attention in the briefs and, I would guess, in the Court’s eventual decision.

Perhaps the why is obvious: as EMA’s brief points out, similar attacks have accompanied the rise in popularity of every new form of media to emerge throughout U.S. history.

The California statute … is the latest in a long history of overreactions to new expressive media. In the past, comic books, true-crime novels, movies, rock music, and other new media have all been accused of harming our youth. In each case, the perceived threat later proved unfounded. Video games are no different.

The PFF/EFF brief goes farther, accusing California legislators of succumbing to “moral panic, as lawmakers have so often done when confronted with the media of a new generation.”

Examples as varied as Greek classics, the Bible, the Brothers Grimm and Star Wars all suggest, EMA points out, that extreme–even gruesome–violence has always been a favorite subject of literature, often aimed specifically at children. As federal appellate judge Richard A. Posner wrote in rejecting a similar Indiana law, “Self defense, protection of others, dread of the ‘undead,’ fighting against overwhelming odds—these are all age-old themes of literature, and ones particularly appealing to the young.”

But why now? The answer is, not surprisingly, Moore’s Law. Laws regulating the content or distribution of video games are a classic example of the conflict I described in The Laws of Disruption.

As technology has made video game graphics more realistic and lifelike, they have captured the attention—and haunted the nightmares—of regulators in the real world, who equate what they see on the screen with behaviors that would clearly violate the laws and norms of the real world. They don’t like what they see in games such as Grand Theft Auto and Resident Evil, and their impulse is to find a way, somehow, to stop it, even if it’s only a simulation.

It was not that long ago—in my lifetime, in any case—that video games were still in their Neolithic Era. Consider Pong, Atari’s first home video game, released in 1975. It would take an imagination greater than mine to find the batting of a block of monochrome pixels by a bar of pixels violent enough to corrupt youth; likewise the breaking of a wall of pixels, one at a time, in the follow-on game Breakout.

But a few years later, consider the commercial (courtesy of YouTube) for Activision’s ice hockey game.

http://www.youtube.com/v/lROb1vWNiig?fs=1&hl=en_US

The game promises to be one of the “roughest” video games ever, “battling for the puck” with “fierce body checking” and “ruthless tripping.” Just watching the players fight it out drives a meek-looking Phil Hartman into a frenzy; within a few seconds he seems ready to attack the clerk who teases him that he’s not yet ready for it.

But despite an ad that explicitly suggests a connection between playing (or even watching the game) and becoming violent, the actual graphical quality of the violence is so disconnected from visual reality that it never occurred to any state legislature to ban or otherwise restrict it.

Now fast-forward just a few decades to the imminent release of the Xbox 360’s Kinect and one of the games that takes advantage of it, Kinectimals.

http://www.youtube.com/v/jFNVITpZXTM?fs=1&hl=en_US

Using Microsoft’s new sensor technology, realistically rendered animals can be controlled simply by issuing voice commands or by mimicking the desired movements while standing in front of the sensor. It hardly seems possible that the same beings who invented Pong could have advanced to Kinectimals within the span of one human lifetime. But we did.

Coupled with new 3D technology and increasingly large, high-fidelity displays, video games have, in the course of only a few decades and a few cycles of Moore’s Law, advanced to the point of challenging the cinematic quality of movies. Indeed, games and films are converging, and now use much of the same technology for production and display. A new sub-genre of user-produced content involves taking the cinematic interludes within games and using them to produce original films. After all, video game users today control not only game play but also lighting, camera angles, and point of view.

Why not? As Nicholas Negroponte would say, bits are bits.

So now that video games offer fidelity in imagery and movement comparable to film, the law has awakened to both their positive and negative impacts on those who interact with them. Since the First Amendment clearly doesn’t allow interference with the sale of violent content to adults, California focused on children. But it’s clear from the tone of the state’s brief that it just plain doesn’t like certain video games, just as states didn’t like certain movies and certain books in an earlier age of mass-market technologies. As before, they would like, if they could, to turn the clock back.

Of course, that is always the response of the law to new technologies that challenge our conceptions of reality. The only difference between the comic book burnings of the 1950s and the emotional responses of legislators today is the speed with which the new technologies arrive. The killer apps come faster all the time. And with them, the counter-revolutionaries.

Frozen in Time, Lost in Relevance

Which is why the California statute suffers from another common and fatal flaw of laws attempting to hold back new technologies: early obsolescence. Even if the Supreme Court upholds the law, its effect will be minimal at best.

Why? Lost in the legal arguments (and reduced to a mere footnote in the EMA brief) is the impending anachronism of the California statute. It assumes a world, disappearing almost as quickly as it arrived, in which video games are imported into California as physical media in packages, and sold in retail stores.

Consider, for example, Section 1746.2:

Each violent video game that is imported into or distributed in California for retail sale shall be labeled with a solid white “18” outlined in black. The “18” shall have dimensions of no less than 2 inches by 2 inches. The “18” shall be displayed on the front face of the video game package.

But sales of video games in media form are rapidly declining as broadband connections make it possible for game developers and platform manufacturers to transport the software over the Internet. So even if the law is ruled constitutional, it will apply to an ever-shrinking portion of the video game market. There will soon be no “retail sale” and no “front face” of a “package” onto which to put a label in the first place.

These industry changes, of course, aren’t being made to evade laws like California’s. Digital distribution reduces costs and eliminates middlemen who add little or no value (the retailers, the packagers, the truckers). More to the point, it allows companies to establish ongoing relationships with their customers, which can be leveraged to sell add-on chapters and levels, online play, and related products and content, including films.

The industry, in other words, is not only evolving in terms of sophistication and realism of the product. The same technologies are also scrambling its supply chain. And what is emerging as the new model for “games” is something in which California and other states have almost no regulatory interest.

So it seems an odd time to target legislation at a particular and disappearing version of the industry’s content and retail channels. Even if the Court upholds the California law, it will likely have little impact on the material at which it is aimed.

But that’s often the case with laws trying to manage the unpleasant social side effects of new technologies just as they become visible to the outside world. The pace of legal change can’t hope to keep up with the pace of technological change, making this law, like many others, out-of-date even before the ink is dry.

Which is not to say that the Supreme Court’s decision in this case won’t matter. Another feature of statutes like this, unfortunately, is a high likelihood of unintended consequences. The ways the Court’s decision—pro or con—could do future mischief to unrelated industries and dissimilar content are legion.

For example? As the PFF/EFF brief points out, California and other states may try to extend the ban on sales to minors to online channels. But it isn’t as easy to determine the age of an online buyer as it is to check the age of someone standing in your brick-and-mortar store. “Applying the law online would likely require mandatory age verification of all online gamers because the law prohibits any sale or rental to a minor,” PFF/EFF argues, “even if the vendor had no evidence that the buyer was a minor.” Just such an age-verification requirement was the undoing of an earlier federal effort to control pornography online.

But in the Supreme Court, and the lower courts who interpret its decisions, anything can happen, and usually does.

]]>
https://techliberation.com/2010/11/01/violent-video-games-head-to-supreme-court/feed/ 3 32754
NGOs, Law Enforcement and Internet Companies –Coming Together to Fight Commercial Sexual Exploitation https://techliberation.com/2010/10/19/ngos-law-enforcement-and-internet-companies-coming-together-to-fight-commercial-sexual-exploitation/ https://techliberation.com/2010/10/19/ngos-law-enforcement-and-internet-companies-coming-together-to-fight-commercial-sexual-exploitation/#comments Wed, 20 Oct 2010 00:22:49 +0000 http://techliberation.com/?p=32560

Today I testified at a hearing held by Massachusetts Attorney General Martha Coakley on commercial sexual exploitation and the Internet. When I first learned about it, I feared the worst: time to demonize the Internet. After all, the hearing announcement openly targeted Craigslist and websites generally. But this was not the case at all—as we heard, NGOs, law enforcement, and industry all have roles to play.

Instead of Internet-bashing, the hearing was a constructive dialogue. We learned why children are forced into prostitution and how classified ads on the Internet can promote this illegal activity. I was there to learn how we can help.

Commercial sexual exploitation is big business. Over 100,000 women are in the illegal sex trade. Often these women are actually teenage girls, vulnerable and with no place to go. Their lives are run by pimps and spent catering to “johns”; it is a living hell, except that many become so desensitized that they eventually have no life at all.

These child prostitutes show up in advertisements for “escort services” or “adult services.” Traditionally, these ads were in the yellow pages. Now they exist on the Internet, and these listings can often be graphic. But it’s hard to tell whether these ads involve women against their will or underage girls. That’s why there are folks who would like to see all these ads disappear. And they’ll blame Internet classifieds—indeed, one witness called sites like Craigslist and Backpage “electronic pimps.”

Unfortunately, there are those who think it is better to force these sites to shut down their adult services sections. But as we heard from danah boyd of Microsoft, a fellow at the Harvard Berkman Center, merely shutting down the listed supply of adult services is a superficial fix. It shuts off the most visible aspect of human trafficking: a huge honeypot where pimps advertise and johns congregate. These listings should be the first place an investigation starts, not the point where enforcement ends.

It’s far better for law enforcement to use these sites to identify what they think are ads of women in forced prostitution, and then infiltrate their criminal networks to reduce both the supply of women and the demand for their services. If we can develop strategies to break the networks, we can get to the root of the problem.

To this end, danah boyd also made great points about not getting distracted by the technology. Bad actors are sexually exploiting young girls and using the Internet to further their criminal enterprise, but it’s not an Internet problem per se. Focusing on removing websites, or portions of sites, addresses the symptoms of a much deeper criminal syndicate. For the most part, I think this point resonated with the Attorney General’s staff.

What certainly resonated throughout the entire hearing was that sex trafficking is a complex problem that requires a multi-disciplinary approach. We heard this from child welfare and victimization groups, law enforcement, and the online industry.

And that’s why we heard AG Coakley call for a task force to study the issue. We support her desire for all the interested groups to come together, and look forward to working with her to help eliminate commercial sexual exploitation.

]]>
https://techliberation.com/2010/10/19/ngos-law-enforcement-and-internet-companies-coming-together-to-fight-commercial-sexual-exploitation/feed/ 2 32560
Contention Over Privacy in the Cloud? There Shouldn’t Be… https://techliberation.com/2010/09/30/contension-over-privacy-in-the-cloud-there-shouldnt-be/ https://techliberation.com/2010/09/30/contension-over-privacy-in-the-cloud-there-shouldnt-be/#respond Thu, 30 Sep 2010 23:00:26 +0000 http://techliberation.com/?p=32032

I’d like to recommend Sonia Arrison’s recent article on the need to update the Electronic Communications Privacy Act (ECPA). She makes a good case for why citizens should be worried about the government’s ability to invade their privacy when they keep data in the cloud. And because citizens are customers, online businesses worry that people may use their services less. But here’s another reason to update ECPA: it would promote online safety. According to an excellent analysis by Becky Burr, ECPA reform:

Would establish uniform, clear, and easily understood rules about when and what kind of judicial review is needed by law enforcement to access electronic content; and

Would, by clarifying the applicable rules, enable business to respond more quickly and with greater confidence to law enforcement requests and to avail themselves of hosted productivity technology.

Right now the law is muddled, and online services have a hard time distinguishing legitimate requests from overreaching ones. Once the law is clarified, businesses and law enforcement can (with appropriate legal process) share information that helps find sexual predators and other online miscreants.

]]>
https://techliberation.com/2010/09/30/contension-over-privacy-in-the-cloud-there-shouldnt-be/feed/ 0 32032
Precrime Regulation of Internet Innovation https://techliberation.com/2010/09/21/precrime-regulation-of-internet-innovation/ https://techliberation.com/2010/09/21/precrime-regulation-of-internet-innovation/#respond Tue, 21 Sep 2010 17:28:25 +0000 http://techliberation.com/?p=31896

Up on the NetChoice blog, Steve DelBianco writes about how online child safety was a hot topic at the Internet Governance Forum (IGF) last week in Lithuania. There was one workshop on location-based services that allow users to publish their mobile phone location info to their parents or social network pages (e.g. Foursquare, Loopt, and Facebook Places).

The entire workshop reminded Steve of the movie Minority Report, where a ‘precrime’ police unit relies on the visions of psychics to predict future crimes, then arrests the potential perpetrators before they do anything wrong:

In the world of Internet governance, the future is now, as regulators want online services to predict and prevent safety threats before they actually occur. According to some privacy advocates and lawmakers, the precrime problem here is that location data might be seen by someone with bad intentions.  In the name of protecting children, panelists here favor a policy framework that would require innovators to clear new location-based services with regulators before making them available to users.

Think of the irony with this regulatory approach. Lawmakers are not likely to predict all the ways that bad people can abuse a good service, and regulatory approvals are notoriously slow and inflexible.  On the other hand, Internet innovation is marked by rapid development of new services and quick reactions to fine-tune new features or fix unexpected problems.

Thankfully, there was a young person in the audience who actually knows how kids use the Internet and what will help them the most:

More sage advice came from young people – the anticipated victims of precrimes that might use location-based info. Joonas Makinen of the Youth Coalition on Internet Governance told the IGF, “It is better to focus on fighting ignorance and building digital literacy than applying safety strategies based on restriction.”

Indeed.

]]>
https://techliberation.com/2010/09/21/precrime-regulation-of-internet-innovation/feed/ 0 31896
MPAA Ratings Are Better Than the Alternative https://techliberation.com/2010/08/20/mpaa-ratings-are-better-than-the-alternative/ https://techliberation.com/2010/08/20/mpaa-ratings-are-better-than-the-alternative/#comments Fri, 20 Aug 2010 14:08:29 +0000 http://techliberation.com/?p=31255

Back in March, the Motion Picture Association of America re-launched its film-rating website, filmratings.com. While this may be old news to some, I just learned about it from a post on BoingBoing which makes fun of the rationales given for the ratings, which are available on the new website. Example: The movie “3 Ninjas Knuckle Up” was “rated PG-13 for non-stop ninja action.”

It’s fine to joke about particular ratings, but we shouldn’t forget that the MPAA’s rating system was created to avoid government censorship, which was a real possibility after the 1915 U.S. Supreme Court case Mutual Film Corporation v. Industrial Commission of Ohio, which ruled that “the exhibition of moving pictures is a business, pure and simple, originated and conducted for profit … not to be regarded, nor intended to be regarded by the Ohio Constitution, we think, as part of the press of the country, or as organs of public opinion.” By a unanimous vote, the Supreme Court ruled that the First Amendment did not apply to motion pictures because “they may be used for evil.” (There was also an issue of whether the First Amendment applied to state actions, but because the state constitution at issue was substantially similar to the U.S. Constitution, that was not a factor in the opinion).

After a number of Hollywood scandals and public outcry over the immorality of Hollywood in the 1920s, the Motion Picture Producers and Distributors of America (the precursor to the MPAA) adopted the Motion Picture Production Code (known as the “Hays Code” after Will Hays, the association’s first president) in 1930. The code required that “No picture shall be produced that will lower the moral standards of those who see it. Hence the sympathy of the audience should never be thrown to the side of crime, wrongdoing, evil or sin.”

]]>
https://techliberation.com/2010/08/20/mpaa-ratings-are-better-than-the-alternative/feed/ 5 31255