The European Commission has a new report out today on “Implementation of the Safer Social Networking Principles for the EU.” It’s a status report on the implementation of the “Safer Social Networking Principles for the EU,” a “self-regulatory” agreement the EC brokered with 17 social networking sites and other online operators back in 2009. (Co-regulatory would be more accurate here, since the EC is steering, and industry is simply rowing.) The goal was to make the profiles of minors more private and provide other safeguards.
Generally speaking, the EC’s evaluation suggests that great progress has been made, although there’s always room for improvement. For example, the report found that “13 out of the 14 sites tested provide safety information, guidance and/or educational materials specifically targeted at minors”; “Safety information for minors is quite clear and age-appropriate on all sites that provide it, good progress since the first assessment last year”; and “Reporting mechanisms are more effective now than in 2010.” It also found that most sites have improved Terms of Use that are easy for minors to understand and/or a child-friendly version of the Terms of Use or Code of Conduct, and that many “provide safety information for children and parents which is both easy to find and to understand.” Again, there’s always room for improvement, but the general direction is encouraging, especially considering how new many of these sites are.
Unfortunately, Neelie Kroes, Vice President of the European Commission for the Digital Agenda, spun the report in the opposite direction. She issued a statement saying:
I am disappointed that most social networking sites are failing to ensure that minors’ profiles are accessible only to their approved contacts by default. I will be urging them to make a clear commitment to remedy this in a revised version of the self-regulatory framework we are currently discussing. This is not only to protect minors from unwanted contacts but also to protect their online reputation. Youngsters do not fully understand the consequences of disclosing too much of their personal lives online. Education and parental guidance are necessary, but we need to back these up with protection until youngsters can make decisions based on full awareness of the consequences.
This position is misguided, as explained below. But here’s the crucial point: What this Kroes statement once again proves is that, ultimately, every major public policy debate about online privacy and child safety comes down to a question of where to set the defaults and who should set them. Rarely, however, do policymakers or regulatory advocates acknowledge the downsides associated with mandating highly restrictive defaults from the top down.
Back in 2008, I penned a paper on “The Perils of Mandatory Parental Controls and Restrictive Defaults” in which I argued that, “Government regulation mandating restrictive parental control defaults for media devices would likely have unintended consequences and would not achieve the goal of better protecting children from objectionable content, whereas increased consumer education efforts would be more effective in helping parents control their child’s media consumption.” The general point was that if government defaulted all sites and/or devices to a “locked-down” state right out of the gate, products and services would, in essence, be shipped to market in a crippled state. This would have a variety of unintended consequences, including consumer confusion, and such restrictions would discourage experimentation and diminish the utility of those products and services.
The same is true of highly restrictive privacy defaults. How are users supposed to network with others and make new friends if everything is private by default? Worst of all, the EC seems to want websites to make it practically impossible for minors to even search for each other. Yet that’s increasingly how users of all ages connect with their real-world acquaintances, for whom they may have no other contact information. Isn’t the point of social networking to be social and share more? If a child or a parent doesn’t like that openness, why isn’t it sufficient that they be empowered to change that setting on their own? Why must the law mandate it by default and tell them what is supposedly best for them?
Nicklas Lundblad & Betsy Masiello made a similar point in their important recent essay on “Opt-In Dystopias.” They noted that more formal opt-in consent models may involve many trade-offs and downsides that need to be considered relative to opt-out models, which are currently more prevalent online. “The decisions a user makes under an opt-in model are less informed” they argue, because “the initial decision to opt-in to a service is made without any knowledge of what value that service provides,” and, therefore, “under an opt-in regime a decision can probably never be wholly informed.” They continue: “If instead of thinking about privacy decisions as requiring ex-ante consent, we thought about systems that structured an ongoing contractual negotiation between the user and service provider, we might mitigate some of these harmful effects.”
The crucial point here is that choice should lie with the consumer and not be set from above. Companies should empower the consumer — including kids — with more and better tools and then let them decide what their privacy settings should be. Government need not “nudge” consumers or companies in paternalistic ways based upon the values of unelected bureaucrats. Most importantly, policymakers should not conflate “privacy by design” with privacy by default. Let experimentation continue and let consumers make these determinations for themselves.