Again, We Should Not Ban All Teens from Social Media
Technology Liberation Front – Keeping politicians’ hands off the Net & everything else related to technology
https://techliberation.com/2022/07/05/again-we-should-not-ban-all-teens-from-social-media/
July 5, 2022

A growing number of conservatives are calling for Big Government censorship of social media speech platforms. Censorship proposals are to conservatives what price controls are to radical leftists: completely outlandish, unworkable, and usually unconstitutional fantasies of controlling things that are ultimately much harder to control than they realize. And the costs of even trying to impose and enforce such extremist controls are always enormous.

Earlier this year, The Wall Street Journal ran a response I wrote to a column by Peggy Noonan in which she proposed banning everyone under 18 from all social-media sites (“We Can Protect Children and Keep the Internet Free,” Apr. 15). I expanded upon that letter in an essay here entitled, “Should All Kids Under 18 Be Banned from Social Media?” National Review also recently published an article by Christine Rosen in which she likewise proposes to “Ban Kids from Social Media.” And just this week, Zach Whiting of the Texas Public Policy Foundation published an essay on “Why Texas Should Ban Social Media for Minors.”

I’ll offer a few more thoughts here in addition to what I’ve already said elsewhere. First, here is my response to the Rosen essay. National Review gave me 250 words to respond to her proposal:

While admitting that “law is a blunt instrument for solving complicated social problems,” Christine Rosen (“Keep Them Offline,” June 27) nonetheless downplays the radicalness of her proposal to make all teenagers criminals for accessing the primary media platforms of their generation. She wants us to believe that allowing teens to use social media is the equivalent of letting them operate a vehicle, smoke tobacco, or drink alcohol. This is false equivalence. Being on a social-media site is not the same as operating two tons of steel and glass at speed or using mind-altering substances. Teens certainly face challenges and risks in any new media environment, but to believe that complex social pathologies did not exist before the Internet is folly. Echoing the same “lost generation” claims made by past critics who panicked over comic books and video games, Rosen asks, “Can we afford to lose another generation of children?” and suggests that only sweeping nanny-state controls can save the day. This cycle is apparently endless: Those “lost generations” grow up fine, only to claim it’s the  next generation that is doomed! Rosen casually dismisses free-speech concerns associated with mass-media criminalization, saying that her plan “would not require censorship.” Nothing could be further from the truth. Rosen’s prohibitionist proposal would deny teens the many routine and mostly beneficial interactions they have with their peers online every day. While she belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to be a better response than the repressive regulatory regime she would have Big Government impose on society.

I have a few more things to say beyond these brief comments.

First, as I alluded to in my short response to Rosen, we’ve heard similar “lost generation” stories before. Rosen might as well be channeling the ghost of Dr. Fredric Wertham (author of Seduction of the Innocent), who in the 1950s declared comic books a public health menace and lobbied lawmakers to restrict teen access to them, insisting such comics were “the cause of a psychological mutilation of children.” The same sort of “lost generation” predictions were commonplace in countless anti-video game screeds of the 1990s. Critics were writing books with titles like Stop Teaching Our Kids to Kill and referring to video games as “murder simulators.” Ironically, just as the video game panic was heating up, juvenile crime rates were plummeting. But that didn’t stop the pundits and policymakers from suggesting that an entire generation of so-called “vidiots” was headed for disaster. (See my 2019 short history: “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics.”)

It is consistently astonishing to me how persistent this cycle of generational mistrust is. As I noted in a 2012 essay, “We Always Sell the Next Generation Short.” “There has probably never been a generation since the Paleolithic that did not deplore the fecklessness of the next and worship a golden memory of the past,” notes Matt Ridley, author of The Rational Optimist.

For example, in 1948, the poet T. S. Eliot declared: “We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.” We’ve heard parents (and policymakers) make similar claims about every generation since then.

What’s going on here? Why does this cycle of generational pessimism and mistrust persist? In a 1992 journal article, the late journalism professor Margaret A. Blanchard offered this explanation:

“[P]arents and grandparents who lead the efforts to cleanse today’s society seem to forget that they survived alleged attacks on their morals by different media when they were children. Each generation’s adults either lose faith in the ability of their young people to do the same or they become convinced that the dangers facing the new generation are much more substantial than the ones they faced as children.”

In a 2009 book on culture, my colleague Tyler Cowen also noted how, “Parents, who are entrusted with human lives of their own making, bring their dearest feelings, years of time, and many thousands of dollars to their childrearing efforts.” Unsurprisingly, therefore, “they will react with extreme vigor against forces that counteract such an important part of their life program.” This explains why “the very same individuals tend to adopt cultural optimism when they are young, and cultural pessimism once they have children,” Cowen says.

Building on Blanchard’s and Cowen’s observations, I have explained how the simplest explanation for this phenomenon is that many parents and cultural critics have passed through their “adventure window.” The willingness of humans to try new things and experiment with new forms of culture—our “adventure window”—fades rapidly after certain key points in life, as we gradually settle into our ways. As the English satirist Douglas Adams once humorously noted: “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

There is no doubt social media can create or exacerbate certain social pathologies among youth. But pro-censorship conservatives want to take the easy way out with a Big Government media ban for the ages.

Ultimately, it’s a solution that will not be effective. Raising children and mentoring youth is the hardest task we face as adults because simple solutions rarely exist for complex human challenges. The issues kids face are often particularly hard for parents and other adults to grapple with because we fail to fully understand both the unique issues each generation faces and the nature of each new medium that youth embrace. Simplistic solutions, even proposals for outright bans, will not work or solve serious problems.

An outright government ban on online platforms or digital devices is likely never going to happen due to First Amendment constraints. But even ignoring the jurisprudential barriers, bans won’t work for a reason that these conservatives never bother considering: many parents will help their kids get access to those technologies and evade restrictions on their use. Countless parents already do so in violation of COPPA rules, and not just because they worry that their kids won’t have access to what other kids have. Rather, many parents (like me) want to make sure they can communicate more easily with their children, and also want to ensure that their kids can enjoy those technologies and use them to explore the world.

These conservatives might think parents like me are monsters for allowing our (now grown) children to get on social media when they were teens. I wasn’t blind to the challenges, but I recognized that sticking one’s head in the sand or hoping for divine intervention from the Nanny State was impractical and unwise. The hardest conversations I ever had with my kids were about the ugliness they sometimes experienced online, but those conversations were also countered by the many joys that I knew online interactions brought them. Shall I tell you about everything my son learned online before 13 about building model rockets or soapbox derby cars? Or the countless sites my daughter visited gathering ideas for her arts and crafts projects when, before the age of 13, she started hand-painting and selling jean jackets (eventually prompting her to pursue an art school degree)? Again, as I noted in my National Review response, Rosen’s prohibitionist proposal would deny teens these experiences and the countless other routine and entirely beneficial interactions that they have with their peers online every day.

There is simply no substitute for talking to your kids in the most open, understanding, and loving fashion possible. My #1 priority with my own children was not foreclosing all the new digital media platforms and devices at their disposal. That was going to be almost impossible. Other approaches are needed.

Yes, of course, the world can be an ugly place. I mean, have you ever watched the nightly news on television? It’s damn ugly. Shouldn’t we block youth access to it when scenes of war and violence are shown? Newspapers are full of ugliness, too. Should a kid be allowed to see the front page of the paper when it discusses or shows the aftermath of school shootings, acts of terrorism, or even just natural disasters? I could go on, but you get the point. And you could try to claim that somehow today’s social media environment is significantly worse for kids than the mass media of old, but you cannot prove it.

Of course you’ll have anecdotes, and many of them will again point to complex social pathologies. But I have entire shelves full of books on my office wall that made similar claims about the effects of books, the telephone, radio and television, comics, cable TV, every musical medium ever, video games, and advertising efforts across all these mediums. Hundreds upon hundreds of studies were done over the past half century about the effects of depictions of violence in movies, television, and video games. And endless court battles ensued.

In the end, nothing came of it because the literature was inconclusive and frequently contradictory. After many years of panicking about youth and media violence, in 2020 the American Psychological Association issued a new statement slowly reversing course on its misguided past statements about video games and acts of real-world violence. The APA’s old statement said that evidence “confirms [the] link between playing violent video games and aggression.” But the APA has come around and now says that “there is insufficient scientific evidence to support a causal link between violent video games and violent behavior.” More specifically, the APA now says: “Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.”

This is exactly what we should expect to find true for youth and social media. Most serious scholars in the field already note that studies and findings about youth and social media must be carefully evaluated, and that many other factors need to be considered when evaluating claims about complex social phenomena.

While Rosen belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to represent the best first-order response when compared to the repressive regulatory regime she would impose on society.

Finally, I want to reiterate what I said in my brief National Review response about the enormous challenges associated with the mass criminalization of speech platforms. Rosen seems to imagine that all the costs and controversies will lie on the supply side of social media: just call for a ban, and then magically all kids disappear from social media while the big evil tech capitalists eat all the costs and hassles. Nonsense. It is the demand side of criminalization efforts where the most serious costs lie. What do you really think kids are going to do if Uncle Sam suddenly bans everyone under 18 from going on a “social media site,” whatever that very broad term entails? This will become another sad chapter in the history of Big Government prohibitionist efforts that fail miserably, but not before declaring mass groups of people criminals–this time including everyone under 18–and then trying to throw the book at them when they seek to evade those repressive controls. There are better ways to address these problems than with such extremist proposals.


Additional Reading from Adam Thierer on Media & Content Regulation:

How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality
https://techliberation.com/2019/06/19/how-conservatives-came-to-favor-the-fairness-doctrine-net-neutrality/
June 19, 2019

I have been covering telecom and Internet policy for almost 30 years now. During much of that time – which included a nine-year stint at the Heritage Foundation – I have interacted with conservatives on various policy issues and often worked very closely with them to advance certain reforms.

If I divided my time in Tech Policy Land into two big chunks of time, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990 – 2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005 – present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.

Many conservatives are suddenly changing their tune, however. President Trump and Sen. Ted Cruz, for example, have been increasingly critical of both traditional media and new tech companies in various public statements and have suggested an openness to increased regulation. The President has gone after old and new media outlets alike, while Sen. Cruz (along with others like Sen. Lindsey Graham) has suggested during congressional hearings that increased oversight of social media platforms is needed, including potential antitrust action.

Meanwhile, during his short time in office, Sen. Josh Hawley (R-Mo.) has become one of the most vocal Internet critics on the Right. In a shockingly-worded USA Today editorial in late May, Hawley said that “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” He even referred to these sites as “parasites” and blamed them for a long list of social problems, leading him to suggest that “we’d be better off if Facebook disappeared” along with various other sites and services.

Hawley’s moral panic over social media has now bubbled over into a regulatory crusade that would unleash federal bureaucrats on the Internet in an attempt to dictate “fair” speech on the Internet. He has introduced an astonishing piece of legislation aimed at undoing the liability protections that Internet providers rely upon to provide open platforms for speech and commerce. If Hawley’s absurdly misnamed new “Ending Support for Internet Censorship Act” is implemented, it would essentially combine the core elements of the Fairness Doctrine and Net Neutrality to create a massive new regulatory regime for the Internet.

The bill would gut the immunities Internet companies enjoy under 47 USC 230 (“Section 230”) of the Communications Decency Act. Eric Goldman of the Santa Clara University School of Law has described Section 230 as the “best Internet law” and “a big part of the reason why the Internet has been such a massive success.” Indeed, as I pointed out in a Forbes column on the occasion of its 15th anniversary, Section 230 is “the foundation of our Internet freedoms” because it gives online intermediaries generous leeway to determine what content and commerce travels over their systems without the fear that they will be overwhelmed by lawsuits if other parties object to some of that content.

The Hawley bill would overturn this important legal framework for Internet freedom and instead replace it with a new “permissioned” approach. In true “Mother-May-I” style, Internet companies would need to apply for an “immunity certification” from the FTC, which would undertake investigations to determine if the petitioning platform satisfied a “requirement of politically unbiased content moderation.”

The vague language of the measure is an open invitation to massive political abuse. The entirety of the bill hinges upon the ability of Federal Trade Commission officials to define and enforce “political neutrality” online. Let’s consider what this will mean in practice.

Under the bill, the FTC must evaluate whether platforms have engaged in “politically biased moderation,” which is defined as moderation practices that are supposedly “designed to negatively affect” or that “disproportionately restrict[] or promote access to … a political party, political candidate, or political viewpoint.” As Blake Reid of the University of Colorado Law School rightly asks, “How, exactly, is the FTC supposed to figure out what the baseline is for ‘disproportionately restricting or promoting’? How much access or availability to information about political parties, candidates, or viewpoints is enough, or not enough, or too much?”

There is no Goldilocks formula for getting things just right when it comes to content moderation. It’s a trial-and-error process that is nightmarishly difficult because of the endless eye-of-the-beholder problems associated with constructing acceptable use policies for large speech platforms. We struggled with the same issues in the broadcast and cable era, but they have been magnified a million-fold in the era of the global Internet with the endless tsunami of new content that hits our screens and devices every day. “Do we want less moderation?” asks Sec. 230 guru Jeff Kosseff. “I think we need to look at that question hard. Because we’re seeing two competing criticisms of Section 230,” he notes. “Some argue that there is too much moderation, others argue that there is not enough.”

The Hawley bill seems to imagine that a handful of FTC officials will magically be able to strike the right balance through regulatory investigations. That’s a pipe dream, of course, but let’s imagine for a moment that regulators could somehow sort through all the content on message boards, tweets, video clips, live streams, gaming sites, and whatever else, and then somehow figure out what constituted a violation of “political neutrality” in any given context. That would actually be a horrible result because let’s be perfectly clear about what that would really be: It would be a censorship board. By empowering unelected bureaucrats to make decisions about what constitutes “neutral” or “fair” speech, the Hawley measure would, as Elizabeth Nolan Brown of Reason summarizes, “put Washington in charge of Internet speech.” Or, as Sen. Ron Wyden argues more bluntly, the bill “will turn the federal government into Speech Police.” “Perhaps a more accurate title for this bill would be ‘Creating Internet Censorship Act,'” Eric Goldman is forced to conclude.

The measure is creating other strange bedfellows. You won’t see Berin Szoka of TechFreedom and Harold Feld of Public Knowledge agreeing on much, but they both quickly and correctly labelled Hawley’s bill a “Fairness Doctrine for the Internet.” That is quite right, and much like the old Fairness Doctrine, Hawley’s new Internet speech control regime would be open to endless political shenanigans as parties, policymakers, companies, and the various complainants line up to have their various political beefs heard and acted upon. “That’s the kind of thing Republicans said was unconstitutional (and subject to FCC agency capture and political manipulation) for decades,” says Daphne Keller of the Stanford Center for Internet & Society. Moreover, during the Net Neutrality holy wars, GOP conservatives endlessly blasted the notion that bureaucrats should be determining what constitutes “neutrality” online because it, too, would result in abuses of the regulatory process. Yet Sen. Hawley’s bill would now mandate that exact same thing.

What is even worse is that, as law professor Josh Blackman observes, “the bill also makes it exceedingly difficult to obtain a certification” because applicants need a supermajority of 4 of the 5 FTC Commissioners. This is a public choice fiasco waiting to happen. Anyone who has studied the long, sordid history of broadcast radio and television licensing understands the danger associated with politicizing certification processes. The lawyers and lobbyists in the DC “swamp” will benefit from all the petitioning and paperwork, but it is not clear how creating a regulatory certification regime for Internet speech really benefits the general public (or even conservatives, for that matter).

Former FTC Commissioner Josh Wright identifies another obvious problem with the Hawley Bill: it “offers the choice of death by bureaucratic board or the plaintiffs’ bar.” That’s because by weakening Sec. 230’s protections, Hawley’s bill could open the floodgates to waves of frivolous legal claims in the courts if companies can’t get (or lose) certification. The irony of that result, of course, is that this bill could become a massive gift to the tort bar that Republicans love to hate!

Of course, if the law ever gets to court, it might be ruled unconstitutional. “The terms ‘politically biased’ and ‘moderation’ would have vagueness and overbreadth problems, as they can chill protected speech,” Josh Blackman argues. So it could, perhaps, be thrown out like earlier online censorship efforts. But a lot of harm could be done—both to online speech and competition—in the years leading up to a final determination about the law’s constitutionality by higher courts.

What is most outrageous about all this is that the core rationale behind Hawley’s effort—the idea that conservatives are somehow uniquely disadvantaged by large social media platforms—is utterly preposterous. In May, the Trump Administration launched a “tech bias” portal which “asked Americans to share their stories of suspected political bias.” The portal is already closed and it is unclear what, if anything, will come out of this effort. But this move and Hawley’s proposal point to the broader trend of conservatives getting more comfortable asking Big Government to redress imaginary grievances about supposed “bias” or “exclusion.”

In reality, today’s social media tools and platforms have been the greatest thing that ever happened to conservatives. Mr. Trump owes his presidency to his unparalleled ability to directly reach his audience through Twitter and other platforms. As recently as June 12, President Trump tweeted, “The Fake News has never been more dishonest than it is today. Thank goodness we can fight back on Social Media.” Well, there you have it!

Beyond the President, one need only peruse any social media site for a few minutes to find an endless stream of conservative perspectives on display. This isn’t exclusion; it’s amplification on steroids. Conservatives have more soapboxes to stand on and preach than ever before in the history of this nation.

Finally, if they were true to their philosophical priors, conservatives also would not be insisting that they have any sort of “right” to be on any platform. These are private platforms, after all, and it is outrageous to suggest that conservatives (or any other person or group) are entitled to a spot on any of them.

Some conservatives are fond of ridiculing liberals for being “snowflakes” when it comes to other free speech matters, such as free speech on college campuses. Many times they are right. But one has to ask who the real snowflakes are when conservative lawmakers are calling on regulatory bureaucracies to reorder speech on private platforms based on the mythical fear of not getting “fair” treatment. One also cannot help but wonder if those conservatives have thought through how this new Internet regulatory regime will play out once a more liberal administration takes back the reins of power. Conservatives will only have themselves to blame when the Speech Police come for them.


Addendum: Several folks have pointed out another irony associated with Hawley’s bill: it would greatly expand the powers of the administrative state, which conservatives already (correctly) feel has too much broad, unaccountable power. I should have said more on that point, but here’s a nice comment from David French of National Review, which alludes to that problem and then ties it back to my closing argument above – i.e., that this proposal will come back to haunt conservatives in the long run:

when coercion locks in — especially when that coercion is tied to constitutionally suspect broad and vague policies that delegate immense powers to the federal government — conservatives should sound the alarm. One of the best ways to evaluate the merits of legislation is to ask yourself whether the bill would still seem wise if the power you give the government were to end up in the hands of your political opponents. Is Hawley striking a blow for freedom if he ends up handing oversight of Facebook’s political content to Bernie Sanders? I think not.

Additional thoughts on the Hawley bill:

Josh Wright

Daphne Keller

Blake Reid

TechFreedom

Josh Blackman

Sen. Ron Wyden

Jeff Kosseff

Eric Goldman

CCIA

NetChoice

Internet Association

David French at National Review

John Samples

There Was No “Golden Age” of Broadcast Regulation
https://techliberation.com/2019/06/07/there-was-no-golden-age-of-broadcast-regulation/
June 7, 2019

Slate recently published an astonishing piece of revisionist history under the title, “Bring Back the Golden Age of Broadcast Regulation,” which suggested that the old media regulatory model of the past would be appropriate for modern digital media providers and platforms. In the essay, April Glaser suggests that policymakers should resurrect the Fairness Doctrine and a host of old Analog Era content controls to let regulatory bureaucrats address Digital Age content moderation concerns.

In a tweetstorm, I highlighted a few examples of why the so-called Golden Era wasn’t so golden in practice. I began by noting that the piece ignores the troubling history of FCC speech controls and unintended consequences of regulation. That regime gave us limited, bland choices–and a whole host of First Amendment violations. We moved away from that regulatory model for very good reasons.

For those glorifying the Fairness Doctrine, I encourage them to read the great Nat Hentoff’s excellent essay, “The History & Possible Revival of the Fairness Doctrine,” about the real-world experience of life under the FCC’s threatening eye.

Broadcasters were harassed in ways that were both humorous and horrifying. For example, go back and review the amazing FCC (and FBI!) investigation of The Kingsmen’s song “Louie Louie,” prompted by fears about its unintelligible lyrics.

Or go back and read former CBS president Fred Friendly’s 1975 book (The Good Guys, the Bad Guys & the First Amendment) about abuses of the Fairness Doctrine during both Republican and Democratic administrations. The material from the Kennedy years, which I summarized in an old book, is quite shocking.

And then there was the “golden era” of broadcast regulation under Nixon, when regulatory harassment intensified to counter what had happened during Democratic administrations. Jesse Walker has well summarized those dark days.

Also read Tom Hazlett on the Nixon years and all the broadcast meddling that happened on an ongoing basis. “License harassment of stations considered unfriendly to the Administration became a regular item on the agenda at White House policy meetings,” he notes. Even the Smothers Brothers became victims!

Hazlett also perfectly summarized the choice before us regarding whether to let regulatory bureaucracies decide what is “fair” in media. It is exactly the same question we should be asking today when people suggest reviving the old “golden era” media control regime.

Keep in mind, the examples of content meddling cited here mostly involve the Fairness Doctrine. Indecency rules, the Financial Interest and Syndication Rules, and other FCC policies gave politicians other levers for exerting influence and control over speech. The meddling was endless.

This was no “Golden Era” of broadcast regulation–unless, that is, you prefer bland, boring, limited choices and endless bureaucratic harassment of media. No amount of wishful thinking or revisionist history can counter the hard realities of that dismal era in our nation’s history. We should wholeheartedly and vociferously reject calls to resurrect it.

Celebrating 20 Years of Internet Free Speech & Free Exchange
https://techliberation.com/2017/06/22/celebrating-20-years-of-internet-free-speech-free-exchange/
June 22, 2017

[originally published on Plaintext on June 21, 2017.]

This summer, we celebrate the 20th anniversary of two developments that gave us the modern Internet as we know it. One was a court case that guaranteed online speech would flow freely, without government prior restraints or censorship threats. The other was an official White House framework for digital markets that ensured the free movement of goods and services online.

The result of these two vital policy decisions was an unprecedented explosion of speech freedoms and commercial opportunities that we continue to enjoy the benefits of twenty years later.

While it is easy to take all this for granted today, it is worth remembering that, in the long arc of human history, no technology or medium has more rapidly expanded the range of human liberties — both speech and commercial liberties — than the Internet and digital technologies. But things could have turned out much differently if not for the crucially important policy choices the United States made for the Internet two decades ago.

First, on June 26, 1997, the Supreme Court handed down its landmark decision in Reno v. ACLU, which struck down the Communications Decency Act’s provisions seeking to regulate online content under the old broadcast media standard. The Court concluded that there was “no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium” and rejected the congressional effort to pigeonhole this exciting new medium into the archaic censorship regimes of the past.

The Reno decision was tremendously important in protecting online speakers from the chilling effect of government “indecency” regulations. The decision also set a strong legal precedent and was cited in countless subsequent decisions involving not only online speech, but also efforts to regulate video game content.

Second, in July 1997, the Clinton Administration released The Framework for Global Electronic Commerce, a document that outlined the US government’s new policy approach toward the Internet and the emerging digital economy. The Framework was a bold vision statement that endorsed comprehensive online freedom of exchange, saying that “the private sector should lead [and] the Internet should develop as a market driven arena not a regulated industry.” The Administration rejected a restrictive regulatory regime for commercial activities and instead recommended reliance on civil society, contractual negotiations, voluntary agreements, and industry self-regulation.

To “avoid undue restrictions on electronic commerce,” the vision statement recommended that “parties should be able to enter into legitimate agreements to buy and sell products and services across the Internet with minimal government involvement or intervention.” But, “[w]here governmental involvement is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.”

Taken together, the Reno decision and the Clinton Administration’s Framework acted as a Magna Carta moment for the Internet and digital technologies. They signaled that “permissionless innovation” would become America’s governance stance toward online speech and commerce.

As I defined it in a book on the subject, permissionless innovation “refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.” The primary advantage of permissionless innovation as a governance disposition is that it sends a clear green light to citizens telling them they are at liberty to pursue their own interests and passions, free from the suffocating grip of prior restraints on free speech and free exchange.

But the Reno decision and the Clinton Administration’s Framework are not the only critical policy decisions that helped enshrine permissionless innovation as the lodestar of online policy in the US. In the mid-1990s, the Clinton Administration made the decision to allow open commercialization of the Internet, which was previously just the domain of government agencies and university researchers. Even more crucially, when Congress passed and President Bill Clinton signed into law the Telecommunications Act of 1996, lawmakers made it clear that traditional analog-era communications and media regulatory regimes would generally not be applied to the Internet.

The Telecom Act also included an obscure provision known as “Section 230,” which immunized online intermediaries from onerous liability for the content and communications that traveled over their networks. Section 230 was hugely important in that it let online speech and commerce flourish without the constant threat of frivolous lawsuits looming overhead. Internet scholar David Post has argued that “it is impossible to imagine what the Internet ecosystem would look like today without [Section 230]. Virtually every successful online venture that emerged after 1996 — including all the usual suspects, viz. Google, Facebook, Tumblr, Twitter, Reddit, Craigslist, YouTube, Instagram, eBay, Amazon — relies in large part (or entirely) on content provided by their users, who number in the hundreds of millions, or billions,” he notes. It is unlikely that the vibrant marketplace of online speech and commerce we enjoy today could have existed without the protections afforded by Section 230.

Finally, in 1998, another important legislative development occurred when Congress passed the Internet Tax Freedom Act, which blocked all levels of government in the US from imposing discriminatory taxes on the Internet. That made it clear that the Net would not be milked as a “cash cow” the way previous communications systems had been.

So, let’s recap how policymakers generally got policy right for the Internet in the mid-1990s by enshrining permissionless innovation as the law of the land:

  • The Executive Branch set the tone for online freedom by fully privatizing the underlying network and then establishing a governance vision based upon minimal government interference with online speech and exchange.
  • The Legislative Branch generally endorsed the Clinton Administration’s vision for the Internet and digital technologies by ensuring that new policies would not be based upon the failed regulatory and tax policies of the past.
  • The Judicial Branch upheld the centrality of the First Amendment in the Information Age and made it clear that this new medium for speech would be granted the strongest protection against government encroachments on freedom of speech and expression.

The combined effect of these wise, bipartisan policy decisions was that the Net and digital tech were “born free” instead of being born into regulatory captivity. We continue to enjoy the fruits of these freedoms today as citizens here in the US and across the world take advantage of the unprecedented ability to connect and communicate to pursue their passions and interests as they see fit.

There’s still more work to be done, however. Online platforms and digital technologies continue to come under attack from regulatory activists both here and abroad. Many governments continue to push back against these online speech and commercial freedoms, meaning we’ll need to redouble our efforts to highlight and defend the benefits of preserving these important victories.

Finally, as the underlying drivers of the Digital Revolution continue to spread into other segments of the economy, these freedoms will come into conflict with older top-down regulatory regimes for automobiles, aviation, medical technology, finance, and much more. This will create an epic conflict of governance visions between the Internet’s permissionless innovation model versus the precautionary, command-and-control regulatory regimes of the industrial age. We already see tension at work in policy deliberations over the Internet of Things, “big data,” driverless cars, commercial drones, robotics, artificial intelligence, 3D printing, virtual reality, the sharing economy, and others.

If policymakers hope to preserve and extend the benefits of the hard-fought victories of the Internet’s past twenty years, they will need to restate and reinvigorate their commitment to permissionless innovation to help spur the next great technological revolutions in these and other fields.

Patrick Ruffini on the defeat of SOPA https://techliberation.com/2013/07/02/patrick-ruffini-on-the-defeat-of-sopa/ https://techliberation.com/2013/07/02/patrick-ruffini-on-the-defeat-of-sopa/#respond Tue, 02 Jul 2013 10:00:23 +0000 http://techliberation.com/?p=45095

Patrick Ruffini, political strategist, author, and President of Engage, a digital agency in Washington, DC, discusses his latest book with coauthors David Segal and David Moon: Hacking Politics: How Geeks, Progressives, the Tea Party, Gamers, Anarchists, and Suits Teamed Up to Defeat SOPA and Save the Internet. Ruffini covers the history behind SOPA, its implications for Internet freedom, the “Internet blackout” in January of 2012, and how the threat of SOPA united activists, technology companies, and the broader Internet community.

Declan McCullagh on the NSA leaks https://techliberation.com/2013/06/18/declan-mccullagh/ https://techliberation.com/2013/06/18/declan-mccullagh/#respond Tue, 18 Jun 2013 10:00:21 +0000 http://techliberation.com/?p=44980

Declan McCullagh, chief political correspondent for CNET and former Washington bureau chief for Wired News, discusses recent leaks of NSA surveillance programs. What do we know so far, and what more might be unveiled in the coming weeks? McCullagh covers legal challenges to the programs, the Patriot Act, the Fourth Amendment, email encryption, the media and public response, and broader implications for privacy and reform.

Chilean in Chains: Net Neutrality Does Not Mean Internet Freedom https://techliberation.com/2013/03/26/chilean-in-chains-net-neutrality-does-not-mean-internet-freedom/ https://techliberation.com/2013/03/26/chilean-in-chains-net-neutrality-does-not-mean-internet-freedom/#comments Tue, 26 Mar 2013 11:59:16 +0000 http://techliberation.com/?p=44349

Free Press is holding its National Conference for Media Reform next week. The conference agenda describes the Internet as “central” to freedom of expression, which is how all mass media technologies have been described since the invention of the printing press ushered in the mass communications era. Despite recognizing that the Internet is a mass media technology, Free Press does not believe the Internet should be accorded the same constitutional protections as other mass media technologies. Like so many others, Free Press has forgotten that the dangers posed by government control of the Internet are similar to those posed by government control of earlier mass media technologies. In a stunning reversal of the concepts embodied in the Bill of Rights, Free Press believes the executive and legislative branches of government are the source of protection for the freedom of expression. In their view, “Internet freedom means net neutrality.”

Tell that to Rodrigo Ferrari, a Chilean blogger who knows firsthand that net neutrality laws don’t protect freedom of expression on the Internet. According to a letter sent by PEN America to United States Attorney General Eric Holder and Secretary of State John Kerry, Ferrari created what was clearly a parody Twitter account that mimicked Andronico Luksic, a wealthy Chilean businessman. PEN America alleges that, at the request of Chilean authorities, the US Departments of State and Justice pressured Twitter to release data identifying Rodrigo as the owner of the parody account and may have provided this information to Chilean authorities without a subpoena or a formal request from the court hearing the case. As a result of US government action, Twitter shut down Ferrari’s parody account, and Chilean authorities are prosecuting Ferrari for the crime of “usurpation of identity” (a form of identity theft), which could result in a prison sentence.

If net neutrality actually meant Internet freedom, Ferrari would not be facing time in prison. Though both Chile and the US have imposed net neutrality regulations, that didn’t stop government authorities in either country from conspiring to deprive Rodrigo Ferrari of his privacy and subject him to criminal prosecution for expressing his views using the Internet. In 2010, Chile became the first country to impose net neutrality regulations, including a regulation that expressly requires that Internet service providers guarantee the privacy of users. At the time, one enthusiastic blogger said, “Chile is China’s antonym in Internet world.” Another asked, “If Chile can, why can’t we?” Later that same year, the Federal Communications Commission followed the Chilean example by imposing net neutrality regulations on Internet service providers in the US, shortly before the European Union rejected net neutrality rules as “unnecessary” and potentially harmful to innovation and investment.

Why didn’t Chilean and US net neutrality regulations “mean” Internet freedom for Ferrari? Because net neutrality regulations don’t restrain government authorities and are not intended to protect consumers in any event. Net neutrality regulations are designed to maximize the access of content providers to consumers and consumer information that can be used to sell advertisements. That is why the net neutrality regulations adopted in Chile and the US apply only to “Internet service providers” (i.e., the companies that provide residential Internet connections). Twitter is not currently considered an “Internet service provider,” which means it can “block” whatever expression it wants.

To be clear, though I don’t support the actions of US authorities as alleged by PEN America, I’m not advocating that net neutrality regulations be extended to Twitter or other content providers. Ironically, however, neither is Free Press or any other net neutrality advocate. They support government restrictions applicable only to network operators despite evidence that Internet content is being “blocked” by content providers for commercial reasons. Net neutrality advocates also oppose First Amendment protection for Internet service providers despite evidence that the US government is using the Internet in a way that could chill our freedom of expression.

The asymmetrical approach to Internet regulation supported by net neutrality advocates may have succeeded in distorting the economics of the Internet marketplace, but the evidence indicates it has done nothing to enhance “Internet freedom.” Just ask Rodrigo Ferrari.

Gabriella Coleman on the ethics of free software https://techliberation.com/2013/01/08/gabriella-coleman-2/ https://techliberation.com/2013/01/08/gabriella-coleman-2/#respond Tue, 08 Jan 2013 14:15:33 +0000 http://techliberation.com/?p=43410

Gabriella Coleman, the Wolfe Chair in Scientific and Technological Literacy in the Art History and Communication Studies Department at McGill University, discusses her new book, “Coding Freedom: The Ethics and Aesthetics of Hacking,” which has been released under a Creative Commons license.

Coleman, whose background is in anthropology, shares the results of her cultural survey of free and open source software (F/OSS) developers, the majority of whom, she found, shared similar backgrounds and world views. Among these similarities were an early introduction to technology and a passion for civil liberties, specifically free speech.

Coleman explains the ethics behind hackers’ devotion to F/OSS, the social codes that guide its production, and the political struggles through which hackers question the scope and direction of copyright and patent law. She also discusses the tension between the overtly political free software movement and the “politically agnostic” open source movement, as well as what the future of the hacker movement may look like.

Morozov’s Algorithmic Auditing Proposal: A Few Questions https://techliberation.com/2012/11/19/morozovs-algorithmic-auditing-proposal-a-few-questions/ https://techliberation.com/2012/11/19/morozovs-algorithmic-auditing-proposal-a-few-questions/#comments Mon, 19 Nov 2012 15:25:58 +0000 http://techliberation.com/?p=42844

In a New York Times op-ed this weekend entitled “You Can’t Say That on the Internet,” Evgeny Morozov, author of The Net Delusion, worries that Silicon Valley is imposing a “deeply conservative” “new prudishness” on modern society. The cause, he says, is “dour, one-dimensional algorithms, the mathematical constructs that automatically determine the limits of what is culturally acceptable.” He proposes that some form of external algorithmic auditing be undertaken to counter this supposed problem. Here’s how he puts it in the conclusion of his essay:

Quaint prudishness, excessive enforcement of copyright, unneeded damage to our reputations: algorithmic gatekeeping is exacting a high toll on our public life. Instead of treating algorithms as a natural, objective reflection of reality, we must take them apart and closely examine each line of code. Can we do it without hurting Silicon Valley’s business model? The world of finance, facing a similar problem, offers a clue. After several disasters caused by algorithmic trading earlier this year, authorities in Hong Kong and Australia drafted proposals to establish regular independent audits of the design, development and modifications of computer systems used in such trades. Why couldn’t auditors do the same to Google? Silicon Valley wouldn’t have to disclose its proprietary algorithms, only share them with the auditors. A drastic measure? Perhaps. But it’s one that is proportional to the growing clout technology companies have in reshaping not only our economy but also our culture.

It should be noted that in a Slate essay this past January, Morozov had also proposed that steps be taken to root out lies, deceptions, and conspiracy theories on the Internet.  Morozov was particularly worried about “denialists of global warming or benefits of vaccination,” but he also wondered how we might deal with 9/11 conspiracy theorists, the anti-Darwinian intelligent design movement, and those that refuse to accept the link between HIV and AIDS.

To deal with that supposed problem, he recommended that Google “come up with a database of disputed claims” or “exercise a heavier curatorial control in presenting search results,” to weed out such things. He suggested that the other option “is to nudge search engines to take more responsibility for their index and exercise a heavier curatorial control in presenting search results for issues” that someone (he never says who) determines to be conspiratorial or anti-scientific in nature.

Taken together, these essays can be viewed as a preliminary sketch of what could become a comprehensive information control apparatus instituted at the code layer of the Internet. Morozov absolutely refuses to be nailed down on the details of that system, however. In a response to his earlier Slate essay, I argued that Morozov seemed to be advocating some sort of Ministry of Truth for online search, although he came up short on the details of who or what should play that role. But in both that piece and his New York Times essay this weekend, he implies that greater oversight and accountability are necessary.  “Is it time for some kind of a quality control system [for the Internet]?” he asked in his Slate op-ed. Perhaps it would be the algorithmic auditors he suggests in his new essay. But who, exactly, are those auditors? What is the scope of their powers?

When I (and others) made inquiries via Twitter requesting greater elaboration on these questions, Morozov summarily dismissed any conversation on the point. Worse yet, he engaged in what is becoming a regular Morozov debating tactic on Twitter: nasty, sarcastic, dismissive responses that call into question the intellectual credentials of anyone who even dares to ask him a question about his proposals.  Unless you happen to be Bruno Latour — the obtuse French sociologist and media theorist who Morozov showers with boundless, adoring praise — you can usually count on Morozov to dismiss you and your questions or concerns in a fairly peremptory fashion.

I’m perplexed by what leads Morozov to behave so badly. When I first met him a couple of years ago, it was at a Georgetown University event he invited me to speak at. He seemed like an agreeable, even charming, fellow in person. But on Twitter, Morozov bares his fangs at every juncture and spits out venomous missives and retorts that I would call sophomoric except that it would be an insult to sophomores everywhere. Morozov even accuses me of “trolling” him whenever I ask him questions on Twitter, even though I am doing nothing more than posing the same sort of hard questions to him that he regularly poses to others (albeit in a much more snarky fashion).  He always seems eager to dish it out, but then throws a Twitter temper tantrum whenever the roles are reversed and the tough questions come his way. Perhaps Morozov is miffed by some of what I had to say in my mixed review of his first book, The Net Delusion, or my Forbes column that raised questions about his earlier proposal for an Internet “quality control” regime.  But I invite others to closely read the tone of those two essays and tell me whether I said anything to warrant Morozov’s wrath. (In fact, I actually said some nice things about his book in that review and later named it the most important information technology policy book of the year.)

Regardless of what motivates his behavior, I do not think it is unreasonable to ask for more substantive responses from Morozov when he is making grand pronouncements and recommendations about how online culture and commerce should be governed. The best I could get him to say on Twitter is that he only had 1,200 words to play with in his latest Times op-ed and that more details about his proposal would be forthcoming. Well, in the spirit of getting that conversation going, allow me to outline a few questions:

1) What is the specific harm here that needs to be addressed?

  • Do you have evidence of systematic algorithmic manipulation or abuse by Google, Apple, or anyone else, for that matter? Or is this all just about a handful of anecdotes that seemed to be corrected fairly quickly?

2) What standard or metric should we use to gauge the extent of this problem, assuming we determine it is a problem at all?

  • To the extent autocomplete results are what troubles you, can you explain how individuals or entities are “harmed” by those results?
  • If this is about reputation, what is your theory of reputational harm, and when is it legally actionable?
  • If this is about informational quality or “truth,” can you explain what would constitute success?
  • Can you appreciate the concerns/values on the other side of this that might motivate some degree of algorithmic tailoring? For example, some digital intermediaries may seek to curb the use of a certain amount of vulgarity, hate speech, or other offensive content on their sites since they are broad-based platforms with diverse audiences. (That’s why most search providers default to “moderate” filtering for image searches, for example.) While I think we both favor maximizing free speech online, do you accept that some of this private speech and content balancing is entirely rational and has, to some extent, always gone on? Also, aren’t there plenty of other ways to find the content you’re looking for besides just Google, which you seem preoccupied with?

3) What is the proposed remedy and what are its potential costs and unintended consequences?

  • Can you explain the mechanism of control that you would like to see put in place to remedy this supposed problem? Would it be a formal regulatory regime?
  • Have you considered the costs and/or potential unintended consequences associated with an algorithmic auditing regime if it takes on a regulatory character?
  • For example, if you are familiar with how long many regulatory proceedings can take to run their course, do you not fear the consequences of interminable delays and political gaming?
  • How often should the “auditing” you propose take place? Would it be a regular affair, or would it be driven by complaints?

4) Is this regime national in scope? Global? How will it be coordinated/administered?

  • In the United States, would the Federal Communications Commission or the Federal Trade Commission be granted new authority to carry out algorithmic audits, or would a new entity need to be created?
  • Is additional regulatory oversight necessary and, if so, how would it be coordinated nationally and globally?

5) Are there freedom of speech/censorship considerations that flow from (3) and (4)?

  • At least in the United States, algorithmic audits that had the force of law behind them could raise serious freedom of speech concerns (see Yoo’s paper on “architectural censorship” and the recent work of Volokh & Grimmelmann on search regulation). Long-settled First Amendment law (see, e.g., Tornillo) ensures that editorial discretion is housed in private hands. How would you propose we get around these legal obstacles?

6) Are there less-restrictive alternatives to administrative regulation?

  • Might we be able to devise various alternative dispute resolution techniques to flag problems and deal with them in a non-regulatory / non-litigious fashion?
  • Could voluntary industry best practices and/or codes of conduct be developed to assist these efforts?
  • Could an entity like the Broadband Internet Technical Advisory Group (BITAG) help sort out “neutrality” claims in this context, as they do in the broadband context?
  • Might it be the case that social norms and pressure can keep this problem in check? The very act of shining light on silly algorithmic screw-ups — much as you have done in your recent op-eds — goes a long way toward doing just that.

I hope that Morozov finds these questions to be reasonable. My skepticism of most Internet regulation is no secret, so I suppose that Morozov or others might attempt to dismiss some of these questions as the paranoid delusions of a wild-eyed libertarian. But I suspect that I’m not the only one who feels uneasy with Morozov’s proposals since they could open the door for regulators across the globe to engage in “algorithmic auditing” on the flimsy assumption that some great harm exists from a few silly autocomplete suggestions or a couple of conspiratorial websites. We deserve answers to questions like these before we start calling in the Code Cops to assume greater control over online speech.

If You Meet a Censor, Ask Them This One Question https://techliberation.com/2012/05/10/if-you-meet-a-censor-ask-them-this-one-question/ https://techliberation.com/2012/05/10/if-you-meet-a-censor-ask-them-this-one-question/#comments Thu, 10 May 2012 20:57:57 +0000 http://techliberation.com/?p=41134

Via Twitter, Andrew Grossman brought to my attention this terrifically interesting interview with a Kuwaiti censor that appeared in the Kuwait Times (“Read No Evil – Senior Censor Defends Work, Denies Playing Big Brother“). In the interview, the censor, Dalal Al-Mutairi, head of the Foreign Books Department at the Ministry of Information, speaks in a remarkably candid fashion and casual tone about the job she and other Kuwaiti censors do every day. My favorite line comes when Dalal tells the reporter how working as a censor is so very interesting and enlightening: “I like this work. It gives us experience, information and we always learn something new.”  I bet!  But what a shame that others in her society will be denied the same pleasure of always learning something new. Of course, like all censors, Dalal probably believes that she is doing a great public service by screening all culture and content to make sure the masses do not consume offensive, objectionable, or harmful content.

But here’s where the reporter missed a golden opportunity to ask Dalal the one question that you must always ask a censor if you get to meet one: If the content you are censoring is so destructive to the human soul or psyche, how then is it that you are such a well-adjusted person?  And Dalal certainly seems like a well-adjusted person. Although the reporter doesn’t tell us much about her personal life or circumstances, Dalal volunteers this much about herself and her fellow censors: “Many people consider the censor to be a fanatic and uneducated person, but this isn’t true. We are the most literate people as we have read much, almost every day. We receive a lot of information from different fields. We read books for children, religious books, political, philosophical, scientific ones and many others.” Well of course you do… because you are lucky enough to have access to all that content! But you are also taking steps to make sure the rest of your society doesn’t consume it on the theory that it would harm them or harm public morals in some fashion.  But, again, how is it that you have not been utterly corrupted by it all, Ms. Dalal? After all, you get to consume all that impure, sacrilegious, and salacious stuff! Shouldn’t you be some kind of monster by now?

How can this inconsistency be explained? The answer to this riddle can be found in the “Third-Person Effect Hypothesis.” First formulated by psychologist W. Phillips Davison in 1983, “this hypothesis predicts that people will tend to overestimate the influence that mass communications have on the attitudes and behavior of others. More specifically, individuals who are members of an audience that is exposed to a persuasive communication (whether or not this communication is intended to be persuasive) will expect the communication to have a greater effect on others than on themselves.” While originally formulated as an explanation for how people convinced themselves “media bias” existed where none was present, the third-person-effect hypothesis has provided an explanation for other phenomena and forms of regulation, especially content censorship. Indeed, one of the most intriguing aspects about censorship efforts historically is that it is apparent that many censorship advocates desire regulation to protect others, not themselves, from what they perceive to be persuasive or harmful content. That is, many people imagine themselves immune from the supposedly ill effects of “objectionable” material, or even just persuasive communications or viewpoints they do not agree with, but they claim it will have a corrupting influence on others.

In his brilliant paper, Davison tells this wonderful story of one of the last censor boards in America (and think about that Kuwaiti censor as you read this):

The phenomenon of censorship offers what is perhaps the most interesting field for speculation about the role of the third-person effect. Insofar as faith and morals are concerned, at least, it is difficult to find a censor who will admit to having been adversely affected by the information whose dissemination is to be prohibited. Even the censor’s friends are usually safe from pollution. It is the general public that must be protected. Or else, it is youthful members of the general public, or those with impressionable minds. When Maryland’s State Board of Censors, which had been filtering smut from motion pictures since 1916, was finally allowed to die in June 1981, some of its members issued dire forecasts about the future morals of Maryland and the nation (New York Times, June 29, 1981). Yet the censors themselves had apparently emerged unscathed. One of them stated that over the course of 21 years she had “looked at more naked bodies than 50,000 doctors,” but the effect of this experience was apparently more on her diet than on her morals. “I had to stop eating a lot of food because of what they do with it in these movies,” she is quoted as having told the Maryland Legislature.

I just love that story because it gets to the heart of what is so horribly elitist and ironic about censorship: No one ever thought to test how corrupted the censors themselves had become, even though they consumed all the same stuff they were censoring! If there was anything to the “monkey see, monkey do” theory of media effects (i.e., if you read, see, or hear bad things, then you will do bad things), then these censors should all be dope-smoking, axe-wielding sex addicts. But I bet most of them weren’t. Like Ms. Dalal, they were probably generally well-adjusted members of society. They probably learned how to properly process all that content, even as they had zero faith in the ability of their fellow citizens to do the same.

So, if you ever get a chance to meet an actual censor, make sure to ask them about all the fun stuff they’ve been consuming lately and why it hasn’t turned them into total freaks or madmen!

]]>
https://techliberation.com/2012/05/10/if-you-meet-a-censor-ask-them-this-one-question/feed/ 9 41134
Have We Reached the End of the Road for Video Game Censorship? https://techliberation.com/2011/11/28/have-we-reached-the-end-of-the-road-for-video-game-censorship/ https://techliberation.com/2011/11/28/have-we-reached-the-end-of-the-road-for-video-game-censorship/#comments Mon, 28 Nov 2011 21:13:38 +0000 http://techliberation.com/?p=39189

Yes, we pretty much have. That’s the inescapable conclusion following the U.S. Supreme Court’s historic First Amendment decision in Brown v. EMA back in June, which struck down a California law governing the sale of “violent video games” to minors.  By a 7-2 margin, the court held that video games have First Amendment protections on par with books, film, music and other forms of entertainment.

The folks over at ALEC asked me to explore what happens next and what steps state and local lawmakers can take in a post-Brown world if they wish to address concerns about video game content. My essay appears in the Nov/Dec Inside ALEC newsletter. You can read the entire thing here or via the Scribd embed I have placed down below the fold.

I argue that, going forward, this ruling will force state and local governments to change their approach to regulating all modern media content. Education and awareness-building efforts will be the more fruitful alternative since censorship has now been largely foreclosed.

Game Over for Video Game Censorship – Adam Thierer INSIDE ALEC [November 2011]

]]>
https://techliberation.com/2011/11/28/have-we-reached-the-end-of-the-road-for-video-game-censorship/feed/ 1 39189
Stop the Stop Online Piracy Act! https://techliberation.com/2011/11/01/stop-the-stop-online-piracy-act/ https://techliberation.com/2011/11/01/stop-the-stop-online-piracy-act/#comments Tue, 01 Nov 2011 17:31:55 +0000 http://techliberation.com/?p=38900

For CNET today, I have a long analysis and commentary on the “Stop Online Piracy Act,” introduced last week in the House. The bill is advertised as the House’s version of the Senate’s Protect-IP Act, which was voted out of Committee in May.

It’s very hard to find much positive to say about the House version. While there’s considerable evidence its drafters heard the criticisms of engineers, legal academics, entrepreneurs and venture capitalists, their response was unfortunate.

Engineers pointed out, for example, that court orders requiring individual ISPs to remove or redirect domain name requests would be a futile and dangerous way to block access to “rogue” websites. Truly rogue sites can easily relocate to another domain, or users can simply reach them by IP address and bypass DNS altogether.

There are millions of DNS servers, according to Verisign, so getting all of them to make the change would be impossible, and partial compliance would splinter the system. And redirecting DNS requests is, in some sense, introducing a bug into the system, one that is inconsistent with upcoming security measures such as DNSSEC that are aimed at protecting users from having their lookups hijacked.
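To see why DNS-level blocking is so easy to route around, consider a minimal sketch (hypothetical names throughout): a client that consults its own local override table before ever asking a resolver, which is the moral equivalent of editing a hosts file or pointing at an uncensored third-party DNS server. A court order against an ISP's resolver never touches these lookups.

```python
import socket

# Hypothetical local override table -- the equivalent of a hosts file
# or an alternative resolver. The ISP's (censored) DNS server is never
# consulted for names listed here. The IP below is from the
# documentation range (RFC 5737) and is purely illustrative.
LOCAL_OVERRIDES = {
    "blocked-example.org": "203.0.113.10",
}

def resolve(hostname: str) -> str:
    """Return an IP address for hostname, preferring local overrides."""
    if hostname in LOCAL_OVERRIDES:
        # Bypasses DNS entirely: no query ever leaves the machine.
        return LOCAL_OVERRIDES[hostname]
    # Fall back to the system resolver for everything else.
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    print(resolve("blocked-example.org"))
```

The same one-line lookup is all a “rogue” site needs to publish to stay reachable, which is why engineers argued that DNS redirection imposes real costs on the system while providing almost no enforcement benefit.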

But all the drafters of SOPA seemed to have heard was the part about “futile.” Their response has been to make the DNS provisions vaguer and more open-ended, in hopes that whatever mechanisms the rogue sites come up with to evade the law will also be illegal.  Blocking is now extended not just to “parasite” sites but to a “portion thereof,” for example.

And the Attorney General can now apply for injunctive relief against any “entity” that provides “a product or service designed or marketed for the circumvention or bypassing of measures” taken in response to an earlier court order.

Similar efforts are found throughout SOPA, particularly in the felony streaming provision, and the private right of action (or what the bill calls the “market-based system”) for private enforcement of copyright and trademark abuses.  Where clarity isn’t possible, the drafters have opted for vagueness, open-ended definitions, and hedges.  Even the term “including” is defined, to be clear that it means “including but not limited to.”

The point of the criticism of Protect-IP was instead that it is impossible to regulate technology that is changing so quickly, and that any effort to do so would only prove obsolete on arrival. As previous efforts from CAN-SPAM to ECPA and back make clear, you cannot future-proof legislation aimed at specific features of emerging technologies.

That, unfortunately, is exactly what SOPA tries to do.  And beyond making the legislation clumsy and imprecise, the intentional vagueness greatly increases the potential for unintended consequences.  I describe several unintentionally dangerous examples from SOPA in the CNET piece; other analysts have done the same in pieces listed at the end of this post.

Two good things I found in the 79-page draft:

1.  The failure of Protect-IP to define “nonauthoritative domain name server” has been addressed.  That term is now defined, and the definition looks correct to me.

2.  SOPA recognizes, at least, the better approach to solving the problem of foreign websites that blatantly violate copyright and trademark.  Near the back, Section 205 calls on the State and Commerce Departments to make enforcement of existing international law and treaties regarding information products and services a priority.  This includes the assignment of new attachés dedicated to information products.

Had SOPA started and ended with this provision, there would be little basis to fault its drafters.  If the problem SOPA is attempting to solve, after all, is the scourge of foreign websites that distribute movies, music, and counterfeit goods without a license (often while pretending to be legitimate), then surely the solution is one of foreign and trade policy, not micromanaging Internet protocols.

Instead, we have a bill that treats all U.S. consumers as guilty until proven innocent, and hands Hollywood the keys to the inner workings of the Internet.  Just what they’ve always wanted.

 

Worth reading:

 

]]>
https://techliberation.com/2011/11/01/stop-the-stop-online-piracy-act/feed/ 4 38900
Net Neutrality goes to Court…Again https://techliberation.com/2011/10/04/net-neutrality-goes-to-court-again/ https://techliberation.com/2011/10/04/net-neutrality-goes-to-court-again/#comments Tue, 04 Oct 2011 15:52:56 +0000 http://techliberation.com/?p=38525

On NPR’s Marketplace this morning, I talk about net neutrality litigation with host John Moe.

Nearly a year after the FCC passed controversial new “Open Internet” rules by a 3-2 vote, the White House finally gave approval for the rules to be published last week, unleashing lawsuits from both supporters and detractors.

The supporters don’t have any hope or expectation of getting a court to make the rules more comprehensive.  So why sue?  When lawsuits challenging federal regulations are filed in multiple appellate courts, a lottery determines which court hears a consolidated appeal.

So lawsuits by net neutrality supporters are a procedural gimmick, an effort to take cases challenging the FCC’s authority out of the D.C. Circuit Court of Appeals, which has already made clear the FCC has no legal basis here.

But Verizon’s lawsuit challenges the rules as material changes to existing licenses for spectrum, a challenge that is exclusive to the D.C. Circuit.  If the D.C. Circuit agrees that the rules can be challenged under that provision of the law, then the case stays in D.C.

Beyond the procedure, the substance of Verizon’s challenge will be formidable.  In the 2010 Comcast case, the court eviscerated the FCC’s argument that various provisions of the Communications Act give them the authority to police broadband providers.

And the Open Internet order largely repeated those arguments, a sure sign that the agency really doesn’t expect to win here.  The December vote was largely symbolic, fulfilling an Obama campaign promise to codify net neutrality and moving the noisy and messy proceeding from the agency to the courts.

The real issue here is convergence.  In 1996, when the Communications Act was last overhauled, the commercial use of the Internet was in its infancy.  Broadcast TV, radio, telephone, cable, mobile and data services were still separate technologies, and the 1996 Act gave the FCC separate and different authority over each.  For the Internet, the agency got next to no authority.

In the 15 years since President Clinton signed the 1996 act, of course, the world of communications has been revolutionized by the Internet and broadband.  The FCC’s traditional regulatory subjects have largely converged onto the TCP/IP protocol, generating a flowering of innovation and new devices and services.  Cable providers are phone companies, phone companies are content providers, and computer companies such as Apple and Google are, well, everything.

Consumers are living in a golden age of communications, but the agency has been left with little to oversee.  Wireline voice has become an unprofitable and shrinking business as consumers cut the cord.  The audience for broadcast TV is getting older and smaller at a rapid pace.  This term, the Supreme Court is likely to slap the agency again for its Victorian sensibilities with regard to TV and radio content censorship.

Perhaps Congress will someday decide that broadband services require the kind of oversight and micromanagement the FCC once had over traditional forms of communication.  Then again, wiser heads may take note of the success of the Internet in a world without much regulation, and decide to leave well enough alone.  Perhaps a great overhaul of communications law will clear the decks altogether, creating a single body of law for the converged industries.

Who can say?  But in the meantime, the FCC can’t simply grant itself new authority to regulate.  Regardless of the sincerity of its belief that “prophylactic” rules to preserve the Open Internet are important, federal agencies can’t regulate without Congressional authorization.  Whether in the courts or in Congress itself, the net neutrality rules will be struck down, first and foremost because the FCC had no power to enact them.

]]>
https://techliberation.com/2011/10/04/net-neutrality-goes-to-court-again/feed/ 1 38525
Brown v EMA and net neutrality? https://techliberation.com/2011/06/28/brown-v-ema-and-net-neutrality/ https://techliberation.com/2011/06/28/brown-v-ema-and-net-neutrality/#comments Tue, 28 Jun 2011 05:10:22 +0000 http://techliberation.com/?p=37539

John Perry Barlow famously said that in cyberspace, the First Amendment is just a local ordinance.  That’s still true, of course, and worth remembering.  But at least today there is good news in the shire.  The local ordinance still applies with full force, if only locally.

As I write in CNET this evening (see “Video Games Given Full First Amendment Protection“), the U.S. Supreme Court issued a strong and clear opinion today nullifying California’s 2005 law prohibiting the sale or rental to minors of what the state deemed “violent video games.”

The 7-2 decision in Brown v. EMA follows last week’s decision in Sorrell, which also addressed the role of the First Amendment in the digital economy.  Sorrell dealt with a Vermont law that banned data mining of pharmacy information.  That use of the data, the Court said, was also protected speech.

The CNET article is quite long (duh), and I’ll let it speak for itself.  There is also excellent commentary on both decisions from Adam Thierer and Berin Szoka here at the Technology Liberation Front.  Adam and Berin submitted an amicus brief in the EMA case that closely tracked the Court’s opinion, which in fact quoted from another amicus brief from the Cato Institute.  Berin also contributed a brief in the Sorrell case, again on the winning side.

Perhaps the most interesting commentary on today’s decision, however, comes from Prof. Susan Crawford.  Prof. Crawford’s blog on EMA notes that an important feature of the majority decision (written by Justice Scalia and joined by Justices Kennedy, Ginsburg, Sotomayor and Kagan) is what she calls the “absolute” view it takes of speech.  Crawford writes of Scalia’s opinion:

“Whether government regulation applies to creating, distributing, or consuming speech makes no difference,” he says in response to Justice Alito’s attempt to say that sale/rental is different from “creation” or “possession” of particular speech.

That view is absolute in the sense that it does not distinguish between different stages of the supply chain of information provisioning.  The “speaker,” for First Amendment purposes, is not only the author of the content, but also distributors, retailers, and consumers.  Each is equally protected by the First Amendment’s prohibition on government interference, whether that interference is a ban on certain content (violent video games) or a requirement to promote it (must-carry rules for cable).

Why does this matter?  Though I have written and testified extensively about the FCC’s December 2010 “Open Internet” order, I have so far avoided discussion of a possible First Amendment challenge.  Frankly, I hadn’t initially thought it to be the strongest available argument against the legality of the rules.

But Prof. Crawford, a strong advocate for “net neutrality” in general, reads EMA as adding support to such an argument:

Today’s opinion may further strengthen the carriers’ arguments that any nondiscrimination requirement imposed on them should be struck down.  Although a nondiscrimination requirement arguably promotes speech rather than proscribes it, the long-ago Turner case on “must-carry” obligations for cable already suggested that the valence of the requirement doesn’t really matter.

If challengers to the Open Internet order (which today added the State of Virginia to the list of those waiting in the wings to file lawsuits) can convince a court that rules requiring nondiscriminatory treatment of packets are effectively requiring carriers to speak, such a rule would be seen as content-based.  Under EMA and last year’s decision in Stevens, such a rule could fail a First Amendment challenge.

It’s an interesting argument, to say the least.  I think I’ll give it a little more thought.

]]>
https://techliberation.com/2011/06/28/brown-v-ema-and-net-neutrality/feed/ 1 37539
Worrying over Internet content wars: Protect IP and the nuclear option https://techliberation.com/2011/05/16/worrying-over-internet-content-wars-protect-ip-and-the-nuclear-option/ https://techliberation.com/2011/05/16/worrying-over-internet-content-wars-protect-ip-and-the-nuclear-option/#comments Mon, 16 May 2011 15:29:04 +0000 http://techliberation.com/?p=36820

I’ve written two articles on the Protect IP Act of 2011, introduced last week by Sen. Leahy (D-Vt.).

For CNET, I look at some of the key differences, better and worse, between Protect IP and its predecessor last year, known as COICA.

On Forbes this morning, I have a long meditation on what Protect IP says about the current state of the Internet content wars.  Copyright, patent, and trademark are under siege from digital technology, and for now at least are clearly losing the arms race.

The new bill isn’t exactly the nuclear option in the fight between the media industries and everyone else, but it does signal increased desperation.

I’m not exactly a non-combatant here.  Increasingly, everyone is being dragged into this fight, including search engines, ISPs, advertisers, financial transaction processors, and, if Protect IP is passed, anyone who uses a hyperlink.

But as someone who earns his living from information exchanges–what the law anachronistically calls “intellectual property”–I’m not exactly an anarchist either (or as one recent commenter on CNET called me, a complete anarchist!).

The development of an information economy will stabilize and mature at some point, and, I believe, the new supply chain will be richer, more profitable, and will give those who actually create new content a greater share of the value than the current one does.  (Most of the cost of information products and services today is eaten up by middlemen, media, and distribution.)

But it’s not an especially smooth or predictable trajectory.  Joseph Schumpeter didn’t call it creative destruction for nothing.


]]>
https://techliberation.com/2011/05/16/worrying-over-internet-content-wars-protect-ip-and-the-nuclear-option/feed/ 1 36820
Initial Thoughts about the Markey-Barton ‘Do Not Track Kids’ Bill https://techliberation.com/2011/05/06/initial-thoughts-about-the-markey-barton-do-not-track-kids-bill/ https://techliberation.com/2011/05/06/initial-thoughts-about-the-markey-barton-do-not-track-kids-bill/#comments Fri, 06 May 2011 19:50:43 +0000 http://techliberation.com/?p=36633

Reps. Edward Markey (D-Mass.) and Joe Barton (R-Texas) have released a discussion draft of their forthcoming “Do Not Track Kids Act of 2011.”  I’ve only had a chance to give it a quick read, but the bill, which is intended to help safeguard kids’ privacy online, has two major regulatory provisions of interest:

(1) New regulations aimed at limiting data collection about children and teens, including (a) an expansion of the Children’s Online Privacy Protection Act (COPPA) of 1998 that would build upon COPPA’s “verifiable parental consent” model; (b) a new “Digital Marketing Bill of Rights for Teens”; and (c) limits on the collection of geolocation information about both children and teens.

(2) An Internet “Eraser Button” for Kids to help kids wipe out embarrassing facts they have placed online but later come to regret.  Specifically, the bill would require online operators “to the extent technologically feasible, to implement mechanisms that permit users of the website, service, or application of the operator to erase or otherwise eliminate content that is publicly available through the website, service, or application and contains or displays personal information of children or minors.” This is loosely modeled on a similar idea currently being considered in the European Union, the so-called “right to be forgotten” online.

Both of these proposals were originally floated by the child safety group Common Sense Media (CSM) in a report released last December.  It’s understandable why some policymakers and child safety advocates like CSM would favor such steps. They fear that there is simply too much information about kids online today or that kids are voluntarily placing far too much personal information online that could come back to haunt them in the future. These are valid concerns, but there are both practical and principled reasons to be worried about the regulatory approach embodied in the Markey-Barton “Do Not Track Kids Act”:

  • It is very hard to imagine how most elements of this new “Do Not Track Kids” regulatory regime would work without requiring mandatory online age verification of all websurfers, which would raise serious constitutional issues. Previous efforts to age-verify websurfers (namely, The Child Online Protection Act or COPA) have been found to violate the First Amendment and also to raise different privacy concerns. By contrast, the Children’s Online Privacy Protection Act (COPPA) partially avoided this problem by limiting its coverage to kids 12 and under and did not mandate strict age verification. The Markey-Barton bill seems to imagine that the COPPA regime can simply be expanded without serious constitutional scrutiny (or economic cost, for that matter). The sponsors are wrong. Their bill puts COPPA on a collision course with COPA because it would necessitate expanded age verification in order to be effective.
  • An Internet “Eraser Button” is similarly challenged by practical realities and principled concerns. It’s unclear how to even enforce such a notion. Moreover, if it could be enforced, it would raise profound free speech issues since it is tantamount to digital censorship and specifically threatens press freedoms. And the economic costs of such a mandate — especially on smaller operators — could be quite significant. See my recent Forbes essay for a discussion of those problems.
  • Although some of the concerns that motivate the “Do Not Track Kids Act” are understandable, there are two very different models for how we might address these problems: ‘Legislate & Regulate’ vs. ‘Educate & Empower.’ The latter is the superior framework for dealing with these concerns in light of the practical and principled problems associated with the former.

I will expand upon these concerns in a follow-up post, but for now I would direct your attention to the 36-page white paper that Berin Szoka and I released two years ago on this topic: “COPPA 2.0: The New Battle over Privacy, Age Verification, Online Safety & Free Speech.” It explains why this issue is so complicated and raises so many constitutional red flags.


Additional Reading:

on COPA:

on Eraser Button:

]]>
https://techliberation.com/2011/05/06/initial-thoughts-about-the-markey-barton-do-not-track-kids-bill/feed/ 2 36633
The troubled history of the Global Network Initiative https://techliberation.com/2011/03/30/the-troubled-history-of-the-global-network-initiative/ https://techliberation.com/2011/03/30/the-troubled-history-of-the-global-network-initiative/#comments Wed, 30 Mar 2011 16:58:41 +0000 http://techliberation.com/?p=36031

I’ve posted a long article on Forbes.com this morning on the Global Network Initiative. A non-profit group aimed at improving human rights through the agency of information technology companies, GNI has never really gotten off the ground.

Since its formal launch in 2008, following two years of negotiations among tech companies, human rights groups and academics, not a single company has agreed to join beyond the original members–Google, Yahoo and Microsoft.

This despite considerable pressure from supporters of GNI, including Senator Richard Durbin (D-IL), Chair of the Senate Judiciary’s Subcommittee on Human Rights.  Indeed, in the wake of uprisings in Tunisia, Egypt, Libya and elsewhere and the seminal role played by social media and other IT, a full-court press has been launched against Facebook and Twitter in particular for failing to sign up.

The tone of the criticism hardly seems designed to encourage new members to join.  (In The Huffington Post, Amy Lee asks simply, “Why won’t Twitter and Facebook sign on for free speech on the Internet?”)

Why indeed.

The article reviews the troubled history of GNI and its complex, incomplete, and worrisome organizational structure, which gives considerable power to NGOs to shape the policies and practices of participating companies.  (That feature is especially worrisome, as many of the NGOs are traditional human rights organizations with little or no experience dealing with IT.)

Participating companies, among other commitments, must submit to bi-annual “assessments” of their compliance with GNI principles, conducted by assessors certified by GNI’s board.

Details aside, there is a more fundamental question worth asking here.  Why are technology companies being asked to influence (one might say interfere with) public policy and local laws of other countries?   GNI requires not only that participants resist efforts by repressive governments to censor content or to force disclosure of private information of their citizens, but also that they actively lobby these governments, to “engage government officials to promote the rule of law and the reform of laws, policies and practices that infringe on freedom of expression and privacy.”

Freedom of expression and privacy are worthwhile goals, but isn’t it the job of a country’s own citizens to petition their governments for change?  And if those citizens are suppressed, isn’t it the job of the global community, operating through political and trade organizations such as the U.N. and the WTO, to lobby for change?  Why is foreign policy being outsourced to Facebook and Twitter?

Perhaps it’s because national governments won’t do it.  But the demur by tech companies to take on the job is hardly a reason for Sen. Durbin to criticize and threaten them.  If he’s looking for someone to blame for the poor human rights record of some governments, perhaps he should look a little closer to home.

]]>
https://techliberation.com/2011/03/30/the-troubled-history-of-the-global-network-initiative/feed/ 7 36031
Senators Seek to Censor Mobile App Stores, Disregarding Public Safety and the Constitution https://techliberation.com/2011/03/25/senators-seek-to-censor-mobile-app-stores-disregarding-public-safety-and-the-constitution/ https://techliberation.com/2011/03/25/senators-seek-to-censor-mobile-app-stores-disregarding-public-safety-and-the-constitution/#comments Fri, 25 Mar 2011 20:18:00 +0000 http://techliberation.com/?p=35923

In the latest example of big government run amok, several politicians think they ought to be in charge of which applications you should be able to install on your smartphone.

On March 22, four U.S. Senators sent a letter to Apple, Google, and Research in Motion urging the companies to disable access to mobile device applications that enable users to locate DUI checkpoints in real time. Unsurprisingly, in their zeal to score political points, the Senators—Harry Reid, Chuck Schumer, Frank Lautenberg, and Tom Udall—got it dead wrong.

Had the Senators done some basic fact-checking before firing off their missive, they would have realized that the apps they targeted actually  enhance the effectiveness of DUI checkpoints while reducing their intrusiveness. And had the Senators glanced at the Constitution – you know, that document they swore an oath to support and defend – they would have seen that sobriety checkpoint apps are almost certainly protected by the First Amendment.

While Apple has stayed mum on the issue so far, Research in Motion quickly yanked the apps in question. This is understandable; perhaps RIM doesn’t wish to incur the wrath of powerful politicians who are notorious for making a public spectacle of going after companies that have the temerity to stand up for what is right.

Google has refused to pull the DUI checkpoint finder apps from the Android app store, reports Digital Trends. Google’s steadfastness on this matter reflects well on its stated commitment to free expression and openness. Not that Google’s track record is perfect on this front – it’s made mistakes from time to time – but it’s certainly a cut above several of its competitors when it comes to defending Internet freedom.

Advance Publicity & DUI Checkpoints

Trying to keep the locations of DUI checkpoints secret is bad public policy. Contrary to the Senators’ assertion that “applications that alert users to DUI checkpoints” are “harmful to public safety,” there is zero evidence that publicizing sobriety checkpoints contributes to drunk driving accidents.

If anything, advance publicity actually  saves lives. DUI checkpoints aren’t primarily about catching drunk drivers, but about deterring drunk driving in the first place. When drivers know that police have set up checkpoints nearby, they’re likely to think twice about getting behind the wheel. Instead, they might hail a cab or catch a ride from a sober friend.

The California Supreme Court recognized in Ingersoll v. Palmer that DUI checkpoints are designed to deter drunk driving:

The stated goals of several law enforcement agencies explicitly point to deterrence as a primary objective of the checkpoint program. The Burlingame manual described the objectives of its program, noting the historical use of roving patrols as the principal law enforcement response to the drunk driving problem… Two major goals of the checkpoint as stated in the manual were to increase public awareness of the seriousness of the problem and to increase the perceived risk of apprehension.

The  Ingersoll court further stated with regard to the checkpoints that, “advance publicity is important to the maintenance of a constitutionally permissible sobriety checkpoint. Publicity both reduces the intrusiveness of the stop and increases the deterrent effect of the roadblock.”

California is not alone in focusing on the deterrent effect of DUI checkpoints. In 1990, shortly after the U.S. Supreme Court upheld the constitutionality of certain kinds of DUI checkpoints in Michigan Department of State Police v. Sitz, the National Highway Traffic Safety Administration (NHTSA) published a document (PDF) laying out guidelines for police in conducting sobriety checkpoints. NHTSA’s model sobriety checkpoint guidelines include the following section:

C. ADVANCE NOTIFICATION

1. For the purpose of public information and education, this agency will announce to the media that checkpoints will be conducted.

2. This agency will encourage media interest in the sobriety checkpoint program to enhance public perception of aggressive enforcement, to heighten the deterrent effect and to assure protection of constitutional rights.

Indeed, police departments routinely publicize information about DUI checkpoints in local newspapers and other media outlets. Many police officers think such publicity is beneficial to law enforcement. Take Indiana State Police Sgt. Dave Burstein, who brushed off the Senators’ concerns about DUI checkpoint apps, saying to local news affiliate WXIN-TV, “Let everybody know they’re there because the whole idea is to get voluntary compliance.”

Regulation Through Intimidation

The Senators’ letter isn’t just uninformed and irresponsible, it’s also arrogant – a prime example of regulation through intimidation. When politicians want to dictate behavior but know they cannot lawfully legislate or regulate it, a widely favored tactic is to demonize the target by sending a threatening letter accompanied by a vitriolic press release. When that doesn’t get the job done, politicians hold congressional hearings to publicly rake the alleged wrongdoers over the coals. This reprehensible strategy has long been used to suppress constitutionally protected speech in ways that, if legislated, would almost certainly be overturned by courts on First Amendment grounds. As former U.S. Senator Paul Simon warned in 2003:

I have no problem with holding hearings and putting on pressure. But the problem with holding hearings and putting on pressure is that most of the members have no sensitivity on the First Amendment…The only oath we take says that we promise to support and defend the Constitution of the United States against all enemies, foreign and domestic. The domestic enemies of the Constitution are often on the floor of the House and the Senate.

In a free society, it is unacceptable for a handful of Senators to attempt to dictate mobile app store decisions without a floor vote or any judicial oversight. Lawmakers’ function is to make laws, not exploit their bully pulpit to try to coerce private businesses into doing their bidding. If voters let these politicians get away with going after DUI checkpoint apps, which politically unpopular apps will be next? A ban on apps that locate abortion clinics? A ban on apps that locate handgun dealers? It’s a scary slippery slope, as ACT’s Morgan Reed reminds us.

If Reid, Schumer, Lautenberg, and Udall want to examine a serious threat to public safety, they should look in the mirror. Meanwhile, they should leave mobile app stores alone. The Washington Times nailed it in a recent editorial:

Real drunk drivers deserve severe punishment, but the best way to catch them is to respect the Fourth Amendment. Instead of having cops stand around behind barricades interrogating soccer moms, have them patrol the streets looking for evidence of impaired driving. It works. In the meantime, high-tech companies ought to email these senators a free Constitution app for their smart phones.

Amen.

]]>
Doing Nothing to Save the Internet https://techliberation.com/2011/01/30/doing-nothing-to-save-the-internet/ https://techliberation.com/2011/01/30/doing-nothing-to-save-the-internet/#comments Mon, 31 Jan 2011 01:50:51 +0000 http://techliberation.com/?p=34782

My essay last week for Slate.com (the title I proposed is above, but it must have been too “punny” for the editors) generated a lot of feedback, for which I’m always grateful, even when it’s hostile and ad hominem.  Which much of it was.

The piece argues generally that when it comes to the Internet, a disruptive technology if ever there was one, the best course of action for traditional, terrestrial governments intent on “saving” or otherwise regulating digital life is to try as much as possible to restrain themselves.  Or as they say to new interns in the operating room, “Don’t just do something.  Stand there.”

This is not an argument in favor of anarchy, or even more generally for social Darwinism.  I have something much more practical in mind.  Disruptive technologies, by definition, do not operate within the "normal science" of those areas of life they impact. Their problems can't be solved by reference to existing systems and institutions. In the case of the Internet, that's pretty much all aspects of life, including regulation.

By design, modern democratic government is deliberative, incremental, and slow to change.  That is an appropriate model for regulating traditional areas including property, torts, criminal procedure, civil rights and business law.    But when applied to a new ecosystem—to a new frontier, as I suggest in the piece—that model doesn’t work.

Digital life is changing much faster than traditional regulators can hope to keep up with.  It isn’t just an interesting business use of information anymore, it’s a social phenomenon, one that has gone far beyond companies finding more effective ways to share data.  It’s also, increasingly, a global phenomenon, a poor match for local and even national lawmaking.

Digital life moves at the speed of Moore's Law, and that is the source of its true regulation.  The Internet—acting through its engineers, its users, and its enterprises—governs itself and, while far from perfect, certainly seems to be doing a better job than traditional governments in their traditional venues, let alone online.

The piece gives a short quote from Frederick Jackson Turner, the groundbreaking historian of the American West.  The full quote gives additional context to my frontier analogy:

The policy of the United States in dealing with its land was in sharp contrast with the European system of scientific administration.  Efforts to make this domain a source of revenue, and to withhold it from emigrants in order that settlement might be compact, were in vain.  The jealousy and fears of the East were powerless in the face of the demands of the frontiersman.  John Quincy Adams was obliged to confess:  “My own system of administration, which was to make the national domain the inexhaustible fund for progressive and unceasing internal improvement, has failed.”  The reason is obvious:  a system of administration was not what the West demanded:  it wanted land.

A few key points from this passage are worth highlighting:

1.      Parochialism – Traditional governments attempting to regulate new and disruptive technologies rarely have the best interests of the users in mind.  Instead, they try to exploit the new ecosystem, at best, as a stalking horse for regulation they could not get away with in traditional contexts but hope to foist off on the more poorly-organized inhabitants of the frontier.  At worst, governments captured by the vested interests most threatened by the disruption of the new technology attempt to slow down the pace of change, to preserve the interests of those in the process of being upended.

That's in part why the East's increasingly desperate efforts to impose its regulatory will on the West failed.  The East was interested in exploiting western lands for its own benefit, not optimizing the West's potential to create a new kind of society and economic system.  The East was working against the momentum of transformation.  It understood little of how frontier life was evolving, and its laws couldn't keep up with the pace of change even if they were enforceable, which they weren't.  Nor should they have been.

One need only look to one of the first U.S. efforts to regulate the Internet for an example of the first kind of lawmaking.  The Communications Decency Act, passed in 1996 and signed by President Clinton, banned classes of content on the Internet that were perfectly legal in the U.S. in any other media.  (Similar bans have been enacted, often with more bite or more focused morality, in other jurisdictions, including Thailand, Pakistan, China, and the E.U.)

That law, and subsequent efforts to impose an antediluvian morality on U.S. Internet users, were summarily tossed out by the U.S. Supreme Court as facial violations of the First Amendment.  Its passage inspired John Perry Barlow to issue his famous "Declaration of the Independence of Cyberspace," which pointed out correctly that traditional governments have anything but the best interests of this new environment in mind when they put pen to paper.

As an example of regulation to protect vested (and obsoleting) interests, consider the 1998 Digital Millennium Copyright Act, in which content owners unwilling or unable to adapt to the new physics of digital distribution convinced their lawmakers to impose brutally restrictive new limits on digital technologies.  They bought themselves far greater protection from reverse engineering, fair use, and the First Sale doctrine than they had achieved in the real world.

Whether those protections are enforceable, or whether content owners used the time the law bought them to get ready for a more orderly transition to digital life, remains to be seen.  But the prospects are predictably poor.  Just ask Pope Urban VIII, who banned Galileo's insistence that the Earth revolved around the Sun.  No matter how long Galileo stayed in prison, the orbits didn't change.

Indeed, it's hard without doing an exhaustive survey to think of a single piece of traditional law aimed at helping or saving the Internet that wasn't at best naïve and at worst intentionally harmful – including laws that grant law enforcement more powers online than they have in their native territory.  That's why I'm surprised when some of my fellow frontiersmen short-sightedly rush back to Washington at the first sign of trouble with Native populations, or with saloon-keepers, or with the railroads, or with any other participant in the ecosystem who isn't living up to their standards.  They should know that it's both dangerous and pointless to do so.

2.      Impotence – In some sense, in other words, it doesn't matter whether terrestrial governments regulate or not.  We have ample evidence – file-sharing, spam, political dissent, porn, gambling – that even those activities that have been banned go on without much regard for the legal consequences.  The government of Egypt (and Burma, and Pakistan, and China) can shut down Internet access for a short or for a long period of time.  But the disruption in service is a mere blink of the eye in Internet time.  Let's see who wins the stand-off that ensues, and how quickly the Law of Disruption takes hold.  Bets gladly accepted here.

As Barlow wrote in his Declaration, “You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”  Put another way, in nearly every conflict between Moore’s Law and traditional law, Moore’s Law wins.  Digital life will make its own “social contract” whether traditional governments give it permission to or not.

3.      Reverse engineering government – To repeat, the absence or ineffectiveness of traditional regulators in digital life does not translate to anarchy and chaos.  There is a social contract to online life, and it will be followed by more organized and organic forms of governance.  As I wrote in the piece, “the posse and the hanging tree gave way to local sheriffs and circuit-riding judges.”

That does not mean, however, that over time the old forms of government and regulation will finally win the battle and establish their norms on digital life.  Quite the opposite.  What has been and will continue to develop are forms of online governance that are suited to the unique environmental properties of digital life.

For now, we can already see that the new institutions will be more democratic – more directly democratic – for better and for worse.  (As Madison said, "Had every Athenian citizen been a Socrates, every Athenian assembly would still have been a mob.")  Watch how the users of Facebook, Twitter, YouTube, World of Warcraft, iTunes, and Android respond to efforts by the sovereigns of these domains to dictate the terms of the social contract, and you'll see how the new social contract is being worked out.

There's more.  Turner points out that the organic forms of governance that emerged from the American West didn't simply create a new form of frontier law; they created American law.  Once the global inhabitants of digital life work out their rules and enforcement mechanisms, in other words, they are unlikely to settle for a system any less efficient back on terra firma.  Turner writes, "Steadily, the frontier of settlement advanced and carried with it individualism, democracy, and nationalism, and powerfully affected the East and the Old World."

Who will impose their collective will on whom, and which form of government will become obsolete?  Again, anyone care to place a wager?

This is starting to sound like the outline of something much longer.  So I’ll stop there.

]]>
“Fake Neutrality” or Government Takeover?: Reading the FCC’s Net Neutrality Report (Part III) https://techliberation.com/2011/01/05/%e2%80%9cfake-neutrality%e2%80%9d-or-government-takeover-reading-the-fcc%e2%80%99s-net-neutrality-report-part-iii/ https://techliberation.com/2011/01/05/%e2%80%9cfake-neutrality%e2%80%9d-or-government-takeover-reading-the-fcc%e2%80%99s-net-neutrality-report-part-iii/#comments Wed, 05 Jan 2011 20:22:05 +0000 http://techliberation.com/?p=34135

In Part I of this analysis of the FCC's Report and Order on "Preserving the Open Internet," I reviewed the Commission's justification for regulating broadband providers.  In Part II, I looked at the likely costs of the order, in particular the hidden costs of enforcement.  In this part, I compare the text of the final rules with earlier versions.  Next, I'll look at some of the exceptions and caveats to the rules—and what they say about the true purpose of the regulations.

In the end, the FCC voted to approve three new rules that apply to broadband Internet providers.  The first (§8.3) requires broadband access providers to disclose their network management practices to consumers.  The second (§8.5) prohibits blocking of content, applications, services, and non-harmful devices.  The third (§8.7) forbids fixed broadband providers (cable and telephone, e.g.) from "unreasonable" discrimination in transmitting lawful network traffic to a consumer.

There has of course been a great deal of commentary and criticism of the final rules, much of it reaching fever pitch before the text was even made public.  At one extreme, advocates for stronger rules have rejected the new rules as meaningless, as "fake net neutrality," "not neutrality," or the latest evidence that the FCC has been captured by the industries it regulates.  At the other end, critics decry the new rules as a government takeover of the Internet, censorship, and a dangerous and unnecessary interference with a healthy digital economy.  (I agree with that last one.)

One thing that has not been seriously discussed, however, is just how little the final text differs from the rules originally proposed by the FCC in October, 2009.  Indeed, many of those critical of the weakness of the final rules seem to forget their enthusiasm for the initial draft, which in key respects has not changed at all in the intervening year of comments, conferences, hearings, and litigation.

The differences—significant and trivial—that have been made can largely be traced to comments the FCC received on the original draft, as well as interim proposals made by industry and Congress, particularly the framework offered by Verizon and Google in August and a bill circulated by Rep. Henry Waxman just before the mid-term elections.

1.      Transparency

Compare, for example, the final text of the transparency rule with the version first proposed by the FCC.

Subject to reasonable network management, a provider of broadband Internet access service must disclose such information as is reasonably required for users and content, application and service providers to enjoy the protections specified in this part. (Proposed)

A person engaged in the provision of broadband Internet access service shall publicly disclose accurate information regarding the network management practices, performance and commercial terms of its broadband Internet access service sufficient for consumers to make informed choices regarding use of such services and for content, application, service and device providers to develop, market and maintain Internet offerings. (Final)

The final rule is much stronger, and makes clearer what it is that must be disclosed.  It is also not subject to the limits of reasonable network management.  Rather than the vague requirement of the draft for disclosures sufficient to "enjoy the protections" of the open Internet rules, the final rule requires disclosures sufficient for consumers to make "informed choices" about the services they pay for, a standard more easily enforced.

By comparison, the final rule comes close to the version that appeared in draft legislation circulated but never introduced by Rep. Henry Waxman in October of 2010. It likewise reflects the key concepts in the Verizon-Google Legislative Framework Proposal from earlier in the year.

As the Report makes clear (¶¶ 53-61), the transparency rule has teeth.  Though the agency declines for now to make specific decisions about the contents of the disclosure and how it must be posted, the Report lays out a non-exhaustive list of nine major categories of disclosure, including network practices, performance characteristics, and commercial terms, that must be included.  It's hard to imagine a complying disclosure that will not run to several pages of very small text.

That generosity, of course, may be the rule’s undoing.  As anyone who has ever thrown away a required disclosure from a service provider (mortgage, bank, drug, electronic device, financial statement, privacy, etc.) knows full well, information “sufficient” to make an informed choice is far more information than any non-expert consumer could possibly absorb and evaluate, even if they wanted to.   The more information consumers are given, the less likely they’ll pay attention to any of it, including what may be important.

The FCC recognizes that risk but believes it has an answer.  "A key purpose of the transparency rule," the Commission notes (¶ 60), "is to enable third-party experts such as independent engineers and consumer watchdogs to monitor and evaluate network management practices, in order to surface concerns regarding potential open Internet violations."

Perhaps the agency has in mind here organizations like BITAG, which has been established by a wide coalition of participants in the Internet ecosystem to develop "consensus on broadband network management practices or other related technical issues."  As for consumer watchdogs, perhaps the agency imagines that some of the public interest groups who have most strenuously rallied for the rules will become responsible stewards of their implementation, trading the acid pens of political rhetoric for responsible analysis and advocacy to their members and other consumers.

We'll see.  I wish I shared the Commission's confidence that, "for a number of reasons" (none cited), "the costs of the disclosure rule we adopt today are outweighed by the benefits of empowering end users and edge providers to make informed choices…." (¶ 59).  But I don't.  Onward.

2.       Blocking

The final version of the blocking rule (§8.5) consolidates the original draft's separate rules covering content, applications and services, and devices.  The final rule states:

A person engaged in the provision of fixed broadband Internet access services, insofar as such person is so engaged, shall not block lawful content, applications, services or non-harmful devices, subject to reasonable network management.

A more limited rule applies to mobile broadband providers, who

[S]hall not block consumers from accessing lawful websites, subject to reasonable network management; nor shall such person block applications that compete with the provider's voice or video telephony services, subject to reasonable network management.

Much of the anguish over the final rules that has been published so far relates to a few of the limitations built into the blocking rule.  First, copyright-reform activists object to the word "lawful" appearing in the rule.  "Lawful" content, applications, and services do not include activities that constitute copyright and trademark infringement.  Therefore, the rule allows broadband providers to use whatever mechanisms they want (or may be required to use) to reduce or eliminate traffic that involves illegal file-sharing, spam, viruses and other malware, and the like.

A provider who blocks access to a site selling unlicensed products, in other words, is not violating the rules.  And as the agency finds it is "generally preferable to neither require nor encourage broadband providers to examine Internet traffic in order to discern which traffic is subject to the rules" (¶ 48), there will be a considerable margin of error given to providers who block sites, services, or applications which may include some legal components.

On this view, though the FCC otherwise contradicts it—see footnote 245 and elsewhere—a complete ban on the BitTorrent protocol, for better or worse, might not be a violation of the blocking rule.  Academic studies have shown that over 99% of BitTorrent traffic constitutes unlicensed file sharing of protected content.  Other than inspecting individual torrents, which the agency disfavors, how else can an access provider determine what tiny minority of BitTorrent traffic is in fact lawful?

A second concern is the repeated caveat for “reasonable network management,” which gives access providers leeway to balance traffic during peak times, limit users whose activity may be harming other users, and other “legitimate network management” purposes.

Finally, disappointed advocates object to the special treatment for mobile broadband, which may, for example, block applications, services or devices without violating the rule.  There is an exception to the exception for applications, such as VoIP and web video, that compete with the provider’s own offerings, but that special treatment doesn’t keep mobile providers from using “app stores” to exclude services they don’t approve.  (See ¶ 102)

Of course even the original draft of the rules included the limitation for "reasonable network management," and refused to apply any of the rules to unlawful activities.  The definition of "reasonable network management" in the original draft is different from, but functionally equivalent to, the final version.

The carve-out for mobile broadband, however, is indeed a departure from the original rules.  Though the Oct. 2009 Notice of Proposed Rulemaking expressed concern about applying the same rule to fixed and mobile broadband (see ¶¶ 13, 154-174), the draft blocking rule did not distinguish between fixed and mobile Internet access.  The FCC did note, however, that different technologies "may require differences in how, to what extent, and when the principles apply."  The agency sought comment on these differences (and asked for further comment in a later Notice of Inquiry).  Needless to say, it heard plenty.

Wireless broadband is, of course, a newer technology, and one still very much in development.  Spectrum is limited, and capacity cannot easily be added.  Those are not so much market failures as they are regulatory failures.  The FCC is itself responsible for managing the limited radio spectrum, and has struggled by its own admission to allocate spectrum for its most efficient and productive uses—indeed, even to develop a complete inventory of who has which frequencies of licensed spectrum today.

Adding capacity runs into another regulatory obstacle.  Though mobile users rail against their providers for inadequate or unreliable coverage, no one, it seems, wants cellular towers and other equipment near where they live.  Local regulators, who must approve new infrastructure investments, take such concerns very much to heart.  (There is also rampant corruption and waste in the application, franchising, and oversight processes at the state and local levels, a not-very-secret secret.)

The FCC, it seems, has taken these concerns into account in the final rule.  Its original open Internet policy statements—from which the rules derive—applied only to fixed broadband access, and the October, 2009 draft’s inclusion of mobile broadband came as a surprise to many.

The first indication that the agency was considering a return to the original open Internet policy came with the Verizon-Google proposal, where the former net neutrality adversaries jointly released a legislative framework (that is, something they hoped Congress, not the FCC, would take seriously) that gave different treatment to mobile.  As the V-G proposal noted, “Because of the unique technical and operational characteristics of wireless networks, and the competitive and still-developing nature of wireless broadband services, only the transparency principle would apply to wireless at this time.”

The Waxman proposal didn't go as far as V-G in exempting mobile broadband, however, instead adding a provision that closely tracks the final rule.  Under the Waxman bill, mobile providers would have been prohibited from blocking "lawful Internet websites" and applications "that compete with the providers' voice or video communications services."

So the trajectory of the specialized treatment for mobile broadband is at least clear and, for those following the drama, entirely predictable.  Yet the strongest objections to the final rule and the loudest cries of betrayal from neutrality advocates came from the decision to burden mobile providers less than their fixed counterparts.  (Many providers offer both, of course, so will be subject to different rules for different parts of their service.)

At the very least, the advocates should have seen it coming.  Many did.  A number of “advocacy” groups demonized Google for its cooperation with Verizon, and refused to support Waxman’s bill.  (It should also be noted that none of the groups objecting to the final rules or any interim version ever actually proposed their own version—that is, what they actually wanted as opposed to what they didn’t want.)

3.      Unreasonable discrimination

The final rule, applicable only to fixed broadband providers, demands that a provider not “unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service.”  (§ 8.7, and see ¶¶ 68-79 of the Report).

Though subtle, the differences in language between the NPRM and the final rule are significant, as the FCC acknowledges.  The NPRM draft rule stated plainly that "a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner."

The difference here is between “nondiscrimination,” which prohibits all forms of differential network treatment, and “unreasonable discrimination,” which allows discrimination so long as it is reasonable.

The migration from a strict nondiscrimination rule (subject, however, to reasonable network management) to a rule against “unreasonable” discrimination can be seen in the interim documents.  The Verizon-Google proposal, which called for a “Non-Discrimination Requirement,” nonetheless worded the requirement to ban only “undue discrimination against lawful Internet content, application, or service in a manner that causes meaningful harm to competition or to users.” (emphasis added)

Rep. Waxman’s draft bill, likewise, would have applied a somewhat different standard for wireline providers, who “shall not unjustly or unreasonably discriminate in transmitting lawful traffic over a consumer’s wireline broadband Internet access service,” also subject to reasonable network management.

Over time, the FCC recognized the error of its original draft and now agrees “with the diverse group of commenters who argue that any nondiscrimination rule should prohibit only unreasonable discrimination.” (¶ 77)

As between the suggested limiting terms “undue,” “unjust” and “unreasonable,” the FCC chose the latter for the final rule.  Though many have complained that “unreasonable” is a nebulous, subjective term, it should be noted that of the three it is the only one with understood (if not entirely clear) legal meaning, particularly in the context of the FCC’s long history of rulemaking and adjudication.

The earliest railroad regulations, for example, which also laid the foundation for the FCC's eventual creation and its authority over communications industries, required reasonable rates of carriage, and empowered the Interstate Commerce Commission to intervene and eventually set the rates itself, much as the FCC later did with telephony.

One lesson of the railroad and telephone histories, however, is the danger of turning over to regulators decisions about what behaviors are reasonable. (Briefly, regulatory capture often ends up leaving the industry unable to respond to new forms of competition from disruptive technologies, with disastrous consequences.)

The V-G proposal gets to the heart of the problem in the text I italicized.  Despite the negative connotations of the word in common use, “discrimination” isn’t inherently bad. As the Report makes clear, in managing Internet access and network traffic, there are many forms of discrimination—which means, after all, affording different treatment to different things—that are entirely beneficial to overall network behavior and to the consumer’s experience with the Internet.

The draft rule, as the FCC now admits (see ¶ 77 of the Report), was dangerously rigid.  If any behavior should be regulated, it is the kind of discrimination whose principal purpose is to harm competition or users—though that kind of behavior is already illegal under various antitrust laws.

For one thing, users may want some kinds of traffic – e.g., voice and video – to receive higher priority than text and graphics, which do not suffer from latency problems.  Companies operating Virtual Private Networks for their employees may likewise want to limit Web access to selected sites and activities for workers while on the job.
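The benign prioritization described above can be sketched in a few lines.  This toy strict-priority scheduler is purely illustrative (the traffic classes and their rankings are hypothetical, not any provider's actual practice): latency-sensitive packets are serviced before bulk transfers, a form of "discrimination" that helps rather than harms users.

```python
import heapq

# Hypothetical class rankings; lower number = served first.
PRIORITY = {"voice": 0, "video": 1, "web": 2, "bulk": 3}

class PriorityScheduler:
    """Toy strict-priority queue: not a real traffic-management system."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue("bulk", "torrent-chunk")
sched.enqueue("voice", "voip-frame")
sched.enqueue("web", "http-response")

# The voice frame is transmitted first despite arriving second.
print([sched.dequeue() for _ in range(3)])
# -> ['voip-frame', 'http-response', 'torrent-chunk']
```

A strict nondiscrimination mandate would treat even this kind of latency-aware scheduling as suspect, which is precisely the rigidity the final rule's "unreasonable" qualifier avoids.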

A strict nondiscrimination rule would have also discouraged or perhaps banned tiered pricing, harming consumers who do not need the fastest speeds and the highest volume of downloads to accomplish what they want to do online.  (Without tiered pricing, such consumers effectively subsidize power users who, not surprisingly, are the most vociferous objectors to tiered pricing.)
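The cross-subsidy point is simple arithmetic.  A quick sketch with invented numbers (the $50 flat rate and the usage figures are hypothetical, chosen only to illustrate the mechanism):

```python
# Hypothetical figures, for illustration only.
flat_price = 50.0                      # dollars/month, same for everyone
light_gb, heavy_gb = 10, 500           # monthly usage in gigabytes

light_per_gb = flat_price / light_gb   # $5.00 per GB
heavy_per_gb = flat_price / heavy_gb   # $0.10 per GB

# Under flat pricing, the light user pays 50x more per gigabyte,
# effectively subsidizing the heavy user's consumption.
print(round(light_per_gb / heavy_per_gb))  # -> 50
```

Tiered pricing narrows that gap by letting light users buy less and pay less, which is why banning it would have hurt exactly the consumers the rules claim to protect.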

Discrimination may also be necessary to manage congestion during peak usage periods or when failing nodes put pressure on the backbone.  Discrimination against spam, viruses and other malware, much of which is not “lawful,” is also permitted and indeed encouraged.  (See ¶ 90-92.)

By comparison, the Report notes (¶ 75) three types of provider discrimination that are of particular concern.  These are: discrimination that harms competitors (e.g., VoIP providers of over-the-top telephone service, such as Skype or Vonage, that compete with the provider's own telephone service), discrimination "inhibiting" end users from accessing content, services, and applications of their choice (but see the no-blocking rule, above, which already covers this), and discrimination that "impairs free expression," including slowing or blocking access to a blog whose message the broadband provider does not approve.

On that last point, however, it’s important to note that Congress has already given broadband providers (and others) broad freedom to filter and otherwise curate content they do not approve of or which they believe their customers don’t want to see.  Under Section 230 of the Communications Decency Act,

“No provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

The goal of Section 230 was to immunize early Internet providers, including CompuServe and Prodigy, from liability for their efforts to exercise editorial control over message boards whose content was provided by customers themselves.  But it gives providers broad discretion in determining what kind of content they believe their customers don't want to see.  So long as the filtering is undertaken in "good faith" (e.g., not with the intent of harming a competitor), there is no liability for the provider, who does not, for example, become a "publisher" for purposes of defamation law.

The FCC (¶ 89) acknowledges the limit that Section 230 puts on the discrimination rule.

On the harm-to-competitors prong, the FCC waffles (see ¶ 76) on whether "pay for priority"—the bugaboo that launched the neutrality offensive in the first place—actually constitutes a violation of the rules.  While a broadband provider's offering to prioritize the traffic of a particular source for a premium fee "would raise significant cause for concern," the agency acknowledges that such behavior has occurred and thrived for years in the form of third-party Content Delivery Networks.  (See footnote 236)  CDNs are allowed.  (More on CDNs in the next post.)

So in the end the discrimination rule doesn’t appear to add much to the blocking rule or existing antitrust laws.  Discrimination against competing over-the-top providers would violate antitrust.  Blocking or slowing access to disfavored content is already subject to the blocking rule.  And interfering with “free expression” rights of users is already significantly allowed by Section 230.

What’s left?   “The rule rests on the general proposition,” the agency concludes (¶ 78), “that broadband providers should not pick winners and losers on the Internet,” even when doing so is independent of competitive interests.  What exactly this means—and how “reasonable” discrimination will be judged in the course of enforcing the rules—remains to be seen.

Next:  The exceptions and what they say about the real purpose of the rules

]]>
Chairman Genachowski and his Howling Commissioners: Reading the Net Neutrality Order (Part I) https://techliberation.com/2010/12/30/chairman-genachowski-and-his-howling-commissioners-reading-the-net-neutrality-order-part-i/ https://techliberation.com/2010/12/30/chairman-genachowski-and-his-howling-commissioners-reading-the-net-neutrality-order-part-i/#comments Thu, 30 Dec 2010 22:27:41 +0000 http://techliberation.com/?p=33907

At the last possible moment before the Christmas holiday, the FCC published its Report and Order on “Preserving the Open Internet,” capping off years of largely content-free “debate” on the subject of whether or not the agency needed to step in to save the Internet.

In the end, only FCC Chairman Julius Genachowski fully supported the final solution.  His two Democratic colleagues concurred in the vote (one approved in part and concurred in part), and issued separate opinions indicating their belief that stronger measures and a sounder legal foundation were required to withstand likely court challenges.  The two Republican Commissioners vigorously dissented, which is not the norm in this kind of regulatory action.  Independent regulatory agencies, like the U.S. Courts of Appeal, strive for and generally achieve consensus in their decisions.

So for now we have a set of “net neutrality” rules that a bi-partisan majority of the last Congress, along with industry groups and academics, strongly urged the agency not to adopt, and which were deemed unsatisfactory by four of the five Commissioners.  It’s hardly a moment of pride for the agency, which has been distracted by the noise around these proceedings since Genachowski was first confirmed by the Senate.  Important work freeing up radio spectrum for wireless Internet, reforming the corrupt Universal Service Fund, and promoting the moribund National Broadband Plan have all been sidelined.

How did we get here?  In October 2009, the agency first proposed new rules, but its efforts were sidetracked by an April 2010 court decision that held the agency lacked authority to regulate broadband Internet.  After flirting with the dangerous (and likely illegal) idea of “reclassifying” broadband to bring it under the old telephone rules, sanity seemed to return.  Speaking to state regulators in mid-November, the Chairman made no mention of net neutrality or reclassification, saying instead that “At the FCC, our primary focus is simple: the economy and jobs.”

Just a few days later, at a Silicon Valley event, the Chairman seemed to reverse course, promising that net neutrality rules would be finalized.  He also complimented the “very smart lawyers” in his employ who had figured out a way to do it without the authorization of Congress, which has consistently failed to pass enabling legislation since the idea first surfaced in 2003.  (Most recently, Democratic Congressman Henry Waxman floated a targeted net neutrality bill days before the mid-term elections, but never introduced it.)

From then until the Commission’s final meeting before the new Congress comes to town in January, Commissioners and agency watchers lobbied hard and feigned outrage over the most recent version of the rules, which the agency did not make public until after the final vote was taken on Dec. 21.  In oral comments delivered at the December meeting, two commissioners complained that they hadn’t seen the version they were to vote on until midnight the night before the vote.  Journalists covering the event didn’t have the document all five Commissioners referenced repeatedly in their spoken comments, and had to wait two more days for all the separate opinions to be collated.

Why the Midnight Order?  FCC Commissioners do not serve at the whim of Congress or the President, so the mid-term election results technically had no effect on the chances of agency action.  Chairman Genachowski has had the votes to approve pretty much anything he wants to all along, and will for the remainder of his term.

Even with a Republican House, legislation to block or overturn FCC actions is unlikely.  The Republicans would have to get Democratic support in the Senate, and perhaps overcome a Presidential veto.

But Republicans could use net neutrality as a bargaining chip in future negotiations, and the House can make life difficult for the agency by holding up its budget or by increasing its oversight, forcing the Chairman to testify and respond to written requests often enough to tie the agency in knots.

So doing something while Congress was nearly adjourned and too busy to do much but bluster was perhaps the best chance the Chairman had of getting something, anything, into the Federal Register.

More likely, the agency was simply punting the problem.  Tired of the rancor and distraction of net neutrality, the new rules—incomplete, awkward, and without a solid legal foundation—move the issue from the offices of the FCC to the courts and Congress.  That will still tie up agency resources and waste even more taxpayer money, of course, but now the pressure of industry and “consumer advocate” groups will change its focus.  Perhaps this was the only chance the Chairman had of getting any real work done.

The Report and Order

Too much ink has already been spilled on both the substance and the process of this order, but there are a few tidbits from the documents that are worth calling out.  In this post, I look at the basis for issuing what the agency itself calls “prophylactic rules.”  In subsequent posts, I’ll look at the final text of the rules themselves and compare them to the initial draft, as well as to alternatives offered by Verizon and Google and Congressman Waxman.  Another post will review the legal basis on which the rules are being issued, and likely legal challenges to the agency’s authority.  I’ll also examine the FCC’s proposed approach to enforcement of the rules.

“Prophylactic” Rules

Even the FCC acknowledges that the “problem” these new rules solve doesn’t actually exist…yet.  The rules are characterized as “prophylactic” rules—a phrase that appears eleven times in the 87-page report.  The report fears that the lack of robust broadband competition in much of the U.S. (how many sets of redundant broadband infrastructure do consumer advocates want companies to build out, anyway?) could lead to ISPs using their market influence to squeeze content providers, consumers, or both.

This hasn’t happened in the ten years broadband Internet has been growing in both capability and adoption, of course, but still, there’s a chance.  As the report (¶ 21) puts it in challenged grammar, “broadband providers potentially face at least three types of incentives to reduce the current openness of the Internet.”

We’ll leave to the side for now the undiscussed potential that these new rules will themselves cause unintended negative consequences for the future development or deployment of technologies built on top of the open Internet.  Instead, let’s look at the sum total of the FCC’s evidence, collected over the course of more than a year with the help of advocates who believe the “Internet as we know it” is at death’s door, that broadband providers are lined up to destroy the technology that, ironically, is the source of their revenue.

To prove that these “potential” incentives are not “speculative or merely theoretical,” the FCC cites precisely four examples between 2005 and 2010 in which it believes broadband providers have threatened the open Internet (¶ 35).  These are:

1. A local ISP that was “a subsidiary of a telephone company” settled claims it had interfered with Voice over Internet Protocol (VoIP) applications used by its customers.

2. Comcast agreed to change its network management techniques when the company was caught slowing or blocking packets using the BitTorrent protocol (the subject of the 2010 court decision holding the agency lacked jurisdiction over broadband Internet).

3. After a mobile wireless provider contracted with an online payment service, the provider “allegedly” blocked customers’ attempts to use competing services to pay for purchases made with mobile devices.

4. AT&T initially restricted the types of applications—including VoIP and Slingbox—that customers could use on their Apple iPhone.

In the world of regulatory efficiency, this much attention being focused on just four incidents of potential or “alleged” market failures is a remarkable achievement indeed.  (Imagine if the EPA, FDA, or OSHA reacted with such energy to the same level of consumer harm.)

But in legal parlance, regulating on such a microscopically thin basis goes well beyond mere “pretense”: it’s downright embarrassing that the agency couldn’t come up with more to justify its actions.  Of the incidents, (1) and (2) were resolved quickly through existing agency authority; (3) was merely alleged and apparently never even led to a complaint filed with the FCC (the footnote here is to comments filed by the ACLU, so it’s unclear who is being referenced); and (4) was resolved, as the FCC acknowledges, when customers pressured Apple and AT&T, the iPhone’s exclusive carrier, to allow the applications.

Even under the rules adopted, (2) would almost surely still be allowed.  The Comcast case involved use of the BitTorrent protocol.  Academic studies performed since 2008 (even as the protocol has been put to more legal uses) find that over 99% of BitTorrent traffic still involves unlicensed copyright infringement.  Thus the vast majority of the traffic involved is not “lawful” traffic and, therefore, is not subject to the rules.  The no-blocking rule (§ 8.5) only prohibits blocking of “lawful content, applications, services or non-harmful devices.”  (emphasis added)

Indeed, the FCC encourages network providers to move more aggressively to block customers who use the Internet to violate intellectual property law.  In ¶ 111, the Report makes crystal clear that the new rules “do not prohibit broadband providers from making reasonable efforts to address the transfer of unlawful content or unlawful transfers of content . . . open Internet rules should not be invoked to protect copyright infringement . . . .”  (Perhaps the FCC, which continues to refer to BitTorrent as an “application” or believes it to be a website, simply doesn’t understand how the BitTorrent protocol actually works.)

Under the more limited wireless rules adopted, (3) and (4) would probably still be allowed as well.  We don’t know enough about (3) to really understand what is “alleged” to have happened, but the no-blocking rule (§ 8.5) says only that mobile broadband Internet providers “shall not block consumers from accessing lawful websites, subject to reasonable network management; nor shall such person block applications that compete with the provider’s voice or video telephony service, subject to reasonable network management.”

A mobile payment application wouldn’t seem to be included in that limitation, and in the case of the iPhone, it was Apple, not AT&T, that wanted to limit VoIP.

Even so, the Report makes clear that the wireless rule (¶ 102) doesn’t apply to app stores: “The prohibition on blocking applications that compete with a broadband provider’s voice or video telephony services does not apply to a broadband provider’s operation of application stores or their functional equivalent.”  So if the software involved in incidents (3) and (4) involved rejection of proposed apps for the respective mobile devices, there would still be no violation under the new rules.

And the caveat for “reasonable network management” (§ 8.11(d)) says only that a practice is “reasonable if it is appropriate and tailored to achieving a legitimate network purpose, taking into account the particular network architecture of the broadband Internet access service.”  Voice and video apps, depending on how they have been implemented, can put particular strain on a wireless broadband network.  Blocking particular VoIP apps or apps like Slingbox might be allowed, in other words.

So that’s it.  Only four or fewer actual examples of non-open behavior by ISPs in ten years.  And the rules adopted to curb such behavior would probably only apply, at best, to the single case of Madison River (1), a local telephone carrier with six hundred employees, in a case the FCC agreed to drop without a formal finding of any kind nearly six years ago.

But maybe these aren’t the real problems.  Maybe the real problem is, as many regulatory advocates argue vaguely, the lack of “competition” for broadband.  Since the first deployment of high-speed Internet, multiple technologies have been used to deliver access to consumers, including DSL (copper), coaxial cable, satellite, cellular (3G and now 4G), wireless (WiFi and WiMax), and broadband over power lines.  According to the National Broadband Plan, 4% of the U.S. population still doesn’t have access to any of these alternatives.  In many parts of the country, only two providers are available and in others, the offered speeds of alternatives vary greatly, leaving high-bandwidth users without effective alternatives.

If lack of competition is the problem, though, why not solve that problem?  Well, perhaps the FCC would rather sidestep the issue, since it has demonstrated it is the wrong agency to encourage more competition.  The FCC, for example, has supported legal claims by states that they can prohibit municipalities from offering wireless service, and has dragged its feet on approving trials for broadband over power lines—the best hope for much of the 4% who today have no broadband option, most of whom live in rural areas which already have power line infrastructure.

Indeed, if there are anti-competitive behaviors now or in the future, existing antitrust law, enforceable by either the Department of Justice or the Federal Trade Commission, provides much more powerful tools both to prosecute and to remedy activities that genuinely harm consumers.

It’s hard, by comparison, to find many examples in the long history of the FCC where it has used its sometimes vast authority to solve a genuine problem.  The Carterfone decision, which Commissioner Copps cites enthusiastically in his concurrence, and (finally) the opening of long distance telephony to competition, certainly helped consumers.  But both (and other examples) could also be seen as undoing harm caused by the agency in the first place.  And both dealt with technologies and applications that were mature.  Why does anyone believe the FCC can “prophylactically” solve a problem dealing with an emerging, rapidly-evolving new technology that has thrived in the last decade in part because it was unregulated?

The new rules, which are aimed at ensuring “edge” providers do not need to get “permission to innovate” from ISPs, may have the unintended effect of requiring ISPs (and edge providers) to get “permission to innovate” from the FCC.  That hardly seems like a risk worth taking for a problem that hasn’t presented itself.

]]>
Domain Name Seizures and the “Limits” of Civil Forfeiture https://techliberation.com/2010/11/29/domain-name-seizures-and-the-limits-of-civil-forfeiture/ https://techliberation.com/2010/11/29/domain-name-seizures-and-the-limits-of-civil-forfeiture/#comments Mon, 29 Nov 2010 21:33:20 +0000 http://techliberation.com/?p=33259

I was quoted this morning in Sara Jerome’s story for The Hill on the weekend seizures of domain names the government believes are selling black market, counterfeit, or copyright infringing goods.

The seizures take place in the context of an on-going investigation where prosecutors make purchases from the sites and then determine that the goods violate trademarks or copyrights or both.

Several reports, including from CNET, The Washington Post and Techdirt, wonder how it is the government can seize a domain name without a trial and, indeed, without even giving notice to the registered owners.

The short answer is the federal civil forfeiture law, which has been the subject of increasing criticism unrelated to Internet issues.  (See http://law.jrank.org/pages/1231/Forfeiture-Constitutional-challenges.html for a good synopsis of recent challenges, most of which fail.)

The purpose of forfeiture laws is to help prosecutors fit the punishment to the crime, especially when restitution of the victims or of the cost of prosecution is otherwise unlikely to have a deterrent effect, largely because the criminal has no assets to attach.  In the war on drugs, for example, prosecutors can now seize pretty much any property used in the commission of the crime, including a seller’s vehicle or boat.  (See U.S. v. 1990 Toyota 4 Runner for an example and explanation of the limits of federal forfeiture law.)

Forfeiture laws have been increasingly used to fund large-scale enforcement operations, and many local and federal police now develop budgets for these activities based on assumptions about the value of seized property.  This has led to criticism that the police are increasingly only enforcing the law when doing so is “profitable.”  But police point out that in an age of regular budget cuts, forfeiture laws are all they have in the way of leverage.

Sometimes the forfeiture proceedings happen after the trial, but as with the domain names, prosecutors also have the option to seize property before any indictment and well before any trial or conviction.  Like a search warrant, a warrant to seize property requires only that a judge find probable cause that the items to be seized fit the requirements of forfeiture—in general, that they were used in the commission of a crime.

The important difference between a seizure and a finding of guilt—the difference that allows the government to operate with such a free hand—is that the seizure is only temporary.  A forfeiture, as here, isn’t permanent until there is a final conviction.

The pre-trial seizure is premised on the idea that during the investigation and trial, prosecutors need to secure the items so that the defendant doesn’t destroy or hide them.

If the defendant is acquitted, the seized items are returned.  Or, if the items turn out not to be subject to forfeiture (e.g., they were not used in the commission of any crimes the defendant is ultimately convicted for), they are again returned.  Even before trial, owners can sue to quash the seizure order on the grounds that there was insufficient (that is, less than probable) cause to seize it in the first place.

All of that process takes time and money, however, and many legal scholars believe that, in practice, forfeiture reverses the presumption of innocence, forcing the property owner to prove the property is “innocent” in some way.

In current (and expanding) usage, forfeiture may also work to short-circuit the due process rights of the property owner.  (Or owners; indeed, seized property may be jointly owned, and the victim of the crime may be one of the owners, as when the family car is seized because the husband used it to liaise with a prostitute.)

That’s clearly a concern with the seizure of domain names.  This “property” is essential for the enterprise being investigated to do business of any kind.  So seizing the domain names before indictment and trial effectively shuts down the enterprise indefinitely.  (Reports are, however, that most if not all of the enterprises involved in this weekend’s raid have returned under new domain names.)

If prosecutors drag their heels on prosecution, the defendant gets “punished” anyway.  So even if the defendant is never charged or is ultimately acquitted, there’s nothing in the forfeiture statute that requires the government to make them whole for the losses suffered during the period when their property was held by the prosecution.  The loss of the use of a car or boat, for example, may require the defendant to rent another while waiting for the wheels of justice to turn.

For a domain name, even a short seizure effectively erases any value the asset has.  Even if ultimately returned, it’s now worthless.

Clearly the prosecutors here understand that a pre-trial seizure is effectively a conviction.  Consider the following quote from Immigration and Customs Enforcement Director John Morton, who said at a press conference today, “Counterfeiters are prowling in the back alleys of the Internet, masquerading, duping and stealing.”  Or consider the wording of the announcement placed on seized domain names (see http://news.cnet.com/8301-1023_3-20023918-93.html), implying at the least that the sites were guilty of illegal acts.

There’s no requirement for the government to explain the seizures are only temporary measures designed to safeguard property that may be evidence of crime or may be an asset used to commit it.  Nor do they have to acknowledge that none of the owners of the domain names seized has been charged or convicted of any crime yet.  But the farther prosecutors push the forfeiture statute, the bigger the risk that courts or Congress will someday step in to pull them back.

]]>
Europe Reimagines Orwell’s Memory Hole https://techliberation.com/2010/11/16/europe-reimagines-orwells-memory-hole/ https://techliberation.com/2010/11/16/europe-reimagines-orwells-memory-hole/#comments Tue, 16 Nov 2010 19:48:21 +0000 http://techliberation.com/?p=33047

Inspired by thoughtful pieces by Mike Masnick on Techdirt and L. Gordon Crovitz’s column yesterday in The Wall Street Journal, I wrote a perspective piece this morning for CNET regarding the European Commission’s recently proposed “right to be forgotten.”

A Nov. 4th report promises new legislation next year “clarifying” this right under EU law, suggesting not only that the Commission thinks it’s a good idea but, even more surprising, that it already exists under the landmark 1995 Privacy Directive.

What is the “right to be forgotten”?  The report is cryptic and awkward on this important point, describing “the so-called ‘right to be forgotten’, i.e. the right of individuals to have their data no longer processed and deleted when they [that is, the data] are no longer needed for legitimate purposes.”

The devil, of course, will be in the forthcoming details.  But it’s important to understand that under current EU law, the phrase “their data” doesn’t just mean information a user supplies to a website, social network, or email host.  Any information that refers to or identifies an individual is considered private information under the control of the person to whom it refers.  So “their data” means anyone’s data, even if the individual identified had nothing to do with its collection or storage.

And EU law doesn’t just limit privacy protections to computer data. Users have the right to control information about them appearing in printed and other analog formats as well.

As I say in the piece, the “right to be forgotten” begins to sound like Big Brother’s “memory hole” in Orwell’s classic 1984.  But instead of Winston Smith “rectifying” newspaper articles at the direction of his faceless masters at the Ministry of Truth, a right to be forgotten creates a kind of personal memory hole.  Something you did in the past that you would prefer never happened?  Just issue orders to anyone who knows about it, and force them to destroy any evidence.

Of course such a right would be as impractical to enforce as it is ill-conceived to grant.

Both Masnick and Crovitz, in particular, worry about the free speech implications of such a right, both for the press and for individuals.  And those are indeed potentially catastrophic.  Having the power to rewrite history devalues any information, including information that hasn’t been erased.

The social contract operates on facts and the ability to sort out truth from lie.  A right to be forgotten gives every individual the power to rewrite that contract whenever they feel like it.  So who would sensibly enter into such a relationship in the first place?

My concern, however, is even more metaphysical.  The privacy debate currently going on in public policy circles is disturbing, perhaps most of all because it is being framed as a policy discussion.  Rather than work out what costs and benefits we get from increased information sharing with each other, those who are feeling anxious about the pace of change in digital life are running, as anxious people often do, to regulators, demanding they do something—anything—to alleviate their future shock.  And regulators, who are pretty anxious people themselves, are too-often happy to oblige, even when they understand neither the technology nor the implications of their lawmaking.

Beyond the worst possible choice of forum to begin a conversation, the privacy debate in its current form is no debate at all.  It is mostly a bunch of emotional people hurling rhetorical platitudes at each other, trading the worst-case examples of the deadly potential of privacy invasions (teen suicides, evil corporations) with fear-inspiring claims of the risk of keeping information secret (terrorists win).

It’s not really a debate at all when the two “sides” are talking about entirely different subjects.  And when no one’s really listening anyway. All that is happening is that the stress level amps up, and those not participating in the discussion get the distinct impression that the world is about to end.

A starting point for a real conversation about privacy—one that is dangerously absent from any of the current lawmaking efforts—is an understanding about the nature of information.  Privacy in general and a right to be forgotten specifically begins with the false assumption that information (private or otherwise) is a kind of property, a discrete, physical item that can be controlled, owned, traded, used up, and destroyed.  (Both “sides” have fallen into this trap, and can’t seem to get out.)

The fight often breaks down into questions of entitlement—who initially owns the information that refers to me?  The person who found it and translated it into a form that could be accessed by others, or the person to whom it refers, regardless of source?  Under what conditions can it be transferred?  Does the individual maintain a universal and inalienable right of rescission—the ability to take it back later, for any reason, and without compensating the person who now has it?

But these are the wrong questions to be asking in the first place.  Information isn’t property, at least not as understood by our industrial-age legal system or popular metaphors of ownership.  Information, from an economic standpoint, is a virtual good.  It can be “possessed” and used by everyone at the same time.  It can become more valuable in being combined with other information.  It can maintain or improve its value forever.

And, whether the law says so or not, it can’t be repossessed, put back in the safety deposit box, buried at sea, or “devoured by the flames” like the old newspaper articles Winston Smith rewrites when the truth turns out to be inconvenient to the past.  That of course was Orwell’s point.  You can send down the memory hole the newspaper that reported Big Brother’s promise of increased chocolate rations, but people still remember that he said it.  You can try to brainwash them, too, and limit their choice of language to eliminate the possibility of unsanctioned thoughts.  You can destroy the individual who rebels against such efforts.

But it still doesn’t work.  The facts, warts and all, are still there, even when their continued existence is subjectively embarrassing to an individual.  Believe me, I wish sometimes it were otherwise.  I would very much like to “rectify” high school, or my parents, or the recent death of my beloved dog.  The truth often hurts.

But burning all the libraries and erasing all the bits in the world doesn’t change the facts.  It just makes them harder to access.  And that makes it harder to learn anything from them.

Maybe the European Commission was just being sloppy in its choice of words.  Perhaps it has something much more limited in mind for a “right to be forgotten.”  Or perhaps as it begins the ugly process of writing actual directives that must then be implemented in law by member countries, it will see both the impossibility and danger of going down this path.

Perhaps they’ll then pretend they never actually promised to “clarify” such a right in the first place.

But we’ll all know that they did.  For whatever it’s worth.

]]>
Privacy as an Information Control Regime: The Challenges Ahead https://techliberation.com/2010/11/13/privacy-as-an-information-control-regime-the-challenges-ahead/ https://techliberation.com/2010/11/13/privacy-as-an-information-control-regime-the-challenges-ahead/#comments Sat, 13 Nov 2010 15:04:45 +0000 http://techliberation.com/?p=32937

This week, we’ve seen reports in both The New York Times (“Stage Set for Showdown on Online Privacy“) and The Wall Street Journal (“Watchdog Planned for Online Privacy“) that the Obama Administration is inching closer toward adopting a new Internet regulatory regime in the name of protecting privacy online.  In this essay, I want to talk about information control regimes, not from a normative perspective, but from a practical one.  In doing so, I will compare the relative complexities associated with controlling various types of information flows to protect against four theoretical information harms: objectionable content, defamation, copyright, and privacy.

From a normative perspective, there are many arguments for and against various forms of information control.  Here, for example, are the reasons typically given for why society might want to impose regulations on the Internet (or other communications channels) to address each of the four issues identified above:

  1. Content control / Censorship: We must control information flows to protect children from objectionable content or all citizens against some other form of supposedly harmful speech (hate speech, terrorist recruitment, etc).
  2. Defamation control: We must control information flows to protect people’s reputations.
  3. Copyright control: We must control information flows to protect the property rights of creators against unauthorized use / distribution.
  4. Privacy control: We must control information flows to protect individuals against the collection and dissemination of information about them.

Again, there are plenty of good normative arguments in the opposite direction, many of which are based on free speech considerations since, by definition, information control regimes limit the flow of forms of speech.  For privacy, I discussed such speech-related considerations in my essay on “Two Paradoxes of Privacy Regulation.”  But what about the administrative or enforcement burdens associated with each form of information control?  I increasingly find that question as interesting as the normative considerations.

Let’s begin with a self-evident statement about which most of us can (hopefully) agree: Information control can be complex and costly.  This was true even in the scarcity era with its physical and analog distribution methods of information dissemination.  All things considered, however, the challenge of controlling information in the past paled in comparison to the far more formidable challenges nation-states face in the digital era when they seek to limit information flows.

The movement of bits across electronic networks and digital distribution systems creates unique problems I have previously discussed in my essay on “The End of Censorship.” To recap, efforts to control information today are greatly complicated by problems associated with:

  • Convergence: Media content and information distribution outlets are blurring together today thanks to the rise of myriad new technologies and competitors. These new technologies and competitors generally ignore or reject the distribution-based distinctions and limitations of the past. In other words, convergence means that media content is increasingly being “unbundled” from its traditional distribution platforms and finding many paths to the consumers. As a result of these developments, it is now possible to disseminate, find, or consume the same content / information via multiple devices or distribution networks.  In this way, convergence complicates efforts to create effective information control regimes.
  • Scale: In the past, the reach of speech and information was limited by geographic, technological, and cultural / language considerations. Today, by contrast, media can now flow across the globe at the click of a button because of the dramatic expansion of Internet access and broadband connectivity.  While restrictions by nation-states are still possible, the scale of modern digital speech and content dissemination greatly complicates government efforts to control information flows.
  • Volume:  The sheer volume of media and communications activity taking place today also complicates regulatory efforts. In simple terms, there is just too much stuff for regulators to police today relative to the past. As a 2002 blue ribbon panel assembled by the National Research Council to examine the regulation of objectionable content concluded: “The volume of information on the Internet is so large — and changes so rapidly — that it is simply impractical for human beings to evaluate every discrete piece of information for inappropriateness.”  While it may have been possible to oversee a handful of newspapers or TV and radio stations in each community or country in the past, today’s electronic media universe is so diverse and enormous—and evolving so quickly—that content controls entail significantly greater enforcement burdens.
  • Unprecedented individual empowerment / user-generation of content: In this new world in which every man, woman and child can be a one-person publishing house or self-broadcaster, restrictions on viewing, listening, uploading or downloading will become increasingly difficult to devise and enforce. By comparison, few of those opportunities were available to the citizenry in the past.

Now, let’s go back to the four issues I identified above and think about the implications.  In terms of content controls, defamation, and copyright, it’s fairly clear how these considerations complicate enforcement efforts.  Of course, some regulatory efforts have succeeded after governments pushed back aggressively enough, and I certainly don’t mean to suggest that governments are powerless to control information flows in the Information Age.  Read through Access Controlled and you’ll find plenty of examples of how nations across the globe are doing so using various methods of control: surveillance, centralized filtering, strict liability regimes, government ownership of key facilities / companies, etc.  Let me also make clear that I am not entirely against all information control efforts.  Generally speaking, I want strict limits placed on government efforts to control information flows, but I’ve also endorsed efforts to use some of these regulatory approaches to deal with child pornography and extreme forms of copyright piracy.

Anyway, I want to just make two more points here about how this relates to privacy regulation as an information control regime.  First, while I think most people understand the complexities associated with information control efforts in the content, defamation, and copyright fields, I don’t think scholars or policymakers are spending nearly enough time considering the complexities of enforcing an information control regime for privacy.  All too often, privacy advocates seem to suggest that privacy regulation will be frictionless and cost-free.  Once they jump to the assumption that privacy is a “human right,” or must be protected in the name of “human dignity,” any discussion of enforcement hassles or the costs of regulation seemingly goes right out the window.  In reality, of course, privacy regulation will have profound consequences for online sites and services by potentially undermining the goose that lays the Internet’s golden (and mostly free) eggs: online advertising and the data collection that powers it.  Again, this is somewhat secondary to my point in this essay, which is just to suggest that the complexities associated with the mechanics of information control are not being fully considered in the privacy context.  Either way, it’s time we stop pretending privacy regulation is a free lunch.

Second, I would like to suggest — but I cannot prove at this time — that enforcing a privacy information control regime will be more complex than the regimes needed to control information flows for content, defamation, and copyright.  Now how can that possibly be, you ask?  It requires a much deeper dive into the specifics of various privacy regulatory proposals, but consider two recent privacy-related regulatory regimes: a “Do Not Track” list and a “right to be forgotten.” Both sound simple enough in theory, but the details are quite devilish.

How, for example, would government go about verifying proper compliance with such regulations without also ensuring some sort of online authentication system is in place to verify people are who they say they are?  Must every browser be retooled to comply and then regulated accordingly?  What about apps downloaded on tablets or smartphones that don’t require browsers at all?  Are IP addresses “personal information” that are also subject to regulation? Which agencies are responsible for creating authentication systems, policing online data flows, and reviewing new innovations and sites to ensure they are complying?  Who in government has access to the data about individuals that is collected for such purposes, and what else are they doing with it?  What systems will need to be put into place by online operators, large and small alike, to ensure compliance?  And so on.  Enforcement problems will also be complicated by the subjectivity of privacy norms from one individual to another as well as the fact that these norms change over time (and seem to be changing quite rapidly in recent years).

Again, more research needs to be done to better document the potential costs associated with a privacy information control regime, but I would hope we could begin by accepting the fact that it is an information control regime and that it will be complicated to enforce and will have costs — both in economic and speech-related terms.  Advocates of such a regulatory regime for the Internet should at least be mature enough to admit that what they are proposing is comparable in complexity and cost to the censorship and copyright regulatory regimes they typically oppose.

]]>
https://techliberation.com/2010/11/13/privacy-as-an-information-control-regime-the-challenges-ahead/feed/ 5 32937
After the Deluge, More Deluge https://techliberation.com/2010/07/22/after-the-deluge-more-deluge/ https://techliberation.com/2010/07/22/after-the-deluge-more-deluge/#respond Thu, 22 Jul 2010 06:34:58 +0000 http://techliberation.com/?p=30577

If I ever had any hope of “keeping up” with developments in the regulation of information technology—or even the nine specific areas I explored in The Laws of Disruption—that hope was lost long ago.  The last few months I haven’t even been able to keep up just sorting the piles of printouts of stories I’ve “clipped” from just a few key sources, including The New York Times, The Wall Street Journal, CNET News.com and The Washington Post.

 

I’ve just gone through a big pile of clippings that cover April-July.  A few highlights:  In May, YouTube surpassed 2 billion daily hits.  Today, Facebook announced it has more than 500,000,000 members.   Researchers last week demonstrated technology that draws device power from radio waves.

If the size of my stacks is any indication of activity level, the most contentious areas of legal debate are, not surprisingly, privacy (Facebook, Google, Twitter et al.), infrastructure (Net neutrality, Title II and the wireless spectrum crisis), copyright (the secret ACTA treaty, Limewire, Viacom v. Google), free speech (China, Facebook “hate speech”), and cyberterrorism (Sen. Lieberman’s proposed legislation expanding executive powers).

There was relatively little development in other key topics, notably antitrust (Intel and the Federal Trade Commission appear close to resolution of the pending investigation; Comcast/NBC merger plodding along).  Cyberbullying, identity theft, spam, e-personation and other Internet crimes have also gone eerily, or at least relatively, quiet.

Where are We?

There’s one thing that all of the high-volume topics have in common—they are all moving increasingly toward a single topic, and that is the appropriate balance between private and public control over the Internet ecosystem.  When I first started researching cyberlaw in the mid-1990’s, that was truly an academic question, one discussed by very few academics.

But in the interim, TCP/IP, with no central authority or corporate owner, has pursued a remarkable and relentless takeover of every other networking standard.  The Internet’s packet-switched architecture has grown from simple data file exchanges to email, the Web, voice, video, social networking and the increasingly hybrid forms of information exchanges performed by consumers and businesses.

As its importance to both economic and personal growth has expanded, anxiety over how and by whom that architecture is managed has understandably developed in parallel.

(By the way, as Morgan Stanley analyst Mary Meeker pointed out this spring, consumer computing has overtaken business computing as the dominant use of information technology, with a trajectory certain to open a wider gap in the future.)

The locus of the infrastructure battle today, of course, is in the fundamental questions being asked about the very nature of digital life.  Is the network a piece of private property operated subject to the rules of the free market, the invisible hand, and a wondrous absence of transaction costs?  Or is it a fundamental element of modern citizenship, overseen by national governments following their most basic principles of governance and control?

At one level, that fight is visible in the machinations between governments (U.S. vs. E.U. vs. China, e.g.) over what rules apply to the digital lives of their citizens.  Is the First Amendment, as John Perry Barlow famously said, only a local ordinance in Cyberspace?  Do E.U. privacy rules, being the most expansive, become the default for global corporations?

At another level, the lines have been drawn even more sharply between public and private parties, and in side-battles within those camps.  Who gets to set U.S. telecom policy—the FCC or Congress, federal or state governments, public sector or private sector, access providers or content providers?  What does it really mean to say the network should be “nondiscriminatory,” or to treat all packets anonymously and equally, following a “neutrality” principle?

As individuals, are we consumers or citizens, and in either case how do we voice our view of how these problems should be resolved?  Through our elected representatives?  Voting with our wallets?  Through the media and consumer advocates?

Not to sound too dramatic, but there’s really no other way to see these fights as anything less than a struggle for the soul of the Internet.  As its importance has grown, so have the stakes—and the immediacy—in establishing the first principles, the Constitution, and the scriptures that will define its governance structure, even as it continues its rapid evolution.

The Next Wave

 

Network architecture and regulation aside, the other big problems of the day are not as different as they seem.  Privacy, cybersecurity and copyright are all proxies in that larger struggle, and in some sense they are all looking at the same problem through a slightly different (but equally mis-focused) lens.  There’s a common thread and a common problem:  each of them represents a fight over information usage, access, storage, modification and removal.  And each of them is saddled with terminology and a legal framework developed during the Industrial Revolution.

As more activities of all possible varieties migrate online, for example, very different problems of information economics have converged under the unfortunate heading of “privacy,” a term loaded with 19th- and 20th-century baggage.

Security is just another view of the same problems.  And here too the debates (or worse) are rendered unintelligible by the application of frameworks developed for a physical world.  Cyberterror, digital warfare, online Pearl Harbor, viruses, Trojan Horses, attacks—the terminology of both sides assumes that information is a tangible asset, to be secured, protected, attacked, destroyed by adverse and identifiable combatants.

In some sense, those same problems are at the heart of struggles to apply (or not) the architecture of copyright created during the 17th-century Enlightenment, when information of necessity had to take physical form to be used widely.  Increasingly, governments and private parties with vested interests are looking to the ISPs and content hosts to act as the police force for so-called “intellectual property” such as copyrights, patents, and trademarks.  (Perhaps because it’s increasingly clear that national governments and their physical police forces are ineffectual or worse.)

Again, the issues are of information usage, access, storage, modification and removal, though the rhetoric adopts the unhelpful language of pirates and property.

So, in some weird and at the same time obvious way, net neutrality = privacy = security = copyright.  They’re all different and equally unhelpful names for the same (growing) set of governance issues.

At the heart of these problems—both of form and substance—is the inescapable fact that information is profoundly different from traditional property.  It is not like a bushel of corn or a barrel of oil.  For one thing, it never has been tangible, though when it needed to be copied into media to be distributed it was easy enough to conflate the medium with the message.

The information revolution’s revolutionary principle is that information in digital form is at last what it was always meant to be—an intangible good, which follows a very different (for starters, a non-linear) life-cycle.  The ways in which it is created, distributed, experienced, modified and valued don’t follow the same rules that apply to tangible goods, try as we do to force-fit those rules.

Which is not to say there are no rules, or that there can be no governance of information behavior.  And certainly not to say information, because it is intangible, has no value.  Only that for the most part, we have no real understanding of what its unique physics are.  We barely have vocabulary to begin the analysis.

Now What?

 

Terminology aside, I predict with the confidence of Moore’s Law that business and consumers alike will increasingly find themselves more involved than anyone wants to be in the creation of a new body of law better-suited to the realities of digital life.  That law may take the traditional forms of statutes, regulations, and treaties, or follow even older models of standards, creeds, ethics and morals.  Much of it will continue to be engineered, coded directly into the architecture.

Private enterprises in particular can expect to be drawn deeper (kicking and screaming perhaps) into fundamental questions of Internet governance and information rights.

Infrastructure and application providers, as they take on more of the duties historically thought to be the domain of sovereigns, are already being pressured to maintain the environmental conditions for a healthy Internet.  Increasingly, they will be called upon to define and enforce principles of privacy and human rights, to secure the information environment from threats both internal (crime) and external (war), and to protect “property” rights in information on behalf of “owners.”

These problems will continue to be different and the same, and will be joined by new problems as new frontiers of digital life are opened and settled.  Ultimately, we’ll grope our way toward the real question:  what is the true nature of information and how can we best harness its power?

Cynically, it’s lifetime employment for lawyers.  Optimistically, it’s a chance to be a virtual founding father.  Which way you look at it will largely determine the quality of the work you do in the next decade or so.

]]>
https://techliberation.com/2010/07/22/after-the-deluge-more-deluge/feed/ 0 30577
China Renews Google’s License https://techliberation.com/2010/07/09/china-renews-googles-license/ https://techliberation.com/2010/07/09/china-renews-googles-license/#comments Fri, 09 Jul 2010 20:57:07 +0000 http://techliberation.com/?p=30278

Today, China renewed Google’s license to do business in the country, reports The Washington Post. The announcement means that Google will maintain its presence in the country for the foreseeable future. Google will likely meet criticism, but this is good news nonetheless for Chinese Internet users.

The rapidly unfolding Google-China saga has made headline after headline since January, when Google announced that it had suffered an intrusion originating in China. In March, after months of internal debate and heavy public criticism, Google shut down its China-based search engine Google.cn, redirecting all queries to its Hong Kong-based Google.com.hk site. Late last month, Google reactivated some of its China-based services and has continued to operate in China, albeit on a limited basis.

Operating in China has long been a headache for Google, due to the Chinese government’s notorious disregard for Internet freedom, embodied by its infamous “Great Firewall of China.” China surveils all Internet traffic that traverses its borders and attempts to block its citizens from accessing information sources which the government considers unfavorable. China also gleans data from its network to identify and retaliate against political dissidents.

Human rights advocates have long derided Google and other U.S. tech companies, such as Microsoft and Yahoo, for doing business in China. China requires all search engines operating in the country to censor a broad range of information, like photos of the 1989 Tiananmen Square massacre. Critics contend that complying with the Chinese government’s oppressive demands is unethical and that facilitating censorship and suppression is morally unacceptable on its face.

Such criticisms, however principled, miss the forest for the trees. If Google were to cease its Chinese operations entirely, the result would be one less U.S. Internet firm accessible to Chinese citizens. While Google is the worldwide search leader, in the Chinese search market Google lags behind Baidu, a search company based in China. Baidu’s market share increased after Google shut down its China-based search site. If Google were to pull out of China entirely, chances are Baidu would pick up many more users.

Why is this troubling? Because Baidu has a long history of complying with the Chinese government’s demands, and has never publicly repudiated the regime’s oppressive practices.

American firms that operate in China do so begrudgingly, often repudiating the state’s human rights violations and, at times, even pushing back when they believe the government has gone too far. Google in particular has struggled over the ethical dilemma posed by China. Before 2005, Google had not formally entered the Chinese market at all, partially on human rights grounds. And after its servers were hacked from within China in late 2009, Google was reportedly on the verge of pulling out of China entirely.

The complicity of U.S. tech firms in China’s oppressive practices has also spurred attacks from politicians looking to score political points. At a recent hearing, Rep. Chris Smith (R-N.J.) accused Microsoft of “enabling tyranny” in China. And Senator Dick Durbin (D-Ill.) is pushing for federal legislation to regulate the practices of U.S. companies that do business in non-democratic nations.

Such saber-rattling will only make problems worse. Undermining the autonomy of private U.S. corporations to make their own business decisions only discourages constructive business engagement with China. Worse, American politicians’ lambasting of China actually emboldens the Chinese regime, which plays upon nationalist sentiments to garner public support.

American businesses, on the other hand, are in a far better position to criticize Chinese censorship. Google and Microsoft are household names in China. And it is far more difficult for the Chinese government to demonize American technology firms than the U.S. government.

Yes, China has a horrendous human rights record, but it isn’t the only nation in the world whose government routinely tramples human rights. In the flawed world we live in, to expect businesses to operate only in nations that truly respect their citizens’ human rights is wishful thinking. Neither Google nor any other American company enjoys facilitating Chinese oppression. But given the available alternatives, is pulling out really a superior option? Is relegating Chinese citizens to patronizing solely Chinese firms actually conducive to improving human rights?

In the long run, disengaging China will not encourage its government to grant greater political freedoms to its people. Commerce between the U.S. and China facilitates wealth creation and opens up new economic opportunities in both countries. In China, that new wealth, along with corresponding new opportunities, helps expand the country’s middle class, bringing subsistence farmers into cities and, thus, closer to the global economy.

For China to become a politically and economically freer nation, a sizable middle class is a crucial factor. While Google, Microsoft, and Yahoo may not seem to be making China any freer now, they can only help in the long run.

]]>
https://techliberation.com/2010/07/09/china-renews-googles-license/feed/ 9 30278
Don’t Like Apple’s “Censorship” of Apps Content? Use Your iPhone or iPad Browser! https://techliberation.com/2010/07/05/dont-like-apples-censorship-of-apps-content-use-your-iphone-or-ipad-browser/ https://techliberation.com/2010/07/05/dont-like-apples-censorship-of-apps-content-use-your-iphone-or-ipad-browser/#comments Mon, 05 Jul 2010 18:33:36 +0000 http://techliberation.com/?p=30075

NY venture capitalist Fred Wilson notes eight advantages of using the iPhone’s Safari browser over iPhone apps to access content. Fred’s arguments seem pretty sound to me and help to illustrate the point I was trying to make a few months ago in a heated exchange over Adam’s post on Apple’s App Store, Porn & “Censorship”: Although Apple restricts pornographic apps, it does not restrict what iPhone (or iPad or iPod touch) users can access on their browsers. (And it’s not censorship, anyway, because that’s what governments do!)

As I noted in that exchange, the main practical advantage of apps right now over the browser seems to be the ability to play videos from websites that require Flash—which is especially useful for porn! Apple has rejected using Flash on the iPhone on technical grounds, in favor of HTML5, which will allow websites to display video without Flash—including on mobile devices. But once HTML5 is implemented (large scale adoption expected in 2012), this primary advantage of apps over mobile Safari will disappear: Users will be able to view porn on their browsers without needing to rely on apps—and Apple’s control over apps based on their content will no longer matter so much, if at all.

Of course, it may take several more years for HTML5 to really become the standard, but what matters is that all Apple products, including mobile Safari, already support HTML5. So it’s just a question of when porn sites move from Flash to HTML5. That seems already to be happening, with major porn publishers already starting the transition. The main stumbling block seems to be HTML5 support from the other browser makers. But Internet Explorer 9 supports HTML5, and is expected out early in 2011 with a beta version due out this August. Mozilla’s Firefox 4.0 (formerly 3.7) also promises HTML5 support and is due out this November. Since porn publishers have always been on the cutting edge of implementing new web technologies, I’d bet we’ll start seeing many porn sites move to HTML5 by this Christmas. And by Christmas 2011, as we all sit around the fire with Grandma sipping eggnog and enjoying our favorite adult websites on our overpriced-but-elegant Apple products loading in HTML5 in the Safari browser, we’ll all look back and wonder why anyone made such a big deal about Apple restricting porn apps.
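To make the point concrete, here is a minimal sketch of what serving video via HTML5 instead of Flash looks like in markup. It follows the HTML5 draft specification of the time; the file names, codec strings, and dimensions are illustrative placeholders, not taken from any actual site:

```html
<!-- Native HTML5 video: no Flash plugin needed, so it plays in mobile Safari. -->
<video controls width="640" height="360">
  <!-- The browser plays the first source whose format it supports. -->
  <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
  <!-- Shown only by browsers that do not understand the video element. -->
  <p>Your browser does not support HTML5 video.</p>
</video>
```

A publisher can even nest its old Flash embed inside the video element as the fallback content, which is why sites could migrate incrementally rather than switching over all at once.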

Oh, and if you get tired of waiting, get an Android phone! Anyway, here are my comments on Adam’s February post:

I do understand why, as a practical matter, it’s a real inconvenience for a porn-lover not to be able to get a porn app on the iPhone. I think we can have a legitimate debate about what Apple needs to do to make this limitation transparent to its customers. But, as I noted above, users now have lots of choices for other platforms that either allow apps stores with porn (e.g., Google’s Android) or simply those that support Mobile Flash. Again, the practical importance of the apps store from a user interface perspective will diminish significantly when mobile Flash comes out this year for the various mobile OSes (except Apple, sadly) because users will be able to watch porn video through their mobile browser without needing a porn-specific app. (Of course, it’s still possible that an app might handle scrolling through photos better.)

And:

Now, as a practical matter, it’s not easy to view porn on mobile browsers, especially since they don’t currently support Flash, so video playing is limited to videos you either (i) download or (ii) stream from the web in a special app, such as for YouTube. Since Flash is used by the vast majority of video streaming sites, including for porn, this means that the abundance of online porn isn’t particularly accessible on a mobile phone. Scrolling through images, pornographic or otherwise, isn’t terribly easy either, especially since even fast data networks suffer from much greater latency than fixed broadband services. But Adobe recently announced that Flash 10.1 would be coming to Android, Blackberry, Windows Mobile 7, Nokia S60/Symbian and Palm WebOS. While it appears that Microsoft won’t be rolling out Flash for Windows Mobile 7 anytime soon, it does appear to be planning to do so at some point in the near future, and Google is already hard at work on rolling out Flash for Android sometime soon. Once these platforms roll out Flash, the app stores will no longer have any meaningful “gatekeeper” control over easily accessing video content, since users will be able to view or stream whatever they like in the browser. But the historical moment when restrictions on Apple’s app store had anything like the censorious effect claimed by Apple’s critics has passed (even assuming one believed “private censorship” isn’t a contradiction in terms). Specifically, I’d say it passed sometime in the last year, when Android became a more viable option and, even more specifically, on this issue of mobile access to porn, on November 30, 2009, when MiKandi launched. Sure, it’s true that Android users can’t access all their favorite porn sites, and MiKandi app offerings are limited, but more are coming—so to speak! And when Android phones get Flash this year, this important distinction between mobile Internet browsing and desktop Internet browsing will largely disappear.
(I only hope the wireless data networks are prepared for the upsurge in video streaming that will, to be sure, be driven largely by mobile browsing of porn sites.) So… who really cares what Apple does with its app store? Yes, I understand some app users with long-term contracts may be itching for porn right now and don’t want to pay an early termination fee to jump to Android but, well, too damn bad! You may have a right to access porn if you want to, but that certainly doesn’t give you a right to force Apple to offer it to you in the most convenient way possible. Finally, it’s worth noting here that Apple has not removed sexually oriented social networking apps, such as Grindr, a mobile gay-cruising app, from the iPhone store. I’d be a little more concerned about Apple removing such apps, whose functionality is harder to replicate from the browser, than about it simply removing apps for viewing pornography.

Thoughts?

]]>
https://techliberation.com/2010/07/05/dont-like-apples-censorship-of-apps-content-use-your-iphone-or-ipad-browser/feed/ 5 30075
FCC Votes for Reclassification, Dog Bites Man https://techliberation.com/2010/06/17/fcc-votes-for-reclassification-dog-bites-man/ https://techliberation.com/2010/06/17/fcc-votes-for-reclassification-dog-bites-man/#comments Thu, 17 Jun 2010 22:25:21 +0000 http://techliberation.com/?p=29830

Not surprisingly, FCC Commissioners voted 3 to 2 today to open a Notice of Inquiry on changing the classification of broadband Internet access from an “information service” under Title I of the Communications Act to “telecommunications” under Title II.  (Title II was written for telephone service, and most of its provisions pre-date the breakup of the former AT&T monopoly.)  The story has been widely reported, including posts from The Washington Post, CNET, Computerworld, and The Hill.

As CNET’s Marguerite Reardon counts it, at least 282 members of Congress have already asked the FCC not to proceed with this strategy, including 74 Democrats.

I have written extensively about why a Title II regime is a very bad idea, even before the FCC began hinting it would make this attempt.  I’ve argued that the move is on extremely shaky legal grounds, usurps the authority of Congress in ways that challenge fundamental Constitutional principles of agency law, would cause serious harm to the Internet’s vibrant ecosystem, and would undermine the Commission’s worthy goals in implementing the National Broadband Plan.  No need to repeat any of these arguments here.  Reclassification is wrong on the facts, and wrong on the law.

What is Net Neutrality?

Instead, I thought it would be useful to return to the original problem, which is last fall’s Notice of Proposed Rulemaking on net neutrality.  For despite a smokescreen argument that reclassification is necessary to implement the NBP, everyone knows that today’s NOI was motivated by the Commission’s crushing defeat in Comcast v. FCC, which held that “ancillary authority” associated with Title I did not give the agency jurisdiction to enforce its existing net neutrality policy.

Rather than request an en banc rehearing of Comcast, or appeal the case, or follow the court’s advice and return to Congress for the authority to enforce the net neutrality rules, the FCC has chosen in the name of expediency simply to rewrite the Communications Act itself.

Many metaphors have been applied to this odd decision.  I liken it to setting your house on fire to light your cigarette.  (You shouldn’t be smoking in the first place.)

Let me be clear, once again, that I am all for an open and transparent Internet.  I believe the packet-switching architecture is one of the key reasons TCP/IP has become the dominant data communications protocol (and will soon dominate voice and video).

Packet-switching isn’t the only reason the Internet has triumphed.  Perhaps the other, more important secrets to TCP/IP’s success are that it is a non-proprietary standard (so long, SNA, DECNet, OSI, and the corporate strategies their respective owners tried to pursue through them) and that it is simple enough to be baked into even the least-powerful computing devices.  The Internet doesn’t care if you are an RFID tag or a supercomputer.  If you speak the language, you can participate in the network.

These features have made the Internet, as I first argued in 1998 in “Unleashing the Killer App,” an engine for remarkable innovation over the last ten years.

The question for me, as I wrote in Chapter 4 of “The Laws of Disruption,” comes down most importantly to one of institutional economics.  Who is best-suited, legal authority aside, to enforce the features of the Internet’s architecture and protocols that make it work so well?  The market?  Industry self-regulation?  A global NGO?  The FCC?  Or put another way, why is a federal government agency (limited, by definition, to enforcing its authority only within the U.S.) such a poor choice for the job, despite the best intentions of its leadership and the obviously strong work ethic of its staff?

To answer that, let’s back all the way up.  Net neutrality is a political concept overlaid on a technical and business architecture.  That’s what makes this debate both dangerous and frustrating.

For starters, it’s hard to come up with a concise definition of net neutrality, largely because it’s one of those terms like “family values” that means something different to everyone who uses it.  For me it’s become something of a litmus test—people who use it positively are generally hostile to large communications companies.  People who use it negatively are generally hostile to regulatory agencies.  A lot of that anger, wherever it comes from, seems to get channeled into net neutrality.

In fact the FCC doesn’t even use the term—they talk about the “open and transparent” Internet instead.

But here’s the general idea.  The defining feature of the Internet is that information is broken up into small “packets” of data which are routed through any number of computers on the world-wide network and then are reassembled when they reach their destination.

Up until now, with some notable exceptions, every participating computer relays those packets without knowing what’s in them or whom they come from.  The network operates on a packet-neutral model: when a computer receives a packet, it looks only to see where the packet is heading and, depending on traffic congestion at the time, sends it along to the next computer just as quickly as it can.
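To make the “packet-neutral” idea concrete, here is a toy sketch, not a real router implementation; the packet fields, hostnames, and routing table are invented for illustration. A neutral relay forwards packets strictly in arrival order, consulting only each packet’s destination and never its payload or its sender:

```python
from collections import deque

def neutral_relay(packets, next_hop):
    """Forward packets strictly in arrival order (FIFO), consulting only
    each packet's destination -- never its payload or its sender."""
    queue = deque(packets)
    forwarded = []
    while queue:
        pkt = queue.popleft()
        # The routing decision uses the destination alone.
        forwarded.append((next_hop(pkt["dst"]), pkt))
    return forwarded

# Invented routing table mapping destinations to next hops.
table = {"google.com": "router-A", "example.org": "router-B"}
out = neutral_relay(
    [{"dst": "google.com", "payload": b"video"},
     {"dst": "example.org", "payload": b"text"}],
    table.get,
)
```

The point of the sketch is what the relay does *not* do: it never opens `payload`, so a video packet and a text packet bound for different destinations get identical treatment.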

That’s still the model on which the Internet works.  The FCC’s concern is not with current practice but with future problems.  Increasingly, they see a few dominant providers controlling the outgoing and incoming packets to and from consumers—the first and last mile.  So while the computers between my house and Google headquarters all treat my packets to Google and Google’s packets back to me in a neutral fashion, there’s no law that keeps Comcast (my provider) from opening those packets on their way in or out and deciding to slow down or speed up some or all of them.

(Well, the law of antitrust and unfair trade could in fact apply here, depending on how the non-neutral behavior was expressed and by whom.  See below.)

Why would they do that?  Perhaps they make a deal with Google to give priority to Google-related packets in exchange for a fee or a share of Google’s ad revenues.  Or, maybe they want to encourage me to watch Comcast programming instead of YouTube videos, and intentionally slow down YouTube packets to make those videos less appealing to watch.

Most of this is theoretical so far.  No ISP offers the premium or “fast lane” service to individual applications.  Comcast, however, was caught a few years ago experimenting with slowing down the BitTorrent peer-to-peer protocol.  Some of Comcast’s most active customers were clogging the pipes sending and receiving very large files (mostly illegal copies of movies, it turns out).

When they were caught, the company agreed instead to stop offering “unlimited” access and to use more sophisticated network management techniques to ensure a few customers didn’t slow traffic for everyone else.  Comcast and BitTorrent made peace, but the FCC held hearings and sanctioned Comcast after-the-fact, leading to the court case that made clear the FCC has no authority to enforce its neutrality policies.

The simple-minded dichotomy of the ensuing “debate” leaves out some important and complicated technical details.  First, some applications already require and get “premium” treatment for their packets.  Voice and video packets have to arrive pretty much at the same time in order to maintain good quality, so Voice over IP telephone calls (Skype, Vonage, Comcast) get priority treatment, as do cable programming packets, which, after all, are using the same connection to your home that the data uses.
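The class-based prioritization described above can be sketched as a non-neutral scheduler that drains latency-sensitive classes first; the class names and numeric priorities below are invented for illustration (real networks use markings such as DSCP), and arrival order is preserved within each class:

```python
import heapq
import itertools

# Illustrative traffic classes; lower number = served first.
PRIORITY = {"voice": 0, "video": 1, "data": 2}

def priority_schedule(packets):
    """Serve queued packets by traffic class, keeping arrival order
    within each class (a non-neutral, class-based scheduler)."""
    counter = itertools.count()  # tie-breaker preserves per-class FIFO order
    heap = [(PRIORITY[p["class"]], next(counter), p) for p in packets]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

sent = priority_schedule([
    {"class": "data", "id": 1},   # arrives first...
    {"class": "voice", "id": 2},  # ...but voice jumps the queue
    {"class": "data", "id": 3},
])
```

Contrast this with the neutral FIFO model: here the later-arriving voice packet is transmitted before both data packets, which is exactly the behavior that keeps a phone call intelligible while a file download waits a few milliseconds longer.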

Google, as one of the largest providers of outbound packets, has deals with some ISPs to locate Google-only servers in their hubs to ensure local copies of their web pages are always close by, a service offered more generally by companies such as Akamai and LimeLight, which offer caching services to paying customers.  In that sense, technology is being used to give priority even to data packets, about which no one should complain.
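The caching arrangement described above amounts to a simple edge cache: repeat requests are served from a nearby copy instead of making the long trip back to the origin server. A minimal sketch, with made-up URLs and with a function call standing in for a slow network round trip:

```python
class EdgeCache:
    """Toy content cache co-located with an ISP: serve local copies when
    available, fall back to the distant origin server otherwise."""
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # slow path back to the origin
        self.store = {}

    def get(self, url):
        if url not in self.store:          # cache miss: fetch and keep a copy
            self.store[url] = self.origin_fetch(url)
        return self.store[url]

origin_calls = []
def origin(url):
    origin_calls.append(url)              # each call stands in for a slow round trip
    return f"contents of {url}"

cache = EdgeCache(origin)
cache.get("example.com/page")             # first request goes to the origin
cache.get("example.com/page")             # repeat request is served locally
```

Two requests, one trip to the origin: that saved round trip is the “priority” a caching deal buys, achieved with servers rather than with packet discrimination.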

Fighting over the Future

So the net neutrality fight, aside from leaving out any real appreciation either for technological or business realities, is really a fight about the future.  As cable and telephone companies invest billions in the next generation of technology—including fiber optics and next-generation cellular services–application providers fear they will be asked to shoulder more of the costs of that investment through premium service fees.

Consumer groups have been co-opted into this fight, and see it as one that pits big corporations against powerless customers who need outside advocates to save them from dangers they do not understand.  That increasingly quaint attitude, for one thing, grossly underestimates the growing power of consumers to effect change using the Internet itself (see:  Facebook et al.). Consumers can save themselves, thanks very much.

What is true is that consumers are not, and are not likely to be, asked to pay the true costs of broadband access, given the intense competition in major markets among large ISPs such as Comcast, AT&T, Verizon and others.  That is the source of anxiety for the application providers: they are seen as having more elasticity in pricing than end users.

The existence of provider competition, however, also weighs heavily against the need for government intervention.  If an ISP interferes with the open and transparent Internet, customers will know, and they will complain.  Ultimately they will find a provider that gives them full and unfettered access.  (There are plenty of interested parties who help consumers with the “know” part of that equation, but still, I fully support the principle of ISP transparency with regard to network management principles.  Few consumers would actually read them, and fewer still would understand them, but it’s still a good practice.)

If the market really does fail, or fails in significant local ways (rural or poor customers, for example), then some kind of regulatory intervention might make sense.  But it’s a bad idea to regulate ahead of a market failure, especially when dealing with technology that is evolving rapidly.  In the last ten years, as I argue in The Laws of Disruption, the Internet has proven to be a source of tremendous embarrassment for regulators trying to “fix” problems that shift under their feet even as they’re legislating.  Often the laws are meaningless by the time the ink is dry or—worse—inadvertently make the problems worse after the fact.

Nevertheless, in October of last year the FCC proposed—in a 107-page document—six net neutrality rules that would codify what I described above and a number of peripheral, perhaps unrelated, ideas.  Right now the agency has only a net neutrality policy, and that policy, the D.C. Circuit Court of Appeals ruled, doesn’t constitute enforceable law.  Implicit in that rulemaking was the assumption that someone needed to codify these principles, that the FCC was that someone, and that the agency had the authority from Congress to be that someone.  (The court’s ruling made clear that the latter is not the case.)

There are good reasons to be skeptical that the FCC in particular is the right agency to solve this problem even if it is a problem.  Through most of its existence the agency has been fixed on regulating a legal monopoly—the old phone company—and on managing what was very limited broadcast spectrum—now largely supplanted by cable and by more sophisticated technologies for managing spectrum.

The FCC, recall, is the agency that watches broadcast (but not cable) television and issues fines for indecent content—an activity they do more, rather than less, even as broadcast becomes a trivial part of programming reception.  Congress has three times tried to give the FCC authority to regulate indecency on the Internet as well, but the U.S. Supreme Court has stopped all three.

So if the FCC were to be the “smart cop on the beat” as Chairman Genachowski characterized his view of net neutrality, how would the agency’s temptation to shape content itself be curbed?

Worse, no one seems to have thought ahead as to how the FCC would enforce these rules.  If I complain that my access is slow today and I believe that must mean my ISP is acting in a non-neutral fashion, the agency would have to look at the traffic and inside the packets in order to investigate my complaint.  Again, the temptation to use that information and to share it with law enforcement under the name of anti-terrorism or other popular goals would be strong—strong enough that it ought to worry some of the groups advocating for net neutrality laws as a placebo to keep the ISPs in line.

The Investment Effect

It should be obvious that the course being followed by the FCC – the enactment of net neutrality rules in the first place and the increasingly desperate methods by which it hopes to establish its authority to do so—will cast a pall over the very investments in infrastructure the FCC is counting on to achieve the worthy goals of the NBP.  If nothing else, the reclassification NOI will invariably end in some heavy-duty litigation, which is likely to take years to resolve.  Courts move even more slowly than legislators, who move more slowly than regulators, all of whom aren’t even moving compared to the speed of technological innovation.

How serious a drag on the markets will regulatory uncertainty prove to be?  For what it’s worth, New York Law School’s Advanced Communications Law & Policy Institute today issued an economic analysis of the Commission’s proposed net neutrality rules, arguing that as many as 604,000 jobs and $80 billion in GDP loss would result from their passage.  Matthew Lasar at Ars Technica summarizes the report, which I have not yet read.

But one doesn’t need sophisticated economic analysis to understand why markets are already reacting poorly to the FCC’s sleight-of-hand.  The net neutrality rules the FCC proposed in October would, depending on how the agency decided to enforce them, greatly limit the future business arrangements that broadband providers could offer to their business customers.

Application providers worry that the offer of “fast lane” service invariably means everything else will become noticeably slower (not necessarily true from a technical standpoint).  But in any case the limitation of future business innovations by providers is bound to discourage, at least to some extent, up-front investments in broadband, which are characterized by high fixed costs and a long payback.

Worse, the proposed rules would also apply to Internet access over cellular networks, which is still in a very early stage of development and has much more limited capacity.  Cellular providers have to limit access to video and other high-bandwidth applications just to keep the networks up and running.   (Some of those limits are the result of resistance from local regulators to allow investments in cell towers and other infrastructure.)  The proposed rules would require them not to discriminate against any applications, no matter how resource-intensive.  That simply won’t work.

Investors are worried that the hundreds of billions they’ve spent so far on fiber optics, cellular upgrades and cable upgrades, and the amount left to be spent to get the U.S. to 100 Mbps speeds in the next ten years, will be hard to recover if they don’t have the flexibility to innovate new business models and services.

To Wall Street, the net neutrality rules are perceived not as enshrining a level playing field for the Internet so much as a land grab by content providers to ensure they are the only ones who can innovate with a free hand, pushing the access providers increasingly to a commodity business as, for example, long distance telephony has become.  Why should investors spend hundreds of billions to upgrade the networks if they won’t be able to make their money back?

Investors are also concerned more generally that the FCC will implement and enforce the proposed neutrality rules in unpredictable ways, bowing to lobbying pressure by the content companies even more in the future.  Up until now, the FCC has played no meaningful role in regulating access or content, and the Internet has worked brilliantly.  The networks the FCC does regulate–local telephone, broadcast TV–are increasingly unprofitable.

How would the FCC proceed if the rules are enacted and upheld?  The NPRM says only that the Commission would investigate charges of non-neutral behavior “on a case-by-case basis.”  That approach is understandable when technology is changing rapidly, but at the same time it introduces even more uncertainty and more opportunities for regulatory mischief.  Better to wait until an identifiable problem arises, one that has an identifiable solution a regulatory agency can implement more efficiently than any other institution.

It’s possible of course that access providers, especially in areas where there is little competition, could use their leverage to make bad business decisions that would harm consumers, content providers, or both.  But that risk could be adequately covered by existing antitrust law, or, if necessary, by FCC action when the problem actually arises.

The problem isn’t here yet, other than a handful of anecdotal problems dealt with quickly and without the need for federal intervention.  Again, the danger of rulemaking ahead of an actual failure of the market is acute, especially when one is dealing with an emerging and fast-changing set of technologies.

The more the FCC pushes ahead on the net neutrality rules, even in the face of a court decision that it has no authority to do so, the more irrational the agency appears to the investor community.  And given the almost complete reliance for the broadband plan on private investment, this seems a poor choice of battles for the FCC to be spending its political capital on now.

Preserving the Ecosystem

There’s a forest among all these trees.  So far, the Internet economy has thrived on a delicate balance today between infrastructure and the innovation of new products and services that Internet companies build on top of it.  If the infrastructure isn’t constantly upgraded in speed, cost, and reliability, entrepreneurs won’t continue to spend time and money building new products.

At the same time, if infrastructure providers don’t think the applications will be there, there’s no reason to invest in more and better capacity.  So far, consumers have shown a voracious appetite for both capacity and applications, in part because there’s been little to make them doubt more of both are always coming.

Given the long lead time for capital investments, the infrastructure providers have to bet pretty far into the future without a lot of information.  Sometimes they overbuild, or build ahead of demand (this has happened at least twice in the last ten years); sometimes (in the case of cellular), the applications arrive faster than the capacity after a long period of relative quiet.   3G support was an industry embarrassment until the iPhone finally put it to good use.

By and large the last decade has seen remarkable success in getting the right infrastructure to the right applications at the right time, as evidenced by the fact that the U.S. is still the leader by far in Internet innovation.   The U.S., despite its geography and economic diversity, is also still the leader in broadband access, with availability to over 96% of U.S. residents.  According to the latest OECD data, the U.S. has twice the number of broadband subscribers as the next-largest market.  Our per capita adoption is lower, as are our broadband speeds—both sources of understandable concern to the authors of the NBP.

The larger issue here is that regulatory intervention, or even the looming possibility of it, can throw a monkey wrench in all that machinery, and make it harder to make quick adjustments when one side gets too far ahead of the other.  Once the machine stalls, restarting it may be difficult if not impossible.   The Internet ecosystem works remarkably well.  By contrast, even regulatory changes intended to smooth out inefficiencies can wind up having the opposite effect, sometimes disastrously so.

That above all else should have given the FCC pause today in its vote.  Apparently not.

]]>
https://techliberation.com/2010/06/17/fcc-votes-for-reclassification-dog-bites-man/feed/ 8 29830
The Fallacy of “E-Personation” Laws https://techliberation.com/2010/06/11/the-fallacy-of-e-personation-laws/ https://techliberation.com/2010/06/11/the-fallacy-of-e-personation-laws/#comments Sat, 12 Jun 2010 01:19:37 +0000 http://techliberation.com/?p=29687

I was interviewed yesterday for the local Fox affiliate on Cal. SB 1411, which criminalizes online impersonations (or “e-personation”) under certain circumstances.

On paper, of course, this sounds like a fine idea.  As Palo Alto State Senator Joe Simitian, the bill’s sponsor, put it, “The Internet makes many things easier.  One of those, unfortunately, is pretending to be someone else.  When that happens with the intent of causing harm, folks need a law they can turn to.”

Or do they?

The Problem with New Laws for New Technology

SB 1411 would make a great exam question or short paper assignment for an information law course.  It’s short, loaded with good intentions, and at first blush looks perfectly reasonable—just extending existing harassment, intimidation and fraud laws to the modern context of online activity.  Unfortunately, a careful read reveals all sorts of potential problems and unintended consequences.

A number of states have passed new laws in the wake of highly-publicized cyberstalking and bullying incidents, including the tragic case involving a young girl’s suicide after being dumped by her online MySpace boyfriend, who turned out to be a made-up character created for the purpose of hurting her feelings.  (I’ve written about the case before, see “Lori Drew Verdict Finally Overturned.” )

Missouri passed a cyberbullying law when it turned out there was no federal law that covered the behavior in the MySpace case.  Texas and New York recently enacted laws similar to SB 1411, though the Texas law applies only to impersonation on social media sites.

The problem with all these laws generally is that the authors aren’t clear what behaviors exactly they are trying to criminalize.  And, mindful of the fact that the evolution of digital life is happening much faster than any legislative body can hope to keep up with, these laws are often written to be both too specific (the technology changes) and too broad (the behavior is undefined).  As a result, they often don’t wind up covering the behavior they intend to deter, and, left on the books, can often come back to life when prosecutors need something to hang a case on that otherwise doesn’t look illegal.

Given the proximity to free speech issues, the vagueness of many of these laws makes them good candidates for First Amendment challenges, and many have fallen on that sword.

California’s SB 1411 as a Case in Point

SB1411, which last week passed in the State Senate, suffers from all of these defects.  It punishes the impersonation of an “actual person through or on an Internet Web site or by other electronic means for purposes of harming, intimidating, threatening or defrauding another person.”  It requires the impersonator to knowingly commit the crime and do so without the consent of the person they are imitating.  It also requires that the impersonation be credible.  Punishment for violation can include a year in jail and a suit brought by the victim for punitive damages.

First let’s consider a few hypotheticals, starting with the one that inspired the law, the MySpace case noted above.  Since the boy whose profile lured the victim into an online romance that was then cruelly terminated was a made-up person (the perpetrators found some photo of a suitably shirtless teen and built a personality around it), SB 1411 would not apply had it been the law in Missouri.  The boy was not an “actual person,” and, except perhaps to a thirteen-year-old with existing mental health problems, may not have been credible either.  (The determination of “credibility” under SB 1411 would presumably be based on the “reasonable person” standard.)  Likewise, law enforcement agents creating fake Craigslist ads to smoke out drug buyers, child molesters, or customers of sex workers would also not be violating the law.

Also excluded from SB 1411 would probably be those who use Craigslist to get back at exes or other people they are angry at by placing ads promising sex to anyone who stops by, and then giving the address of the person they are trying to get even with.  In most cases, these ads are not credible impersonations of the victim; they are meant to offend, not to convince a reasonable third person that they really speak for the victim.  A fake Facebook page for a teacher that makes cruel or otherwise harmful statements about her students, likewise, would not be a credible impersonation.

The Twitter profiles being created to issue fake press releases purportedly on behalf of BP would also not be illegal under SB 1411.  First, BP is not an “actual person.”  Second, Twitter profiles such as BPGlobalPR are clearly parodies—they are issuing statements they believe to be what BP would say if it were telling the truth about its actions in relation to the Gulf spill.  (“We’re on a seafood diet- When we see food, we eat it! Unless that food is seafood from the Gulf. Yuck to that.”)  Again, not a credible impersonation.

You also do not commit the crime by confusing people inadvertently.  There are several people I am aware of online named Larry Downes, including a New Jersey state natural resources regulator, a radio station executive and conservative commentator, a cage fighter and a veterinarian who lives in a nearby community.  (The latter is a distant cousin.)  Facebook alone has 11 profiles with my name.  Only one of them is actually me, but the others are not knowingly impersonating me just because they use the same name, even if some third person might be confused to my detriment.

Likewise, the statute doesn’t reach out to those who help the perpetrator, intentionally or otherwise.  The “Internet Web sites” or providers of other electronic means aren’t themselves subject to prosecution or civil cases brought by the victims of the impersonation.  So Craigslist, MySpace, Facebook, and Twitter aren’t liable here, nor are the ISPs of the perpetrators, even if made aware of the activity of their users and/or customers.

For one thing, a federal law, Section 230, immunizes providers against that kind of liability under most circumstances.  Last week, Craigslist lost its bid to preclude a California lawsuit using Section 230 as its defense when sued by the victim of fake posts soliciting sex and offering to give away his possessions.  The victim informed Craigslist of the problem, and the company promised to take action to stop future posts but did not succeed.  But it lost its immunity only by promising to help, which, of course, the site won’t do in the future!  (See Eric Goldman’s analysis of the case.)

So there are important limitations (some added through recent amendments) to SB 1411 that reduce the possibility of its being applied to speech that is otherwise protected or immunized by federal law.  (In the BP example, the company might have a trademark case to bring.)  Most of these limits, however, seem to take any teeth out of the statute, and seem to exclude most of the behavior Sen. Simitian says he is concerned about.

Unintended Consequences

What’s left?  Imagine a case where, angry at you, I create a fake Facebook profile that purports to represent you.  I post material there that is not so outrageous that the impersonation is no longer credible, but which still has the intent of harming, intimidating, threatening or defrauding you.  Perhaps I report, pretending to be you, about all of my extravagant purchases (but not so extravagant that I am not credible), leading your friends to believe you are spending beyond your means.  You find out, and find my actions intimidating or threatening.

Perhaps I announce that you have defaulted on your mortgage and are being foreclosed, leading your creditors to seek security on your other debts.  Perhaps I threaten to continue posting stories of your sexual exploits, forcing you to pay me blackmail to save you embarrassment.

Would these cases be covered under SB 1411?  Perhaps, unless of course the claims that I am making as you turn out to be true.  In the U.S., truth is a defense to defamation, so even if my intent is to “harm” you by revealing these facts, if they are facts then there is no action for defamation.  That I state those facts while pretending to be you would, under SB 1411, appear to turn a protected activity into a crime, which is perhaps not what the drafters intended and perhaps not something that would stand up in court.  (The truth-as-defense rule in defamation cases rests on First Amendment principles—you can’t be prosecuted for speaking the truth.)

Of course, much of the other behavior I described above is already a crime in California—in particular, various forms of intimidation, harassment and, by definition, fraud.  The authors of SB 1411 believe the new law is needed to extend those crimes to cover the use of “Internet Web sites” and “other electronic means,” but there’s no reason to believe that the technology used is any bar to prosecutions under existing law.  (Indeed, the use of electronic communications to commit the acts would extend the possible criminal laws that apply, since electronic communications are generally considered interstate commerce and thus subject to federal as well as state laws.)

For the most part, then, SB 1411 covers very little new behavior, and little of the behavior its drafters thought needed to be criminalized.  For an impersonation to be damaging would, in most cases, mean that it was also not credible.  Pretending to be me and telling the truth could be harmful, but is probably a form of protected speech.  Pretending to be me in order to defraud a third party is already a crime: identity theft.

Which is not to say, pun intended, that the proposed law is harmless.  For in addition to categories of behavior already covered by existing law, SB 1411 makes it a crime to impersonate someone with the purpose of “harming” “another person.”  There is, not surprisingly, no definition given for what it means to have the purpose of “harming,” nor is it clear whether “another person” refers only to the person whose identity has been usurped or includes some third party (perhaps a family member or friend of that person, perhaps their employer).

Having a purpose of “harming” “another person” is incredibly vague, and can cover a wide range of behaviors that wouldn’t, in offline contexts, be subject to criminal prosecution.  The only difference would be that the intended harm here would be operationalized through online channels, and would take the form of a credible impersonation of some actual person.

It is hard to see why those differences ought to result in a year in jail.  Consequently, any attempt to use the law to prosecute merely “harmful” behavior would likely be met with a strong constitutional objection.

That’s my read of the bill, in any case.  Since I posed this as an exam question, I’m offering extra credit for anyone who can come up with examples—there are none given by the California State Senate—of situations where the law would actually apply, that would not already be illegal, and that would not be subject to plausible constitutional challenges.

Book Review: Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace https://techliberation.com/2010/06/08/book-review-access-controlled-the-shaping-of-power-rights-and-rule-in-cyberspace/ https://techliberation.com/2010/06/08/book-review-access-controlled-the-shaping-of-power-rights-and-rule-in-cyberspace/#comments Wed, 09 Jun 2010 01:02:46 +0000 http://techliberation.com/?p=29369

Faithful readers know of my geeky love for tech policy books. I read lots of ’em. There’s a steady stream of Amazon.com boxes that piles up on my doorstep some days because my mailman can’t fit them all in my mailbox.  But I go pretty hard on all the books I review. It’s rare for me to pen a glowing review. Occasionally, however, a book will come along that I think is both worthy of your time and which demands a place on your bookshelf because it is such an indispensable resource.  Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace is one of those books.

Smartly organized and edited by Ronald J. Deibert, John G. Palfrey, Rafal Rohozinski, and Jonathan Zittrain, Access Controlled is essential reading for anyone studying the methods governments are using globally to stifle online expression and dissent. As I noted of their previous edition, Access Denied: The Practice and Policy of Global Internet Filtering, there is simply no other resource out there like this; it should be required reading in every cyberlaw or information policy program.

The book, which is a project of the OpenNet Initiative (ONI), is divided into two parts. Part 1 of the book includes six chapters on “Theory and Analysis.”  They are terrifically informative essays, and the editors have made them all available online here (I’ve listed them down below with links embedded). The beefy second part of the book provides a whopping 480 pages(!) of detailed regional and country-by-country overviews of the global state of online speech controls and discusses the long-term ramifications of increasing government meddling with online networks.

In their interesting chapter on “Control and Subversion in Russian Cyberspace,” Deibert and Rohozinski create a useful taxonomy to illustrate the three general types of speech and information controls that states are deploying today. What I find most interesting is how, throughout the book, various authors document the increasing movement away from “first generation controls,” which are epitomized by “Great Firewall of China”-like filtering methods, and toward second- and third-generation controls, which are more refined and difficult to monitor. Here’s how Deibert and Rohozinski define those three classes (or “generations”) of controls:

  • First-generation controls focus on denying access to specific Internet resources by directly blocking access to servers, domains, keywords, and IP addresses. This type of filtering is typically achieved by the use of specialized software or by implementing instructions manually into routers at key Internet choke points. First-generation filtering is found throughout the world, in particular among authoritarian countries, and is the phenomenon targeted for monitoring by the ONI’s methodology. In some countries, compliance with first-generation filtering is checked manually by security forces, who physically police cybercafes and ISPs. (p. 22)
  • Second-generation controls aim to create a legal and normative environment and technical capabilities that enable state actors to deny access to information resources as and when needed, while reducing the possibility of blowback or discovery. Second-generation controls have an overt and a covert track. The overt track aims to legalize content controls by specifying the conditions under which access can be denied. Instruments here include the doctrine of information security as well as the application of existent laws, such as slander and defamation, to the online environment. The covert track establishes procedures and technical capabilities that allow content controls to be applied “just in time,” when the information being targeted has the highest value (e.g., during elections or public demonstrations), and to be applied in ways that assure plausible deniability. (p. 24)
  • Unlike the first two generations of content controls, third-generation controls take a highly sophisticated, multidimensional approach to enhancing state control over national cyberspace and building capabilities for competing in informational space with potential adversaries and competitors. The key characteristic of third-generation controls is that the focus is less on denying access than successfully competing with potential threats through effective counter-information campaigns that overwhelm, discredit, or demoralize opponents. Third-generation controls also focus on the active use of surveillance and data mining as means to confuse and entrap opponents. (p. 27)
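To make the first category concrete, here is a minimal illustrative sketch (not drawn from the book, and with invented example values) of the logic behind crude first-generation filtering: a request is denied if it matches a static blocklist of IP addresses, domains, or keywords, much as a censoring router or filtering proxy would do at a national choke point.

```python
# Illustrative sketch of "first-generation" filtering logic.
# The blocklist entries below are hypothetical examples, not real targets.

BLOCKED_IPS = {"203.0.113.7"}
BLOCKED_DOMAINS = {"example-dissident.org"}
BLOCKED_KEYWORDS = {"protest"}

def is_blocked(ip: str, domain: str, url_path: str) -> bool:
    """Return True if a request would be denied under static blocklist rules."""
    # Exact matches on destination IP or domain name.
    if ip in BLOCKED_IPS or domain in BLOCKED_DOMAINS:
        return True
    # Substring keyword matching against the requested URL path.
    return any(kw in url_path.lower() for kw in BLOCKED_KEYWORDS)

print(is_blocked("203.0.113.7", "news.example.com", "/home"))     # True (IP match)
print(is_blocked("198.51.100.1", "news.example.com", "/sports"))  # False
```

The crudeness is the point: static lists like these are easy to detect, over-block heavily (any page whose URL contains a listed keyword is denied), and are exactly what the second- and third-generation approaches described above are designed to improve upon.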

Again, the country-by-country discussions contained in Part 2 of the book document how several nations are moving toward those more sophisticated second- and third-generation information control efforts, although it appears that the states of the Commonwealth of Independent States (CIS) are on the cutting edge so far. As Deibert and Rohozinski note in their opening overview chapter: “the center of gravity of practices aimed at managing cyberspace has shifted subtly from policies and practices aimed at denying access to content to methods that seek to normalize control and the exercise of power in cyberspace through a variety of means.” (p. 6)  They also note that, just in the short time since their previous volume was published (in 2008):

a sea change has occurred in the policies and practices of Internet controls. States no longer fear pariah status by openly declaring their intent to regulate and control cyberspace. The convenient rubric of terrorism, child pornography, and cyber-security has contributed to a growing expectation that states should enforce order in cyberspace, including policing unwanted content. (p. 4)

I don’t agree with all the conclusions in the book, of course. In particular, I don’t share the somewhat lugubrious outlook most of the contributors seem to hold toward the long-term prospects for “technologies of freedom” relative to “technologies of control.” I think it’s vital to put things in some historical context in this regard. It’s important to recall that, as a communications medium, the Net is still quite young.  So, is the Net really more susceptible to State control and manipulation than previous communications technologies and platforms?  I’m not so sure, although it’s hard to find a metric to compare them in an analytically rigorous fashion. However, I’m still quite bullish on the prospect for the “technologies of freedom” that are already out there (and those yet to be developed) to help people evade many of the technologies of control being utilized by States across the globe today.

The contributors in Access Controlled don’t really come to any definitive conclusion on this issue, but some of them seem to imply that the Net is more easily manipulated than past technologies. For example, in Chapter 3, Hal Roberts and John Palfrey speak of the Internet as “surveillance-ready technology.” (p. 35).  It’s certainly true that the State has access to more data about its citizens than in the past, but it’s also true that we have more information about the State than ever before, too!  And, again, we also have access to more of those “technologies of freedom” than ever before to at least try to fight back. Compare, for example, the plight of a dissident in a Cold War-era Eastern Bloc communist state to a dissident in China or Iran today. Which one had a better chance of getting their words (or audio and video) out to the local or global community?  But let me be clear about something: I am not one of those quixotic utopians who thinks that the whole world is going to magically become more democratic and free overnight because of the existence of blogs, mobile phones, wireless networks, SMS, Twitter, YouTube, encryption, proxy servers, etc.  Nonetheless, aren’t we citizens of the modern world at least a little better off for having such technologies at our potential disposal?

Moreover, what about the scale and volume problem that would-be censors increasingly face?  Again, let’s remember how young the Net is and how many people aren’t using it aggressively (or at all) yet. The challenge of bottling up information — or even tracking or monitoring it — is going to grow exponentially more difficult as more people get online, networks expand, digital technologies fall in cost and grow more ubiquitous, and the overall volume of data flows continues to expand.  What sort of armies of censors and surveillance officers are going to be needed going forward to keep up with this pace of change?  Ethan Zuckerman’s chapter on “Intermediary Censorship” in Access Controlled discusses one answer that many nation-states are turning to in an effort to solve that problem: Make the middleman do it. Deputizing the middleman has been used in many contexts before, of course, but the problem for the State is that (a) the middlemen typically resent doing that sort of censoring / surveillance and (b) it is only going to grow more costly and convoluted for those middlemen to carry out the will of the State as the scale and volume problems identified above manifest themselves.

Of course, one could argue that the censoring & surveillance technologies are going to continue to grow more robust, too, and that the middlemen will always fall in line with the State’s desires if the penalties for non-compliance are steep enough.  But I can’t believe that’s how it will play out over the long haul.  At some point, something’s got to give. The technological arms race between the State and its citizens will continue to escalate, but I remain optimistic that we will live not in an “access controlled” world, but in an “access-sorta-controlled-but-with-lots-of-holes-in-it” kind of world.

Anyway, these are not major reservations that should keep you from reading Access Controlled. Indeed, it may have been for the best that the editors and contributors chose not to go down this line of inquiry since it would have made a long book even longer and forced the contributors to divert from their generally objective positions.

I have only two other little nitpicks with Access Controlled. First, I do not understand why the editors decided to dump the excellent old chapter from Access Denied on “Tools and Technologies of Net Filtering,” which contained some very useful schematics explaining how technologies of control work. [You can see what I mean here.]  I used to recommend that chapter to students and journalists all the time as the first stop in their investigation of online censorship issues. I hope the editors decide to update that chapter and include it in the next version of the book.  Second, I was quite surprised there wasn’t more discussion of HerdictWeb in the book. Herdict, which I have praised here in the past, “seeks to present a real-time picture of Web site accessibility and inaccessibility… by crowdsourcing data from individuals around the world.”  I think I only saw one mention of Herdict in Access Controlled.  I thought it would figure more prominently in this version of the book.

Those small quibbles aside, I want to congratulate all the editors and contributors on this marvelous volume.  Access Controlled is an indispensable resource that I can wholeheartedly endorse as a “must-have” for your info-tech policy bookshelf.  Buy it now.


Chapter 1

Chapter 2

Chapter 3

Chapter 4

Chapter 5

Chapter 6
