Technology Liberation Front: Keeping politicians' hands off the Net & everything else related to technology

Video: Censorship is a Big Government Problem, Not a Big Tech Problem (December 7, 2022)

My colleague Wayne Brough and I recently went on the “Kibbe on Liberty” show to discuss the state of free speech on the internet. We explained how censorship is a Big Government problem, not a Big Tech problem. Here’s the complete description of the show; the link to the full episode is below.

“With Elon Musk’s purchase of Twitter, we are in the middle of a national debate about the tension between censorship and free expression online. On the Right, many people are calling for government to rein in what they perceive as the excesses of Big Tech companies, while the Left wants the government to crack down on speech they deem dangerous. Both approaches make the same mistake of giving politicians authority over what we are allowed to say and hear. And with recent revelations about government agents leaning on social media companies to censor speech, it’s clear that when it comes to the online conversation, there’s no such thing as a purely private company.”

For more on these issues, please see: “The Classical Liberal Approach to Digital Media Free Speech Issues.”

The Classical Liberal Approach to Digital Media Free Speech Issues (December 8, 2021)

On December 13th, I will be participating in an Atlas Network panel on “Big Tech, Free Speech, and Censorship: The Classical Liberal Approach.” In anticipation of that event, I have also just published a new op-ed for The Hill entitled “Left and right take aim at Big Tech — and the First Amendment.” In this essay, I expand upon that op-ed and discuss the growing calls from both the Left and the Right for a variety of new content regulations. I then outline the classical liberal approach to concerns about free speech on private platforms more generally, which ultimately comes down to the proposition that innovation and competition are always superior to government regulation when it comes to content policy.

In the current debates, I am particularly concerned with calls by many conservatives for more comprehensive governmental controls on speech policies enforced by various private platforms, so I will zero in on those efforts in this essay. First, here’s what both the Left and the Right share in common in these debates: Many on both sides of the aisle desire more government control over the editorial decisions made by private platforms. They both advocate more political meddling with the way private firms make decisions about what types of content and communications are allowed on their platforms. “In today’s hyper-partisan world,” I argue in my Hill column, “tech platforms have become just another plaything to be dominated by politics and regulation. When the ends justify the means, principles that transcend the battles of the day — like property rights, free speech and editorial independence — become disposable. These are things we take for granted until they’ve been chipped away at and lost.”

Despite a shared objective for greater politicization of media markets, the Left and the Right part ways quickly when it comes to the underlying objectives of expanded government control. As I noted in my Hill op-ed:

there is considerable confusion in the complaints both parties make about “Big Tech.” Democrats want tech companies doing more to limit content they claim is hate speech, misinformation, or that incites violence. Republicans want online operators to do less, because many conservatives believe tech platforms already take down too much of their content.

This makes life very lonely for free speech defenders and classical liberals. Usually in the past, we could count on the Left to be with us in some free speech battles (such as putting an end to “indecency” regulations for broadcast radio and television), while the Right would be with us on others (such as opposition to the “Fairness Doctrine,” or similar mandates). Today, however, it is more common for classical liberals to be fighting with both sides about free speech issues.

My focus is primarily on the Right because, with the rise of Donald Trump and “national conservatism,” there seems to be a lot of soul-searching going on among conservatives about their stance toward private media platforms, and the editorial rights of digital platforms in particular.

In my new Hill essay and other articles (all of which are listed below), I argue there is a principled classical liberal approach to these issues that was nicely outlined by President Ronald Reagan in his 1987 veto of Fairness Doctrine legislation, when he said:

History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.

Let’s break that line down. Reagan admits that media bias can be a real thing. Of course it is! Journalists, editors, and even the companies they work for all have specific views. They all favor or disfavor certain types of content. But, at least in the United States, the editorial decisions made by these private actors are protected by the First Amendment. Section 230 is really quite secondary to this debate, even though some Trumpian conservatives wrongly suggest that it’s the real problem here. In reality, national conservatives would need to find a way to work around well-established First Amendment protections if they wanted to impose new restrictions on the editorial rights of private parties.

But why would they want to do that? Returning to the Reagan veto statement, we should remember how he noted that, even if the First Amendment did not protect the editorial discretion of private media platforms, bureaucratic regulation was not the right answer to the problem of “bias.”  Competition and choice were the superior answer. This is the heart and soul of the classical liberal perspective: more innovation is always superior to more regulation.

For the past 30 years, conservatives and classical liberals were generally aligned on that point. But the ascendancy of Donald Trump created a rift in that alliance that now threatens to grow into a chasm as more and more Right-of-center people begin advocating for comprehensive control of media platforms.

The problems with that approach are numerous, beginning with the fact that none of the old rationales for media controls work anymore (and most of them never did). Consider the old arguments justifying widespread regulation of private media:

  • “Scarcity” was the oldest justification for media regulation, but we live in the exact opposite world today, in which the most common complaint about media is the abundance of it!
  • Conversely, the supposed “pervasiveness” of some media (namely broadcasting) was used as a rationale for government censorship in the past. But that, too, no longer works because in today’s crowded media marketplace and Internet-enabled world, all forms of communications and entertainment are equally pervasive to some extent.
  • State ownership and licensing of spectrum was another rationale for control that no longer works. No digital media platforms need federal licenses to operate today. So, that hook is also gone. Moreover, the answer to the problem of government ownership of media is to stop letting the government own and control media assets, including spectrum.
  • “Fairness” is another old excuse for control, with some regulatory advocates suggesting that five unelected bureaucrats at the Federal Communications Commission (or some other agency) are well-suited to “balance” the airing of viewpoints on media platforms. Of course, America’s disastrous experience with the Fairness Doctrine proved just how wrong that thinking was. [I summarize all the evidence proving that here.]

That leaves a final, more amorphous rationale for media control: “gatekeeper” concerns and assertions that private media platforms can essentially become “state actors.” In the wake of Donald Trump’s “de-platforming” from Facebook and Twitter, many of his supporters began adopting this language in defense of more aggressive government control of private media platforms, including the possibility of declaring those platforms common carriers and demanding that some sort of amorphous “neutrality” mandates be imposed on them. But as Berin Szóka and Corbin Barthold of TechFreedom note:

Where courts have upheld imposing common carriage burdens on communications networks under the First Amendment, it has been because consumers reasonably expected them to operate conduits. Not so for social media platforms. [...] When it comes to the regulation of speech on social media, however, the presumption of content neutrality does not apply. Conservatives present their criticism of content moderation as a desire for “neutrality,” but forcing platforms to carry certain content and viewpoints that they would prefer not to carry constitutes a “content preference” that would trigger strict scrutiny. Under strict scrutiny, any “gatekeeper” power exercised by social media would be just as irrelevant as the monopoly power of local newspapers was in [previous Supreme Court holdings].

Put simply, efforts to stretch extremely narrow and limited common carriage precedents to fit social media just don’t work. We’ve already seen lower courts say as much when blocking the enforcement of recent conservative-led efforts in Florida and Texas to limit the editorial discretion of private social media platforms. If conservatives really hope to get around these legal barriers to regulation, they would need a more far-reaching strike at the First Amendment itself. That would entail a jurisprudential revolution at the Supreme Court, reversing about a century of free speech precedents, or some sort of effort to amend the First Amendment itself. These things are almost certainly not going to occur.

But, again, this hasn’t stopped some conservatives from pitching extreme solutions in their efforts to regulate digital media at both the state and federal level. I discuss these efforts in previous essays: “How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality,” “Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet,” and “The White House Social Media Summit and the Return of ‘Regulation by Raised Eyebrow.’” Perhaps some Trump-aligned conservatives understand that these legislative efforts are unlikely to work, but they continue to push them in an attempt to make life hell for tech platforms, or perhaps just to troll the Left and “own the Libs.”

On the other hand, some conservatives seem to really believe in some of the extreme ideas they are tossing around. What is particularly troubling about these efforts is the way some conservatives, following Trump’s lead, including even more mainstream conservative groups like the Heritage Foundation, are increasingly referring to private media platforms as “the enemy of the people.” That’s the kind of extremist language typically used by totalitarian thugs and Marxist lunatics who so hate private enterprise and freedom of speech that they are willing to adopt a sort of burn-the-village-to-save-it rhetorical approach to media policy.

And speaking of Marxists, here’s what is even more incredible about these efforts by some conservatives to use such rationales in support of comprehensive media regulation: It is all based on the “media access” playbook concocted by radical Leftist scholars a generation ago. As I summarized in my essay, “The Surprising Ideological Origins of Trump’s Communications Collectivism”:

Media access advocates look to transform the First Amendment into a tool for social change to advance specific political ends or ideological objectives. Media access theory dispenses with both the editorial discretion rights and private property rights of private speech platforms. Private platforms become subject to the political whims of policymakers who dictate “fair” terms of access. We can think of this as communications collectivism.

Media access doctrine is rooted in an arrogant, elitist, anti-property, anti-freedom ethic that suggests the State is in a better position to dictate what can and cannot be said on private speech platforms. “It’s astonishing, yet nonetheless true,” I continued in that essay, “that the ideological roots of Trump’s anti-social media campaign lie in the works of those extreme Leftists and even media Marxists. He has just given media access theory his own unique nationalistic spin and sold this snake oil to conservatives.” Yet Trump and other national conservatives are embracing this contemptible doctrine because, now more than ever, the ends apparently justify the means in American politics. Never mind that all this could come back to haunt them when the Left somehow leverages this regulatory apparatus to control Fox News or other sites and content that conservatives favor! Once media platforms are viewed as just another thing to be controlled by politics, the only questions are whose politics, and how those politics will be enforced. Certainly the Left and the Right cannot both have their way, given all that currently divides them.

Finally, what is utterly perplexing about all this is how much thanks national conservatives really owe to the major digital platforms they now seek to destroy. As I noted in my new Hill op-ed:

There has never been more opportunity for conservative viewpoints than right now. Each day on Facebook, the top-10 most shared links are dominated by pundits such as Ben Shapiro, Dan Bongino, Dinesh D’Souza and Sean Hannity. Right-leaning content is shared widely on Twitter each day. Websites like Dailywire.com and Foxnews.com get far more traffic than the New York Times or CNN.

Thus, conservatives might be shooting themselves in the foot if they were able to convince more legislatures to adopt the media access regulatory playbook, because it could have profound unintended consequences once the Left uses those tools to somehow restrict access to “hate speech” or “misinformation” and then defines those terms so broadly as to include much of the top material posted by conservatives on Facebook and Twitter every day.

Not all conservatives have drunk the media access Kool-Aid. In the wake of Trump’s deplatforming from a few major sites, a wave of new Right-leaning digital services are being planned or have already launched. (Axios and Forbes recently summarized some of these efforts.) I don’t know which of these efforts will succeed, but more competition and platform-building are certainly superior to current calls by some Trump supporters for government regulation of mainstream social media services.

Again, this is the old Reagan vision at its finest! We can achieve a better media landscape “only through the freedom and competition that the First Amendment sought to guarantee,” not through bureaucratic regulation. It remains the principled path forward.


Additional Reading:

Older essays & testimony:

A Good Time to Re-Read Reagan’s Fairness Doctrine Veto (October 17, 2020)

[Image: Ronald Reagan's presidential portrait, circa 1981]

With many conservative policymakers and organizations taking a sudden pro-censorial turn and suggesting that government regulation of social media platforms is warranted, it’s a good time for them to re-read President Ronald Reagan’s 1987 veto of Fairness Doctrine legislation. Here’s the key line:

History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.

That wisdom is just as applicable today when some conservatives suggest that government intervention is needed to address what they regard as “bias” or “unfair” treatment on Twitter, Facebook, YouTube, or whatever else. Ignoring the fact that such meddling would likely violate property rights and freedom of contract — principles that most conservatives say they hold dear — efforts to empower the Federal Communications Commission, the Federal Trade Commission, or other regulators would be hugely misguided on First Amendment grounds.

President Reagan understood that there was a better way to approach these issues that was rooted in innovation and First Amendment protections. Here’s hoping that conservatives remember his sage advice. Read his entire veto message here.

Additional Reading:

6 Ways Trump’s Social Media Executive Order Betrays Conservative Principles (June 5, 2020)

[Co-authored with Connor Haaland and originally published on The Bridge as, “Do Our Leaders Believe in Free Speech and Online Freedom Anymore?”]

[Image: "The president is a counterpuncher": Trump on familiar ground in ...]

A major policy battle has developed regarding the wisdom of regulating social media platforms in the United States, with the internet’s most important law potentially in the crosshairs. Leaders in both major parties are calling for sweeping regulation.

Specifically, President Trump and his presumptive opponent in the coming presidential election, former Vice President Joe Biden, have both called for “Section 230” of the Communications Decency Act to be repealed. Last week, the president took a misguided step in this direction by signing an executive order that, if fully carried out, will result in significantly greater regulation of the internet and of speech.

A Growing Call to Regulate Internet Platforms

The ramifications of these threats and steps could not be more profound. Without Section 230—also known as “the 26 words that created the internet”—we would have a much less advanced internet ecosystem. Twitter, Facebook, YouTube, and Wikipedia would have never grown as quickly. Indeed, the repeal of Section 230 would mean many fewer jobs, less information distribution, and, frankly, less joy.

Shockingly, by backing Trump’s recent push for regulating these internet platforms, many conservatives are betraying their own principles—the ones that support freedom of expression and the ability to run private businesses without government interference.

Section 230 limits the liability online intermediaries face for the content and communications that travel over their networks. The immunities granted by Section 230 let online speech and commerce flow freely, without the constant threat of legal action or onerous liability looming overhead for digital platforms. To put it another way, without this provision, today’s vibrant internet ecosystem likely would not exist.

For completely different reasons, however, Biden and Trump want it axed. “Section 230 should be revoked, immediately should be revoked, number one. For [Facebook CEO Mark] Zuckerberg and other platforms,” said Biden in a New York Times interview. Like many other Democrats, Biden wants social media platforms to do far more to block speech they find to be offensive in various ways. If they fail to do more, Biden and other Democrats want Sec. 230 revised or repealed.

In contrast, Trump and his allies want these same platforms to do far less to curate content. Although lacking any empirical evidence, they allege that massive anti-conservative bias exists across today’s most popular platforms. As a result, they want Sec. 230 gutted. “Repeal 230,” said Trump in a tweet. Tensions reached a boiling point last week following a public fight between the president and Twitter after the social networking platform on May 27 added a fact-check notice to one of the president’s tweets about the supposed dangers of mail-in voting.

Retaliating Against Social Media

On May 28, Trump struck back against Twitter by signing an executive order on “preventing online censorship.” The EO cited Twitter six times but also went after Facebook, Instagram, and YouTube by name. Paradoxically, it also noted that the “freedom to express and debate ideas is the foundation for all of our rights as a free people,” even though the order will result in arbitrary government rule over our free speech rights.

Indeed, Trump’s executive order runs afoul of traditional conservative principles in several ways:

  1. It expands the power of the government by delegating more authority to the administrative state and expanding arbitrary bureaucratic rule and regulatory abuse. It encourages the Federal Communications Commission (FCC) and the Federal Trade Commission to take a more active interest in content policy decisions, which is of dubious legality. Section 3 of the EO also says the Department of Justice “shall review the viewpoint-based speech restrictions imposed by each online platform identified in the report … and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.” (emphasis added)

    What do other bad practices entail, and who in the government gets to make the call? It is not prudent to delegate authority over something as sacred as our rights to free speech to unelected government bureaucrats. Such power will stifle civil discourse and increase the possibility for special interests to co-opt the government by using its power for their own desires.
  2. It undermines property rights of private companies by letting Big Government dictate how they use their business platforms. Carrying out the president’s executive order would amount to a taking of private property by the government, an action that conservatives have historically loathed. Our Founding Fathers considered property rights to be the cornerstone of a free and just society, yet Trump pays that fact little respect in this EO, running afoul of a centuries-old American tradition.

  3. It will encourage frivolous lawsuits. By gutting Sec. 230, a law that protects online platforms from punishing liability for third-party speech, Trump’s EO would empower trial lawyers. We are already too litigious a country, filing over 80 million cases in state courts every year, and we do not need another reason to be in the courtroom. Repealing 230 would open the floodgates to endless lawsuits about online speech and clog up our judicial system, using resources that could be directed to more important matters.

  4. It undermines free speech and would likely hurt conservative voices most. Trump’s executive order makes a mockery of the First Amendment by applying the Fairness Doctrine and net neutrality notions to social media, regulations that conservatives have vociferously opposed. A recent lawsuit filed by the Center for Democracy and Technology that seeks to challenge the EO alleges this exact point, saying it could chill free speech. In the past we have seen such concepts applied arbitrarily, harming free speech and media competition.

    For instance, our colleague Brent Skorup has written on how the FCC exploited another arbitrary rule: the “public interest” standard. As a paradigmatic example of how arbitrary regulatory power can harm free speech, he points to a documentary portraying former Sen. John Kerry in a negative light that was taken off the air thanks to the authority of the public interest standard. The EO also undermines platforms that have greatly amplified conservative voices in recent years. On Facebook, for instance, 7 of the top 10 most cited news outlets were conservative. Meanwhile, Trump and other conservative leaders have tapped the power of Twitter to directly communicate with their base. The EO would therefore likely result in much conservative content being removed quickly to avoid legal hassles with regulators or the courts.
  5. The combined effect of all these other factors will undermine the global competitiveness of US-based firms, potentially benefiting Chinese internet companies the most. Willingly giving up a comparative advantage would be foolish, considering how America’s tech companies are the envy of the world. Not only does the EO affect existing social platforms, but it could stifle innovation throughout the digital economy moving forward. Who wants to try and innovate in a field that is subject to regulations that can change on a president’s whim?

  6. It could be used by future politicians against conservative platforms, like Fox News and other right-leaning outlets. This is clearly not the intent of Trump’s executive order, but that will eventually be the result nonetheless. Going forward, we will have different presidents with different political outlooks. When making laws, regulations, and executive orders, it is always important to consider how they could be applied by successive administrations with opposite political and ideological stripes.

Today’s social media platforms are not perfect, but it is impossible for them to please everyone. There is no Goldilocks formula whereby they can get speech policies just right and make everyone happy. Instead, the ideal policy for speech platforms is: Let a thousand flowers bloom. One-size-fits-all content management and community standards shouldn’t be the goal. We need diverse platforms and approaches for a diverse citizenry.

But when presidential candidates and their allies line up in support of repealing Sec. 230 and opening the door to speech controls, the end result will be homogenized conformity with the will of those in power. That’s a horrible result for a nation that values diversity of opinion and freedom of speech, and it will only end up hurting those who seek to change the conversation.

Also see: Brent Skorup, “The Section 230 Executive Order, Free Speech, and the FCC,” Technology Liberation Front, June 3, 2020.

The Surprising Ideological Origins of Trump’s Communications Collectivism (May 28, 2020)

President Trump and his allies have gone to war with social media sites and digital communications platforms like Twitter, Facebook, and Google. Decrying supposed anti-conservative “bias,” Trump has even floated an Executive Order aimed at “Preventing Online Censorship” that entails many new forms of government meddling with these private speech platforms. Section 230 is in their crosshairs, and First Amendment restraints are being thrown to the wind.

Various others have already documented the many legal problems with Trump’s call for greater government oversight of private speech platforms. I want to focus on something slightly different here: the surprising ideological origins of what Trump and his allies are proposing. For those of us who are old-timers and have followed communications and media policy for many decades, this moment feels like deja vu all over again, but with the strange twist that supposed “conservatives” are calling for a form of communications collectivism that used to be the exclusive province of hard-core Leftists.

To begin, the truly crazy thing about President Trump and some conservatives saying that social media should be regulated as public forums is not just that they’re abandoning free speech rights, it’s that they’re betraying property rights, too. Treating private media like a “public square” entails a taking of private property. Amazingly, Trump and his followers have taken over the old “media access movement” and given it their own spin.

Media access advocates look to transform the First Amendment into a tool for social change to advance specific political ends or ideological objectives. Media access theory dispenses with both the editorial discretion rights and private property rights of private speech platforms. Private platforms become subject to the political whims of policymakers who dictate “fair” terms of access. We can think of this as communications collectivism.

The media access movement’s regulatory toolkit includes things like the Fairness Doctrine and “neutrality” requirements, right-of-reply mandates, expansive conceptions of common carriage (using “public forum” or “town square” rhetoric), agency threats, and so on. Even without formal regulation, media access theorists hope that jawboning and political pressure can persuade private platforms to run more (or perhaps sometimes less) of the content that they want (or don’t) on media platforms.

The intellectual roots of the media access movement were planted by leftist media theorists like Jerome Barron and Owen Fiss in the 1960s and 1970s, and later by Marxist communications scholar Robert McChesney. In 2005, I penned a short history of the media access movement and explored its aims. I also wrote two books with chapters on the dangers of media access theory and calls for collectivizing communications and media systems: Media Myths (2005) and A Manifesto for Media Freedom (2008, with Brian C. Anderson). The key takeaway from those essays is that the media access movement comes down to control.

The best book ever written about the dangers of the media access movement was Jonathan Emord’s 1991 Freedom, Technology and the First Amendment. He perfectly summarizes the movement’s goals (and now Trump’s) as follows:

  • “In short, the access advocates have transformed the marketplace of ideas from a laissez-faire model to a state-control model.”
  • “Rather than understanding the First Amendment to be a guardian of the private sphere of communication, the access advocates interpret it to be a guarantee of a preferred mix of ideological viewpoints.”
  • “It fundamentally shifts the marketplace of ideas from its private, unregulated, and interactive context to one within the compass of state control, making the marketplace ultimately responsible to government for determinations as to the choice of content expressed.”

“This arrogant, elitist, anti-property, anti-freedom ethic is what drives the media access movement and makes it so morally repugnant,” I argued in that old TLF essay. That is still just as true today, even when it’s conservatives calling for collectivization of media.

It’s astonishing, yet nonetheless true, that the ideological roots of Trump’s anti-social media campaign lie in the works of those extreme Leftists and even media Marxists. He has just given media access theory his own unique nationalistic spin and sold this snake oil to conservatives.

There certainly could come a day when his opponents on the Left take this media access playbook up again and suggest it is exactly what’s needed for Fox News and other right-leaning media outlets. If and when that happens, Trump and other conservatives will have no one to blame but themselves for embracing this contemptible philosophical vision simply because it suited their short-term desires while they were in power.

I hope that conservatives rethink their embrace of communications collectivism, but I fear that Trump and his allies have already convinced themselves that the ends justify the means when it comes to advancing their causes or even just “owning the libs.” But there really is a strong moralistic slant to what Trump and many of his allies want. They think they are on the right side of history and that their opponents, including most media outlets and platforms, are evil. Trump and his allies have repeatedly referred to the press as the “enemy of the American people” and endlessly lambasted social media platforms for not going along with his desires. This reflects a core tendency of all communications collectivists: a sort of ‘you’re-either-with-us-or-against-us’ attitude.

Steve Bannon scripted all this out back in 2018. Go back and read this astonishing CNN interview for a preview of what could happen next. Here’s the rundown:

  • Bannon said Big Tech’s data should be seized and put in a “public trust.” Specifically, Bannon said, “I think you take [the data] away from the companies. All that data they have is put in a public trust. They can use it. And people can opt in and opt out. That trust is run by an independent board of directors. It just can’t be that [Big Tech is] the sole proprietors of this data…I think this is a public good.” Bannon added that Big Tech companies “have to be broken up” just like Teddy Roosevelt broke up the trusts.
  • Bannon attacked the executives of Facebook, Twitter and Google. “These are run by sociopaths,” he said. “These people are complete narcissists. These people ought to be controlled, they ought to be regulated.” At one point during the phone call, Bannon said, “These people are evil. There is no doubt about that.”
  • Bannon said he thinks “this is going to be a massive issue” in future elections. He said he thinks it will probably take until 2020 to fully blossom as a campaign issue, explaining, “I think by the time 2020 comes along, this will be a burning issue. I think this will be one of the biggest domestic issues.”

This is now Trump’s playbook. It’s incredibly frightening because, once married up with Trump’s accusations of election fraud and other imagined conspiracies, you can sense how he’s laying the groundwork to call into question future election results by suggesting that both traditional media and modern digital media platforms are just in bed with the Democratic party and trying to rig the presidential election. I don’t really want to think about what happens if this situation escalates to that point. These are very dark days for the American Republic.

]]>
https://techliberation.com/2020/05/28/the-surprising-ideological-origins-of-trumps-communications-collectivism/feed/ 0 76742
An Epic Moral Panic Over Social Media https://techliberation.com/2019/05/30/an-epic-moral-panic-over-social-media/ https://techliberation.com/2019/05/30/an-epic-moral-panic-over-social-media/#comments Thu, 30 May 2019 17:36:14 +0000 https://techliberation.com/?p=76493

[This essay originally appeared on the AIER blog on May 28, 2019. USA TODAY also ran a shorter version of this essay as a letter to the editor on June 2, 2019.]

In a hotly worded USA Today op-ed last week, Senator Josh Hawley (R-Missouri) railed against the social media sites Facebook, Instagram, and Twitter. He argued that “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” Sen. Hawley refers to these sites as “parasites” and blames them for a litany of social problems (including an unproven link to increased suicide), leading him to declare that “we’d be better off if Facebook disappeared.”

As far as moral panics go, Sen. Hawley’s will go down as one for the ages. Politicians have always castigated new technologies, media platforms, and content for supposedly corrupting the youth of their generation. But Sen. Hawley’s inflammatory rhetoric and proposals are something we haven’t seen in quite some time.

He sounds like those fire-breathing politicians and pundits of the past century who vociferously protested everything from comic books to cable television, the waltz to the Walkman, and rock-and-roll to rap music. In order to save the youth of America, many past critics said, we must destroy the media or media platforms they are supposedly addicted to. That is exactly what Sen. Hawley would have us do to today’s leading media platforms because, in his opinion, they “do our country more harm than good.”

We have to hope that Sen. Hawley is no more successful than past critics and politicians who wanted to take these choices away from the public. Paternalistic politicians should not be dictating content choices for the rest of us or destroying technologies and platforms that millions of people benefit from.

Addiction Panics: We’ve Been Here Before

Ironically, Sen. Hawley isn’t even right about what the youth of America are apparently obsessed with. Most kids view Facebook and Twitter as places where old people hang out. My teenage kids laugh when I ask them about those sites. Pew Research polling finds that many younger users are increasingly deleting Facebook (if they used it at all) or flocking to other platforms, such as Snapchat or YouTube.

But shouldn’t we be concerned with kids overusing social media more generally? Yes, of course we should—but that’s no reason to call for their outright elimination, as Sen. Hawley recommends. Such rhetoric is particularly concerning at a time when critics are proposing a “break up” of tech companies. Sen. Hawley sits on the U.S. Senate Judiciary Committee’s Subcommittee on Antitrust, Competition Policy and Consumer Rights. It is likely he and others will employ these arguments to fan the flames of regulatory intervention or antitrust action against at least Facebook.

Forcing social media sites to “disappear” or be broken up is one of the worst ways to deal with these concerns. It is always wise to mentor our youth and teach them how to achieve a balanced media diet. Many youths—and many adults—are probably overusing certain technologies (smartphones, in particular) and over-consuming some types of media. For those truly suffering from addiction, it is worth considering targeted strategies to address that problem. However, that is not what antitrust law is meant to address.

Moreover, concerns about addiction and distraction have popped up repeatedly during past moral panics and we should take such claims with a big grain of salt. Sociologist Frank Furedi has documented how, “inattention has served as a sublimated focus for apprehensions about moral authority” going back to at least the early 1700s. With each new form of media or means of communication, the older generation taps into the same “kids-these-days!” fears about how the younger generation has apparently lost the ability to concentrate or reason effectively.

For example, in the past century, critics said the same thing about radio and television broadcasting, comparing them to tobacco in terms of addiction and suggesting that media companies were “manipulating” us into listening or watching. Rock-and-roll and rap music got the same treatment, and similar panics about video games are still with us today.

Strangely, many elites, politicians, and parents forget that they, too, were once kids and that their generation was probably also considered hopelessly lost in the “vast wasteland” of whatever the popular technology or content of the day was. The Pessimists Archive podcast has documented dozens of examples of this recurring phenomenon. Each generation makes it through the panic du jour, only to turn around and start lambasting newer media or technologies that they worry might be rotting their kids to the core. While these panics come and go, the real danger is that they sometimes result in concrete policy actions that censor content or eliminate choices that the public enjoys. Such regulatory actions can also discourage the emergence of new choices.

Missed Opportunity, or Marvelous Achievement?

Sen. Hawley makes another audacious assertion in his essay when he suggests that social media has not provided any real benefit to American workers or consumers. He says the rise of the Digital Economy has “encouraged a generation of our brightest engineers to enter a field of little productive value,” which he regards as “an opportunity missed for the nation.”

This is an astonishing statement, made more troubling by Hawley’s claim that all these digital innovators could have done far more good by choosing other professions. “What marvels might these bright minds have produced,” Hawley asks, “had they been oriented toward the common good?”

Why is it that Sen. Hawley gets to decide which professions are in “the common good”? This logic is insulting to all those who make a living in these sectors, but there is a deeper hubris in Sen. Hawley’s argument that social media does not serve “the common good.” Had some benevolent philosopher kings in Washington stopped the digital economy from developing over the past quarter century, would all those tech workers really have chosen more noble-minded and worthwhile professions? Could he or others in Congress really have had the foresight to steer us in a better direction?

In reality, U.S. tech companies produce high-quality jobs and affordable, collaborative communications platforms that are popular across the globe. In response to Sen. Hawley’s screed, the Internet Association, which represents America’s leading digital technology companies, noted that, in Sen. Hawley’s home state of Missouri alone, the Internet supports 63,000 jobs at 3,400 companies and contributed $17 billion in GDP to the state’s economy. Presumably, Sen. Hawley would not want to see those benefits “disappear” along with the social media sites that helped give rise to them.

But the Internet and social media have an equally profound impact on the entire U.S. economy, adding over 9,000 jobs and nearly 570 businesses to each metropolitan statistical area. The Digital Economy is a great American success story that is the envy of the world, not something to be lamented and disparaged as Sen. Hawley has.

For someone who believes that Facebook is a “drug” and a “parasite,” it is curious how active Sen. Hawley is on Facebook, as well as on Twitter. If he really believes that “we’d be better off if Facebook disappeared,” then he should lead by example and get off the sites. But that is a decision he will have to make for himself. He should not, however, make it for the rest of us.

]]>
https://techliberation.com/2019/05/30/an-epic-moral-panic-over-social-media/feed/ 2 76493
The Problem with Calls for Social Media “Fairness” https://techliberation.com/2018/09/06/the-problem-with-calls-for-social-media-fairness/ https://techliberation.com/2018/09/06/the-problem-with-calls-for-social-media-fairness/#respond Thu, 06 Sep 2018 16:12:00 +0000 https://techliberation.com/?p=76371

There has been an increasing outcry recently from conservatives that social media is conspiring to silence their voices. Leading voices, including President Donald Trump and Senator Ted Cruz, have started calling for legislative or regulatory actions to correct this perceived “bias.” But these calls for fairness ignore the importance of allowing such services to develop their own terms, the ability of users to decide which services to use, and the benefits that such services have delivered to conservatives.

Social media is becoming a part of our everyday lives, and recent events have only increased our general awareness of this fact. More than half of American adults log in to Facebook on a daily basis. As a result, some policymakers have argued that such sites are the new public square. In general, the First Amendment strictly limits what the government can do to limit speakers in public spaces and requires that such limits be applied equally to different points of view. At the same time, private entities are generally allowed to set terms regarding what speech may or may not be allowed on their own platforms.

The argument that modern-day websites are the new public square and must maintain a neutral viewpoint was recently rejected in a lawsuit between PragerU and YouTube. PragerU believed that its conservative viewpoint was being silenced by YouTube’s decision to place many of its videos in “restricted mode.” In this case, the court found that YouTube was still acting as a private service rather than one filling a typical government role. Other cases have similarly asserted that Internet intermediaries have First Amendment rights to reject or limit ads or content as part of their own rights to speak or not speak. Conservatives have long been proponents of property rights, freedom of association, and free markets. But now, faced with platforms choosing to exercise their rights, rather than defend those values and compete in the market, some “conservatives” are arguing for legislation or utilizing litigation to bully the marketplace of ideas into giving them a louder microphone. In fact, part of the purpose behind creating the liability immunity (known as Section 230) for such services was the principle that a variety of platforms would emerge with different standards and that new and diverse communities could be created and evolve to serve different audiences.

A similar idea of a need for equal content was previously enforced by the Federal Communications Commission (FCC) and known as “the fairness doctrine.” This doctrine required equal access for groups or individuals wanting to express opposing views on public issues. In the 1980s, Reagan-era Republicans led the charge against this doctrine, arguing that it violated broadcasters’ First Amendment rights and actually went against the public interest. In fact, many have pointed out that the removal of the fairness doctrine is what allowed conservative talk radio hosts like Rush Limbaugh to become major political forces. In the 2000s, when liberals suggested bringing back the fairness doctrine, conservatives were aghast and viewed it as an attack on conservative talk radio. Even now, President Trump has used social media as a way to deliver messaging and set his political agenda in a way that has never been done before. If anything, there are lower barriers to creating a new medium on the Internet than there are on the TV or radio airwaves. As a 2016 National Review article states, if conservatives are concerned with how they are being treated by existing platforms, “The goal should not be to create neutral spaces; it should be to create non-neutral spaces more attractive than existing non-neutral spaces.” In other words, rather than complaining that the odds are against them and demanding “equal time,” conservatives should try to compete by building more attractive platforms that promote the content moderation ideals they believe are best. But perhaps the problem is that they realize that any platform must ultimately face difficult or unpopular content moderation decisions.

Content moderation is no easy task. Even for small groups, differing beliefs can quickly result in grey areas that require difficult calls by an intermediary. For social media and other Internet intermediaries, dealing with such issues on a scale of millions of users, and with a global diversity of views about what is and isn’t acceptable, makes content moderation exponentially complicated. It is unsurprising that both human and machine-learning errors occur in making such decisions. AI might seem like a simple solution, but such filters are often unaware of context. For example, a Motherboard article recently pointed out the difficulty that those with last names like Weiner and Butts face when trying to register for accounts on websites with AI filters designed to prevent offensive language. Leaving the task of content moderation to humans is both incredibly difficult on the moderators and may result in inconsistent outcomes due to the large volume of content that must be moderated and differing interpretations of community standards. As Jason Koebler and Joseph Cox point out in their Motherboard article on the challenge of global-scale content moderation that Facebook is dealing with, “If you take a single case, and then think of how many more like it exist across the globe in countries that Facebook has even less historical context for, simple rules have higher and higher chances of unwanted outcomes.” It is quite clear that if we as a society can’t agree on our own definitions of things like hate speech or bullying in many cases, we cannot expect a third party, public or private, to make such decisions in a way that satisfies every perspective.

The Internet has helped the world truly create a marketplace of ideas. The barriers to entry are rather low and the medium is constantly evolving. Because of social media and the Internet more generally, conservative voices are able to reach a wider audience than ever before. Conservatives should be careful what they wish for with calls for “fairness,” because such power could actually prevent future innovation or new platforms and entrench the status quo instead.

]]>
https://techliberation.com/2018/09/06/the-problem-with-calls-for-social-media-fairness/feed/ 0 76371
Muddling Through: How We Learn to Cope with Technological Change https://techliberation.com/2014/06/17/muddling-through-how-we-learn-to-cope-with-technological-change/ https://techliberation.com/2014/06/17/muddling-through-how-we-learn-to-cope-with-technological-change/#comments Tue, 17 Jun 2014 17:38:18 +0000 http://techliberation.com/?p=74622

How is it that we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” so many well-established personal, social, cultural, and legal norms?

In recent years, I’ve spent a fair amount of time thinking through that question in a variety of blog posts (“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society”), law review articles (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”), opeds (“Why Do We Always Sell the Next Generation Short?”), and books (See chapter 4 of my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”).

It’s fair to say that this issue — how individuals, institutions, and cultures adjust to technological change — has become a personal obsession of mine and it is increasingly the unifying theme of much of my ongoing research agenda. The economic ramifications of technological change are part of this inquiry, of course, but those economic concerns have already been the subject of countless books and essays both today and throughout history. I find that the social issues associated with technological change — including safety, security, and privacy considerations — typically get somewhat less attention, but are equally interesting. That’s why my recent work and my new book narrow the focus to those issues.

Optimistic (“Heaven”) vs. Pessimistic (“Hell”) Scenarios

Modern thinking and scholarship on the impact of technological change on societies has been largely dominated by skeptics and critics.

In the past century, for example, French philosopher Jacques Ellul (The Technological Society), German historian Oswald Spengler (Man and Technics), and American historian Lewis Mumford (Technics and Civilization) penned critiques of modern technological processes that took a dour view of technological innovation and our collective ability to adapt positively to it. (Concise summaries of their thinking can be found in Christopher May’s edited collection of essays, Key Thinkers for the Information Society.)

These critics worried about the subjugation of humans to “technique” or “technics” and feared that technology and technological processes would come to control us before we learned how to control them. Media theorist Neil Postman was the most notable of the modern information technology critics and served as the bridge between the industrial era critics (like Ellul, Spengler, and Mumford) and some of today’s digital age skeptics (like Evgeny Morozov and Nick Carr). Postman decried the rise of a “technopoly” — “the submission of all forms of cultural life to the sovereignty of technique and technology” — that would destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.” We see that attitude on display in countless works of technological criticism since then.

Of course, there’s been some pushback from some futurists and technological enthusiasts. But there’s often a fair amount of irrational exuberance at work in their tracts and punditry. Many self-proclaimed “futurists” have predicted that various new technologies would produce a nirvana that would overcome human want, suffering, ignorance, and more.

In a 2010 essay, I labeled these two camps technological “pessimists” and “optimists.” It was a crude and overly simplistic dichotomy, but it was an attempt to begin sketching out a rough taxonomy of the personalities and perspectives that we often see pitted against each other in debates about the impact of technology on culture and humanity.

Sadly, when I wrote that earlier piece, I was not aware of a similar (and much better) framing of this divide that was developed by science writer Joel Garreau in his terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human. In that book, Garreau is thinking in much grander terms about technology and the future than I was in my earlier essay. He was focused on how various emerging technologies might be changing our very humanity and he notes that narratives about these issues are typically framed in “Heaven” versus “Hell” scenarios.

Under the “Heaven” scenario, technology drives history relentlessly, and in almost every way for the better. As Garreau describes the beliefs of the Heaven crowd, they believe that going forward, “almost unimaginably good things are happening, including the conquering of disease and poverty, but also an increase in beauty, wisdom, love, truth, and peace.” (p. 130) By contrast, under the “Hell” scenario, “technology is used for extreme evil, threatening humanity with extinction.” (p. 95) Garreau notes that what unifies the Hell scenario theorists is the sense that in “wresting power from the gods and seeking to transcend the human condition,” we end up instead creating a monster — or maybe many different monsters — that threatens our very existence. Garreau says this “Frankenstein Principle” can be seen in countless works of literature and technological criticism throughout history, and it is still very much with us today. (p. 108)

Theories of Collapse: Why Does Doomsaying Dominate Discussions about New Technologies?

Indeed, in examining the way new technologies and inventions have long divided philosophers, scientists, pundits, and the general public, one can find countless examples of that sort of fear and loathing at work. “Armageddon has a long and distinguished history,” Garreau notes. “Theories of progress are mirrored by theories of collapse.” (p. 149)

In that regard, Garreau rightly cites Arthur Herman’s magisterial history of apocalyptic theories, The Idea of Decline in Western History, which documents “declinism” over time. The irony of much of this pessimistic declinist thinking, Herman notes, is that:

In effect, the very things modern society does best — providing increasing economic affluence, equality of opportunity, and social and geographic mobility — are systematically deprecated and vilified by its direct beneficiaries. None of this is new or even remarkable. (p. 442)

Why is that? Why has the “Hell” scenario been such a dominant reoccurring theme in past writing and commentary throughout history, even though the general trend has been steady improvements in human health, welfare, and convenience?

There must be something deeply rooted in the human psyche that accounts for this tendency. As I have discussed in my new book as well as my big “Technopanics” law review article, our innate tendency to be pessimistic but also want to be certain about the future means that “the gloom-mongers have it easy,” as author Dan Gardner argues in his book, Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better. He continues on to note of the techno-doomsday pundits:

Their predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds, we want to believe the expert predicting a dark future is exactly right, because knowing that the future will be dark is less tormenting than suspecting it. Certainty is always preferable to uncertainty, even when what’s certain is disaster. (p. 140-1)

Similarly, in his new book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson notes that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.” (p. 283)

Another explanation is that humans are sometimes very poor judges of the relative risks to themselves or those close to them. Harvard University psychology professor Steven Pinker, author of The Blank Slate: The Modern Denial of Human Nature, notes:

The mind is more comfortable in reckoning probabilities in terms of the relative frequency of remembered or imagined events. That can make recent and memorable events—a plane crash, a shark attack, an anthrax infection—loom larger in one’s worry list than more frequent and boring events, such as the car crashes and ladder falls that get printed beneath the fold on page B14. And it can lead risk experts to speak one language and ordinary people to hear another. (p. 232)

Put simply, there exists a wide variety of explanations for why our collective first reaction to new technologies often is one of dystopian dread. In my work, I have identified several other factors, including: generational differences; hyper-nostalgia; media sensationalism; special interest pandering to stoke fears and sell products or services; elitist attitudes among intellectuals; and the so-called “third-person effect hypothesis,” which posits that when some people encounter perspectives or preferences at odds with their own, they are more likely to be concerned about the impact of those things on others throughout society and to call on government to “do something” to correct or counter those perspectives or preferences.

Some combination of these factors ends up driving the initial resistance we have seen to new technologies that disrupted long-standing social norms, traditions, and institutions. In the extreme, it results in that gloom-and-doom, sky-is-falling disposition in which we are repeatedly told how humanity is about to be steam-rolled by some new invention or technological development.

The “Prevail” (or “Muddling Through”) Scenario

“The good news is that end-of-the-world predictions have been around for a very long time, and none of them has yet borne fruit,” Garreau reminds us. (p. 148) Why not? Let’s get back to his framework for the answer. After discussing the “Heaven” (optimistic) and “Hell” (skeptical or pessimistic) scenarios cast about by countless tech writers throughout history, Garreau outlines a third, and more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”

That pretty much sums up my own perspective on things, and in the remainder of this essay I want to sketch out the reasons why I think the “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process.

As Garreau explains it, under the “Prevail” scenario, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he rightly notes. (p. 154) As John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:

technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.

It is this process of “constantly forming and reforming new dynamic equilibriums” that interests me most. In a recent exchange with Michael Sacasas – one of the most thoughtful modern technology critics I’ve come across – I noted that the nature of individual and societal acclimation to technological change is worthy of serious investigation if for no other reason than that it has continuously happened! What I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies disrupted our personal, social, economic, cultural, and legal norms.

In a response to me, Sacasas put forth the following admonition: “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.” This is undoubtedly true, but it does not undermine the reality of societal adaptation. What can we learn from this? What were the mechanics of that adaptive process? As social norms, personal habits, and human relationships were disrupted, what helped us muddle through and find a way of coping with new technologies? Likewise, as existing markets and business models were disrupted, how were new ones formulated in response to the given technological disruption? Finally, how did legal norms and institutions adjust to those same changes?

Of course, this raises an entirely different issue: What metrics are we using to judge whether “the changes were inconsequential or benign”? As I noted in my exchange with Sacasas, at the end of the day, it may be that we won’t be able to even agree on a standard by which to make that judgment and will instead have to settle for a rough truce about what history has to teach us that might be summed up by the phrase: “something gained, something lost.”

Resiliency: Why Do the Skeptics Never Address It (and Its Benefits)?

Nonetheless, I believe that while technological change often brings sweeping and quite consequential change, there is great value in the very act of living through it.

In my work, including my latest little book, I argue that humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

What we’re talking about here is resiliency. Andrew Zolli and Ann Marie Healy, authors of Resilience: Why Things Bounce Back, define resilience as “the capacity of a system, enterprise, or a person to maintain its core purpose and integrity in the face of dramatically changed circumstances.” (p. 7) “To improve your resilience,” they note, “is to enhance your ability to resist being pushed from your preferred valley, while expanding the range of alternatives that you can embrace if you need to. This is what researchers call preserving adaptive capacity—the ability to adapt to changed circumstances while fulfilling one’s core purpose—and it’s an essential skill in an age of unforeseeable disruption and volatility.” (p. 7-8, emphasis in original) Moreover, they note, “by encouraging adaptation, agility, cooperation, connectivity, and diversity, resilience-thinking can bring us to a different way of being in the world, and to a deeper engagement with it.” (p. 16)

Even if one doesn’t agree with all of that, again, I would think one would find great value in studying the process by which such adaptation happens precisely because it does happen so regularly. And then we could argue about whether it was all really worth it! Specifically, was it worth whatever we lost in the process (i.e., a change in our old moral norms, our old privacy norms, our old institutions, our old business models, our old laws, or whatever else)?

As Sacasas correctly argues, “That people before us experienced similar problems does not mean that they magically cease being problems today.” Again, quite right. On the other hand, the fact that people and institutions learned to cope with those concerns and become more resilient over time is worthy of serious investigation because somehow we “muddled through” before and we’ll have to muddle through again. And, again, what we learned from living through that process may be extremely valuable in its own right.

Of Course, Muddling Through Isn’t Always Easy

Now, let’s be honest about this process of “muddling through”: it isn’t always neat or pretty. To put it crudely, sometimes muddling through really sucks! Think about the modern technologies that violate our visceral sense of privacy and personal space today. I am an intensely private person, and if I had a life motto it would probably be: “Leave Me Alone!” Yet sometimes there’s just no escaping the pervasive reach of modern technologies and processes. On the other hand, I know that, like so many others, I derive amazing benefits from all these new technologies, too. So, like most everyone else, I put up with the downsides because, on net, there are generally more upsides.

Almost every digital service that we use today presents us with these trade-offs. For example, email has allowed us to connect with a constantly growing universe of our fellow humans and organizations. Yet spam clutters our mailboxes, and the sheer volume of email we get sometimes overwhelms us. Likewise, in just the past five years, smartphones have transformed our lives in so many ways for the better, in terms of not just personal convenience but also personal safety. On the other hand, smartphones have become more than a bit of a nuisance in certain environments (theaters, restaurants, and other closed spaces). And they also put our safety at risk when we use them while driving.

But, again, we adjust to most of these new realities and then we find constructive solutions to the really hard problems – yes, and that sometimes includes legal remedies to rectify serious harms. But a certain amount of social adaptation will, nonetheless, be required. Law can only slightly slow that inevitability; it can’t stop it entirely. And as messy and uncomfortable as muddling through can be, we have to (a) be aware of what we gain in the process and (b) ask ourselves what the cost of taking the alternative path would be. Attempts to throw a wrench in the works and derail new innovations or delay various types of technological change are always going to be tempting, but such interventions will come at a very steep cost: less entrepreneurialism, diminished competition, stagnant markets, higher prices, and fewer choices for citizens. As I note in my new book, if we spend all our time living in constant fear of worst-case scenarios — and premising public policy upon such fears — it means that many best-case scenarios will never come about.

Social Resistance / Pressure Dynamics

There’s another part to this story that often gets overlooked. “Muddling through” isn’t just some sort of passive process where individuals and institutions have to figure out how to cope with technological change. Rather, there is an active dynamic at work, too. Individuals and institutions push back and actively shape their tools and systems.

In a recent Wired essay on public attitudes about emerging technologies such as the controversial Google Glass, Issie Lapowsky noted that:

If the stigma surrounding Google Glass (or, perhaps more specifically, “Glassholes”) has taught us anything, it’s that no matter how revolutionary technology may be, ultimately its success or failure rides on public perception. Many promising technological developments have died because they were ahead of their times. During a cultural moment when the alleged arrogance of some tech companies is creating a serious image problem, the risk of pushing new tech on a public that isn’t ready could have real bottom-line consequences.

In my new book, I spend some time thinking about this process of “norm-shaping” through social pressure, activist efforts, educational steps, and even public shaming. A recent Ars Technica essay by Joe Silver offered some powerful examples of how, when “shamed on Twitter, corporations do an about-face.” Silver notes that “A few recent case-study examples of individuals who felt they were wronged by corporations and then took to the Twitterverse to air their grievances show how a properly placed tweet can be a powerful weapon for consumers to combat corporate malfeasance.” In my book and in recent law review articles, I have provided other examples of how this works at both a corporate and individual level to constrain improper behavior and protect various social norms.

Edmund Burke once noted that, “Manners are of more importance than laws. Manners are what vex or soothe, corrupt or purify, exalt or debase, barbarize or refine us, by a constant, steady, uniform, insensible operation, like that of the air we breathe in.” Cristina Bicchieri, a leading behavioral ethicist, calls social norms “the grammar of society” because,

like a collection of linguistic rules that are implicit in a language and define it, social norms are implicit in the operations of a society and make it what it is. Like a grammar, a system of norms specifies what is acceptable and what is not in a social group. And analogously to a grammar, a system of norms is not the product of human design and planning.

Put simply, more than law alone regulates behavior — whether organizational or individual. It’s yet another way we learn to cope and “muddle through” over time. Again, check out my book for several other examples.

A Case Study: The Long-Standing “Problem” of Photography

Let’s bring all this together and be more concrete about it by using a case study: photography. With all the talk of how unsettling various modern technological developments are, they really pale in comparison to just how jarring the advent of widespread public photography must have been in the late 1800s and beyond. “For the first time photographs of people could be taken without their permission—perhaps even without their knowledge,” notes Lawrence M. Friedman in his 2007 book, Guarding Life’s Dark Secrets: Legal and Social Controls over Reputation, Propriety, and Privacy.

Thus, the camera was viewed as a highly disruptive force as photography became more widespread. In fact, the most important essay ever written on privacy law, Samuel D. Warren and Louis D. Brandeis’s famous 1890 Harvard Law Review essay on “The Right to Privacy,” decried the spread of public photography. The authors lamented that “instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life” and claimed that “numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’”

Warren and Brandeis weren’t alone. Plenty of other critics existed and many average citizens were probably outraged by the rise of cameras and public photography. Yet, personal norms and cultural attitudes toward cameras and public photography evolved quite rapidly and they became ingrained in human experience. At the same time, social norms and etiquette evolved to address those who would use cameras in inappropriate, privacy-invasive ways.

Again, we muddled through. And we’ve had to continuously muddle through in this regard because photography presents us with a seemingly endless set of new challenges. As cameras grow still smaller and get integrated into other technologies (most recently, smartphones, wearable technologies, and private drones), we’ve had to learn to adjust and accommodate. With wearable technologies (check out Narrative, Butterflye, and Autographer, for example), personal drones (see “Drones are the future of selfies”), and other forms of microphotography all coming online now, we’ll have to adjust still more and develop new norms and coping mechanisms. There’s never going to be an end to this adjustment process.

Toward Pragmatic Optimism

Should we really remain bullish about humanity’s prospects in the midst of all this turbulent change? I think so.

Again, long before the information revolution took hold, the industrial revolution produced its share of cultural and economic backlashes, and it is still doing so today. Most notably, many Malthusian skeptics and environmental critics lamented the supposed strain of population growth and industrialization on social and economic life. Catastrophic predictions followed.

In his 2007 book, Prophecies of Doom and Scenarios of Progress, Paul Dragos Aligica, a colleague of mine at the Mercatus Center, documented many of these industrial-era “prophecies of doom” and described how this “doomsday ideology” was powerfully critiqued by a handful of scholars — most notably Herman Kahn and Julian Simon. Aligica explains that Kahn and Simon argued for “the alternative paradigm, the pro-growth intellectual tradition that rejected the prophecies of doom and called for realism and pragmatism in dealing with the challenge of the future.”

Kahn and Simon were pragmatic optimists or what author Matt Ridley calls “rational optimists.” They were bullish about the future and the prospects for humanity, but they were not naive regarding the many economic and social challenges associated with technological change. Like Kahn and Simon, we should embrace the amazing technological changes at work in today’s information age but with a healthy dose of humility and appreciation for the disruptive impact and pace of that change.

But the rational optimists never get as much attention as the critics and catastrophists. “For 200 years pessimists have had all the headlines even though optimists have far more often been right,” observes Ridley. “Arch-pessimists are feted, showered with honors and rarely challenged, let alone confronted with their past mistakes.” At least part of the reason for that, as already noted, goes back to the amazing rhetorical power of good intentions. Techno-pessimists often exhibit a deep passion about their particular cause and are typically given more than just the benefit of the doubt in debates about progress and the future; they are treated as superior to opponents who challenge their perspectives or proposals. When a privacy advocate says they are just looking out for consumers, or an online safety advocate claims they have the best interests of children in mind, or a consumer advocate argues that regulation is needed to protect certain people from some amorphous harm, they are assuming the moral high ground through the assertion of noble-minded intentions. Even if their proposals often fail to bring about the better state of affairs they promise, or derail life-enriching innovations, they are more easily forgiven for those mistakes precisely because of their fervent claim of noble-minded intentions.

If intentions are allowed to trump empiricism and a general openness to change, however, the results for a free society and for human progress will be profoundly deleterious. That is why, when confronted with pessimistic, fear-based arguments, the pragmatic optimist must begin by granting that the critics clearly have the best of intentions, but then point out how intentions can only get us so far in the real-world, which is full of complex trade-offs.

The pragmatic optimist must next meticulously and dispassionately outline the many reasons why restricting progress or allowing planning to enter the picture will have many unintended consequences and hidden costs. The trade-offs must be explained in clear terms. Examples of previous interventions that went wrong must be proffered.

The Evidence Speaks for Itself

Luckily, we pragmatic optimists have plenty of evidence working in our favor when making this case. As Pulitzer Prize-winning historian Richard Rhodes noted in his 1999 book, Visions of Technology: A Century of Vital Debate about Machines, Systems, and the Human World:

it’s surprising that [many intellectuals] don’t value technology; by any fair assessment, it has reduced suffering and improved welfare across the past hundred years. Why doesn’t this net balance of benevolence inspire at least grudging enthusiasm for technology among intellectuals? (p. 23)

Great question, and one that we should never stop asking the techno-critics to answer. After all, as Joel Mokyr notes in his wonderful 1990 book, Lever of Riches: Technological Creativity and Economic Progress, “Without [technological creativity], we would all still live nasty and short lives of toil, drudgery, and discomfort.” (p. viii) “Technological progress, in that sense, is worthy of its name,” he says. “It has led to something that we may call an ‘achievement,’ namely the liberation of a substantial portion of humanity from the shackles of subsistence living.” (p. 288) Specifically,

The riches of the post-industrial society have meant longer and healthier lives, liberation from the pains of hunger, from the fears of infant mortality, from the unrelenting deprivation that were the part of all but a very few in preindustrial society. The luxuries and extravagances of the very rich in medieval society pale compared to the diet, comforts, and entertainment available to the average person in Western economies today. (p. 303)

In his new book, Smaller Faster Lighter Denser Cheaper: How Innovation Keeps Proving the Catastrophists Wrong, Robert Bryce hammers this point home when he observes that:

The pessimistic worldview ignores an undeniable truth: more people are living longer, healthier, freer, more peaceful lives than at any time in human history… the plain reality is that things are getting better, a lot better, for tens of millions of people around the world. Dozens of factors can be cited for the improving conditions of humankind. But the simplest explanation is that innovation is allowing us to do more with less.

This is the framework Herman Kahn, Julian Simon, and the other champions of progress used to deconstruct and refute the pessimists of previous eras. In line with that approach, we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. As Kahn taught us long ago, when it comes to technological progress and humanity’s ingenious responses to it, “we should expect to go on being surprised” — and in mostly positive ways. Humans have consistently responded to technological change in creative, and sometimes completely unexpected, ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies. As Mokyr noted in his recent City Journal essay on “The Next Age of Invention”:

Much like medication, technological progress almost always has side effects, but bad side effects are rarely a good reason not to take medication and a very good reason to invest in the search for second-generation drugs. To a large extent, technical innovation is a form of adaptation—not only to externally changing circumstances but also to previous adaptations.

In sum, we need to have a little faith in the ability of humanity to adjust to an uncertain future, no matter what it throws at us. We’ll muddle through and come out better because of what we have learned in the process, just as we have so many times before.

I’ll give venture capitalist Marc Andreessen the last word on this since he’s been on an absolute tear on Twitter lately when discussing many of the issues I’ve raised in this essay. While addressing the particular fear that automation is running amuck and that robots will eat all our jobs, Andreessen eloquently noted:

We have no idea what the fields, industries, businesses, and jobs of the future will be. We just know we will create an enormous number of them. Because if robots and AI replace people for many of the things we do today, the new fields we create will be built on the huge number of people those robots and AI systems made available. To argue that huge numbers of people will be available but we will find nothing for them (us) to do is to dramatically short human creativity. And I am way long human creativity.

Me too, buddy. Me too.


Additional Reading:

Journal articles & book chapters:

Blog posts:

]]>
https://techliberation.com/2014/06/17/muddling-through-how-we-learn-to-cope-with-technological-change/feed/ 8 74622
New Book Release: “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom” https://techliberation.com/2014/03/25/new-book-release-permissionless-innovation-the-continuing-case-for-comprehensive-technological-freedom/ https://techliberation.com/2014/03/25/new-book-release-permissionless-innovation-the-continuing-case-for-comprehensive-technological-freedom/#respond Tue, 25 Mar 2014 15:06:28 +0000 http://techliberation.com/?p=74314

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.

The second major objective of the book, as is made clear by the title, is to make a forceful case in favor of the latter disposition of “permissionless innovation.” I argue that policymakers should unapologetically embrace and defend the permissionless innovation ethos — not just for the Internet but also for all new classes of networked technologies and platforms. Some of the specific case studies discussed in the book include: the “Internet of Things” and wearable technologies, smart cars and autonomous vehicles, commercial drones, 3D printing, and various other new technologies that are just now emerging.

I explain how precautionary principle thinking is increasingly creeping into policy discussions about these technologies. The urge to regulate preemptively in these sectors is driven by a variety of safety, security, and privacy concerns, which are discussed throughout the book. Many of these concerns are valid and deserve serious consideration. However, I argue that if precautionary-minded regulatory solutions are adopted in a preemptive attempt to head-off these concerns, the consequences will be profoundly deleterious.

The central lesson of the booklet is this: Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.

Again, that doesn’t mean we should ignore the various problems created by these highly disruptive technologies. But how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. These include:

  • education and empowerment efforts (including media literacy and digital citizenship efforts);
  • social pressure from activists, academics, the press, and the public more generally;
  • voluntary self-regulation and adoption of best practices (including privacy and security “by design” efforts); and
  • increased transparency and awareness-building efforts to enhance consumer knowledge about how new technologies work.

Such solutions are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I?” (i.e., permissioned) nature. The problem with “top-down” traditional regulatory systems is that they tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future and to head off hypothetical problems that may never come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. It raises the cost of starting or running a business or non-business venture, and generally discourages activities that benefit society.

To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micro-managed regulatory regimes. Again, ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. To the extent that any corrective legal action is needed to address harms, ex post measures, especially via the common law (torts, class actions, etc.), are typically superior. And the Federal Trade Commission will, of course, continue to play a backstop here by utilizing the broad consumer protection powers it possesses under Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” In recent years, the FTC has already brought and settled many cases involving its Section 5 authority to address identity theft and data security matters. If still more is needed, enhanced disclosure and transparency requirements would certainly be superior to outright bans on new forms of experimentation or other forms of heavy-handed technological controls.

In the end, however, I argue that, to the maximum extent possible, our default position toward new forms of technological innovation must remain: “innovation allowed.” That is especially the case because, more often than not, citizens find ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes. We should have a little more faith in the ability of humanity to adapt to the challenges new innovations create for our culture and economy. We have done it countless times before. We are creative, resilient creatures. That’s why I remain so optimistic about our collective ability to confront the challenges posed by these new technologies and prosper in the process.

If you’re interested in taking a look, you can find a free PDF of the book at the Mercatus Center website or you can find out how to order it from there as an eBook. Hardcopies are also available. I’ll be doing more blogging about the book in coming weeks and months. The debate between the “permissionless innovation” and “precautionary principle” worldviews is just getting started and it promises to touch every tech policy debate going forward.


Related Essays :

]]>
https://techliberation.com/2014/03/25/new-book-release-permissionless-innovation-the-continuing-case-for-comprehensive-technological-freedom/feed/ 0 74314
Jack Schinasi on global privacy regulation https://techliberation.com/2014/01/21/schinasi/ https://techliberation.com/2014/01/21/schinasi/#respond Tue, 21 Jan 2014 15:01:15 +0000 http://techliberation.com/?p=74128

Jack Schinasi discusses his recent working paper, Practicing Privacy Online: Examining Data Protection Regulations Through Google’s Global Expansion, published in the Columbia Journal of Transnational Law. Schinasi takes an in-depth look at how online privacy laws differ across the world’s biggest Internet markets — specifically the United States, the European Union, and China. Schinasi discusses how we exchange data for services and whether users are aware they’re making this exchange. And, if not, should intermediaries like Google be mandated to make their data tracking more apparent? Or should we better educate Internet users about data sharing and privacy? Schinasi also covers whether privacy laws currently in place in the US and EU are effective, what types of privacy concerns necessitate regulation in these markets, and whether we’ll see China take online privacy more seriously in the future.

Download

Related Links

]]>
https://techliberation.com/2014/01/21/schinasi/feed/ 0 74128
Robert Scoble on Wearable Computers https://techliberation.com/2013/12/17/scoble/ https://techliberation.com/2013/12/17/scoble/#respond Tue, 17 Dec 2013 11:00:19 +0000 http://techliberation.com/?p=73996

Robert Scoble, Startup Liaison Officer at Rackspace, discusses his recent book, Age of Context: Mobile, Sensors, Data and the Future of Privacy, co-authored by Shel Israel. Scoble believes that over the next five years we’ll see a tremendous rise in wearable computers, building on interest we’ve already seen in devices like Google Glass. Much like the desktop, laptop, and smartphone before it, Scoble predicts wearable computers represent the next wave in groundbreaking innovation. Scoble answers questions such as: How will wearable computers help us live our lives? Will they become as common as the cellphone is today? Will we have to sacrifice privacy for these devices to better understand our preferences? How will sensors in everyday products help companies improve the customer experience?

Download

Related Links

]]>
https://techliberation.com/2013/12/17/scoble/feed/ 0 73996
Alice Marwick on social dynamics and digital culture https://techliberation.com/2013/12/03/marwick/ https://techliberation.com/2013/12/03/marwick/#respond Tue, 03 Dec 2013 11:00:41 +0000 http://techliberation.com/?p=73909

Alice Marwick, assistant professor of communication and media studies at Fordham University, discusses her newly-released book, Status Update: Celebrity, Publicity, and Branding in the Social Media Age. Marwick reflects on her interviews with Silicon Valley entrepreneurs, technology journalists, and venture capitalists to show how social media affects social dynamics and digital culture. Marwick answers questions such as: Does “status conscious” take on a new meaning in the age of social media? Is the public using social media the way the platforms’ creators intended? How do you quantify the value of online social interactions? Are social media users becoming more self-censoring or more transparent about what they share? What’s the difference between self-branding and becoming a micro-celebrity? She also shares her advice for how to make Twitter, Tumblr, Instagram and other platforms more beneficial for you.

Download

Related Links

]]>
https://techliberation.com/2013/12/03/marwick/feed/ 0 73909
Timothy B. Lee on the future of tech journalism https://techliberation.com/2013/08/20/timothy-b-lee/ https://techliberation.com/2013/08/20/timothy-b-lee/#comments Tue, 20 Aug 2013 13:42:06 +0000 http://techliberation.com/?p=73462

Timothy B. Lee, founder of The Washington Post’s blog The Switch, discusses his approach to reporting at the intersection of technology and policy. He covers how to make tech concepts more accessible; the difference between blogs and the news; the importance of investigative journalism in the tech space; whether paywalls are here to stay; Jeff Bezos’ recent purchase of The Washington Post; and the future of print news.

Download

Related Links

]]>
https://techliberation.com/2013/08/20/timothy-b-lee/feed/ 3 73462
Adam Thierer on cronyism https://techliberation.com/2013/07/09/adam-thierer-on-cronyism/ https://techliberation.com/2013/07/09/adam-thierer-on-cronyism/#comments Tue, 09 Jul 2013 10:00:37 +0000 http://techliberation.com/?p=45126

Adam Thierer, Senior Research Fellow at the Mercatus Center, discusses his recent working paper with coauthor Brent Skorup, A History of Cronyism and Capture in the Information Technology Sector. Thierer takes a look at how cronyism has manifested itself in technology and media markets — whether it be in the form of regulatory favoritism or tax privileges. Which tech companies are the worst offenders? What are the consequences for consumers? And how does cronyism affect entrepreneurship over the long term?

What Are We Going to Do after COPPA Fails? https://techliberation.com/2013/07/08/what-are-we-going-to-do-after-coppa-fails/ https://techliberation.com/2013/07/08/what-are-we-going-to-do-after-coppa-fails/#respond Tue, 09 Jul 2013 00:39:34 +0000 http://techliberation.com/?p=45114

This afternoon, Berin Szoka asked me to participate in a TechFreedom conference on “COPPA: Past, Present & Future of Children’s Privacy & Media.” [CSPAN video is here.] It was an in-depth, 3-hour, 2-panel discussion of the Federal Trade Commission’s recent revisions to the rules issued under the 1998 Children’s Online Privacy Protection Act (COPPA).

While most of the other panelists were focused on the devilish details about how COPPA works in practice (or at least should work in practice), I decided to ask a more provocative question to really shake up the discussion: What are we going to do when COPPA fails?

My notes for the event follow down below. I didn’t have time to put them into a smooth narrative, so please pardon the bullet points.

COPPA will fail in the long run for two reasons:

(1)    With COPPA, the FTC is engaged in a technological arms race that it cannot win.

  • COPPA was formulated for a Web 1.0 world of static websites with limited interactivity. In that environment it worked reasonably well, although it certainly imposed costs on site developers and affected market structure.
  • As we moved into a Web 2.0 world of interactive social media in the mid-to-late 2000s, however, the rule has been strained by new marketplace realities. COPPA’s drafters never really envisioned sites like Facebook, Twitter, etc.
  • In our current environment—let’s call it the Web 2.5 world—we have added mobile geolocation and social discovery to the mix and that is straining COPPA to the breaking point.
  • But we are about to enter the Web 3.0 world of the “Internet of Things”: a sensor-based world in which the communication technology will literally be woven into the clothes we wear and all the devices we use.
    • Cisco has estimated that by 2020, 37 billion devices will be linked together and communicating.
    • It will be almost impossible for COPPA to keep up with the explosion of these technologies because everything in our lives and our children’s lives will be interconnected, communicating, and collecting data.
    • Information will be ubiquitously collected simply by nature of the technology itself.
    • The entire Web 3.0 world will be one of comprehensive passive information collection.
    • So, notions like "collection," "directed at children," and "personal information" will become impossible to enforce absent a flat-out ban on the technologies themselves.

(2)    COPPA will also fail because of the simple reality that the more complicated and costly this regulatory regime becomes, the more likely it is that both kids and parents will ignore it or seek to actively evade it.

  • The actual monetary cost of any online service may obviously be one thing parents and kids seek to avoid.
  • But the bigger cost is the mental hassle associated with delayed gratification.
    • When people demand certain services, they want them now. And they will get them even when law gets in the way. And sometimes they value the utility / functionality that those services provide more than they value privacy.
    • A 2011 Harvard-Berkeley study pointed out that evasion is already rampant and that many parents are facilitating that result by encouraging their kids to lie about their ages online.
      • This problem will only increase in the Internet of Things era as kids and parents come to expect all their devices to be communicating at all times and retaining data for them.

So, what are we going to do about it? How do we prepare for the post-COPPA world that’s coming?

  • We shouldn’t just throw up our hands in defeat.
  • But we must accept the technological and practical challenges associated with regulation and seek out alternative approaches.
  • The best solution, therefore, is education, media literacy, and digital citizenship.
    • We need to do a much better job educating both kids and adults about sensible online interactions.
    • We need to talk to our kids and each other about being more savvy, sensible, respectful, and resilient media consumers and digital citizens.
    • In encouraging our kids and fellow Netizens to be good “digital citizens,” we must stress smarter online hygiene (sensible personal data use) and better “Netiquette” (proper behavior toward others), which can further both online safety and digital privacy goals.
    • More generally, as part of these digital literacy and citizenship efforts, we must do more  to explain the potential perils of over-sharing information about ourselves and others while simultaneously encouraging consumers to delete unnecessary online information occasionally and cover their digital footprints in other ways.
    • These education and literacy efforts are also important because they help us adapt to new technological changes by employing a variety of coping mechanisms or new social norms. These efforts and lessons should start at a young age and continue on well into adulthood through other means, such as awareness campaigns and public service announcements.

New Paper on “A History of Cronyism & Capture in the Information Technology Sector” https://techliberation.com/2013/07/02/new-paper-on-a-history-of-cronyism-capture-in-the-information-technology-sector/ https://techliberation.com/2013/07/02/new-paper-on-a-history-of-cronyism-capture-in-the-information-technology-sector/#comments Tue, 02 Jul 2013 13:48:02 +0000 http://techliberation.com/?p=45048

The Mercatus Center at George Mason University has just released a new paper by Brent Skorup and me entitled, “A History of Cronyism and Capture in the Information Technology Sector.” In this 73-page working paper, which we hope to place in a law review or political science journal shortly, we document the evolution of government-granted privileges, or “cronyism,” in the information and communications technology marketplace and in the media-producing sectors. Specifically, we offer detailed histories of rent-seeking and regulatory capture in: the early history of telephony and spectrum licensing in the United States; local cable TV franchising; the universal service system; the digital TV transition in the 1990s; and modern video marketplace regulation (i.e., must-carry and retransmission consent rules, among others).

Our paper also shows how cronyism is slowly creeping into new high-technology sectors. We document how Internet companies and other high-tech giants are among the fastest-growing lobbying shops in Washington these days. According to the Center for Responsive Politics, lobbying spending by information technology sectors has almost doubled since the turn of the century, from roughly $200 million in 2000 to $390 million in 2012. The computing and Internet sector has been responsible for most of that growth in recent years. Worse yet, we document how many of these high-tech firms are increasingly seeking and receiving government favors, mostly in the form of targeted tax breaks or incentives.

We argue that the creeping cronyism could have two major negative ramifications. First, it could dull entrepreneurialism and competition in this highly innovative sector since time and resources spent on influencing politicians and capturing regulators cannot be spent competing and innovating in the marketplace. Cronyism will also negatively impact consumer welfare by denying consumers more and better products and services. Additionally, consumers might end up paying higher prices or higher taxes due to government privileges for industry.

Second, cronyism also raises the specter of greater government control of the Internet and of the digital economy. When policymakers dispense favors, they usually expect something in return. They also become accustomed to having greater informal powers over the sector receiving favors, and contribute to DC’s infamous “revolving door” problem.

High-tech America’s recent embrace of Washington could take it down the familiar path followed by the agriculture, telecommunications, and automotive sectors (among many others), with government becoming both protector and punisher of industry. Today’s dynamic tech industries will increasingly come under the “Mother, may I?” permission-based regulatory regime that encumbered the older information technology sectors.

Tech Lobbying sectoral breakdown

Finally, this paper offers strategies for stalling and diminishing the cronyism already taking root in the high-tech sector. We suggest several targeted reforms to limit or undo cronyism. Generally speaking, however, we note that, as economist David R. Henderson argued in an earlier Mercatus Center report, “There is only one way to end, or at least to reduce, the amount of cronyism, and that is to reduce government power.”

The paper can be downloaded from the Mercatus website, SSRN, or Scribd. The Scribd version is embedded down below. (Also, here’s some coverage of the paper over at the Washington Post’s “Wonkblog” from our old colleague Tim Lee. Here’s more coverage from Bloomberg Businessweek and the San Francisco Chronicle. And here’s a U.S. News oped that Brent and I wrote condensing our paper into just 600 words. Finally, a short 3-minute video of me discussing the problem of tech cronyism is also embedded below.)

A History of Cronyism and Capture in the Information Technology Sector [Thierer and Skorup – July 2013] by Adam Thierer

The Constructive Way to Combat Online Hate Speech: Thoughts on “Viral Hate” by Foxman & Wolf https://techliberation.com/2013/06/24/the-constructive-way-to-combat-online-hate-speech-thoughts-on-viral-hate-by-foxman-wolf/ https://techliberation.com/2013/06/24/the-constructive-way-to-combat-online-hate-speech-thoughts-on-viral-hate-by-foxman-wolf/#comments Mon, 24 Jun 2013 23:04:03 +0000 http://techliberation.com/?p=45012

The Internet’s greatest blessing — its general openness to all speech and speakers — is also sometimes its biggest curse. That is, you cannot expect to have the most widely accessible, unrestricted communications platform the world has ever known and not also have some imbeciles who use it to spew insulting, vile, and hateful comments.

It is important to put things in perspective, however. Hate speech is not the norm online. The louts who spew hatred represent a small minority of all online speakers. The vast majority of online speech is of a socially acceptable — even beneficial — nature.

Still, the problem of hate speech remains very real, and a diverse array of strategies is needed to deal with it. The sensible path forward in this regard is charted by Abraham H. Foxman and Christopher Wolf in their new book, Viral Hate: Containing Its Spread on the Internet. Their book explains why the best approach to online hate is a combination of education, digital literacy, user empowerment, industry best practices and self-regulation, increased watchdog / press oversight, social pressure and, most importantly, counter-speech. Foxman and Wolf also explain why — no matter how well-intentioned — legal solutions aimed at eradicating online hate will not work and would raise serious unintended consequences if imposed.

In striking this sensible balance, Foxman and Wolf have penned the definitive book on how to constructively combat viral hate in an age of ubiquitous information flows.

Definitional Challenges & Free Speech Concerns

Defining “hate speech” is a classic eye-of-the-beholder problem: At what point does heated speech become hate speech and who should be in charge of drawing the line between the two? “The notion of a single definition of hate speech that everyone can agree on is probably illusory,” Foxman and Wolf note, especially because of “the continually evolving and morphing nature of online hate.” (p. 52, 103)  “Like every other form of human communication, bigoted or hateful speech is always evolving, changing its vocabulary and style, adjusting to social and demographic trends, and reaching out in new ways to potentially receptive new audiences.” (p. 92)

Many free speech advocates (including me) argue that the government should not be in the business of ensuring that people never have their feelings hurt. Censorial solutions are particularly problematic here in the United States since they would likely run afoul of the protections secured by the First Amendment of the U.S. Constitution.

The clear trajectory of the Supreme Court’s free speech jurisprudence over the past half-century has been in the direction of constantly expanding protection for freedom of expression, even of the most repugnant, hateful varieties. Most recently, in Snyder v. Phelps, for example, the Court ruled that the Westboro Baptist Church could engage in hateful protests near the funerals of soldiers. “[T]his Nation has chosen to protect even hurtful speech on public issues to ensure that public debate is not stifled,” ruled Chief Justice John Roberts for the Court’s 8-1 majority. The Court has also recently held that the First Amendment protects lying about military honors (United States v. Alvarez, 2012), animal cruelty videos (United States v. Stevens, 2010), computer-generated depictions of child pornography (Ashcroft v. Free Speech Coalition, 2002), and the sale of violent video games to minors (Brown v. EMA, 2011). This comes on top of over 15 years of Internet-related jurisprudence in which courts have struck down every effort to regulate online expression.

Some will celebrate this jurisprudential revolution; others will lament it. Regardless, it is likely to remain the constitutional standard here in the U.S. As a result, there is almost no chance that courts here would allow restrictions on hate speech to stand. That means alternative approaches will continue to be relied upon to address it.

Foxman and Wolf acknowledge these constitutional hurdles but also point out that there are other reasons why “laws attempting to prohibit hate speech are probably one of the weakest tools we can use against bigotry.” (p. 171) Most notably, there is the scope and volume problem: “the sheer vastness of the challenge” (p. 103) which means “it’s simply impossible to monitor and police the vast proliferation of bigoted content being distributed through Web 2.0 technologies.” (p. 81) “The borderless nature of the Internet means that, like chasing cockroaches, squashing one offending website, page, or service provider does not solve the problem; there are many more waiting behind the walls — or across the border.” (p. 82) That’s exactly right and it also explains why solutions of a more technical nature aren’t likely to work very well either.

Foxman and Wolf also point out how hate speech laws could backfire and have profound unintended consequences. Beyond targeted laws that address true threats, harassment, and direct incitements to violence, Foxman and Wolf argue that “broader regulation of hate speech may send an ‘educational message’ that actually weakens rather than strengthens our system of democratic values.” (p. 171) That’s because such censorial laws and regulations undermine the very essence of deliberative democracy — the robust exchange of potentially controversial views — and could lead to untrammeled majoritarianism. Worse yet, legalistic attempts to shut down hate speech can end up creating martyrs for fringe movements and, paradoxically, end up fueling conspiracy theories. (p. 80)

The Essential Role of Counter-speech & Education

Yet, “the challenge of defining hate speech shouldn’t lead us to give up on solving the problem,” argue Foxman and Wolf. (p. 53) We must, they argue, refocus our efforts around “education as a bulwark of freedom.” (p. 170) Digital literacy — teaching citizens respectful online behavior — is the key to those education efforts.

A vital part of digital literacy efforts is the encouragement of counter-speech solutions to online hate. “[T]he best antidote to hate speech is counter-speech – exposing hate speech for its deceitful and false content, setting the record straight, and promoting the values of respect and diversity,” note Foxman and Wolf. (p. 129) Or, as the old saying goes, the best response to bad speech is better speech. This principle has infused countless Supreme Court free speech decisions over the past century and it continues to make good sense. But we could do more through education and digital literacy efforts to encourage more and better forms of counter-speech going forward.

“Counter-speech isn’t only or even primarily about debating hate-mongers,” they note. “It’s about helping to create a climate of tolerance and openness for people of all kinds, not just on the Internet but in every aspect of local, community, and national life.” (p. 146) This is how digital literacy becomes digital citizenship. It’s about forming smart norms and personal best practices regarding beneficial online interactions.

Intermediary Policing

What more can be done beyond education and counter-speech efforts? Foxman and Wolf envision a broad and growing role for intermediaries to help to police viral hate. “We are convinced that if much of the time and energy spent advocating legal action against hate speech was used in collaborating and uniting with the online industry to fight the scourge of online hate, we would be making more gains in this fight,” they say. (p. 121) Among the steps they would like to see online operators take:

  • Establishing clear hate speech policies in their Terms of Service and mechanisms for enforcing them;
  • Making it easier for users to flag hate speech and to speak out against it;
  • Facilitating industry-wide education and best practices via multi-stakeholder approaches; and
  • Limiting anonymity and moving to “real-name” policies to identify speakers.

De-anonymization / Real-name policies

Most of these are eminently sensible solutions that should serve as best practices for online service providers and social media platform operators. But their last suggestion, that sites consider limiting anonymous speech, will be controversial, especially at a time when many feel that privacy is already at serious risk online and when some critics argue that intermediaries already “censor” too much content as it is. (See, for example, this Jeff Rosen essay on “The Delete Squad: Google, Twitter, Facebook and the New Global Battle over the Future of Free Speech” and this Evgeny Morozov editorial, “You Can’t Say That on the Internet”).

Anonymous online speech certainly facilitates plenty of nasty online comments. There’s plenty of evidence — both scholarly and anecdotal — that “deindividuation” occurs when people can post anonymously.  As Foxman and Wolf explain it: “People who are able to post anonymously (or pseudonymously) are far more likely to say awful things, sometimes with awful effects. Speaking from behind a blank wall that shields a person from responsibility encourages recklessness – it’s far easier to hit the ‘send’ button without a second thought under those circumstances.” (p. 114)

On the other hand, there needs to be a sense of balance here. We protect anonymous speech for the same reason we protect all other forms of speech, no matter how odious: With the bad comes a lot of good. Forcing all users to identify themselves to get at a handful of troublemakers is overkill and it would result in the chilling of a huge amount of legitimate speech.

Nonetheless, many governments across the globe are pushing for restrictions on anonymous speech. As Cole Stryker noted in his recent book, Hacking the Future: Privacy, Identity, and Anonymity on the Web, “what we are seeing is an all-out war on anonymity, and thus free speech, waged by a variety of armies with widely diverse motivations, often for compelling reasons.” (p. 229) Stryker is right. In fact, less than two weeks ago, a French court ordered Twitter to produce the names of the people behind anti-Semitic tweets that appeared on the site last year. Meanwhile, plenty of academics, including many here in the U.S., have stepped up their efforts to ban or limit online anonymity. If you don’t believe me, I suggest you read a few of the chapters of The Offensive Internet: Speech, Privacy, and Reputation (Saul Levmore & Martha C. Nussbaum, eds.). It’s a veritable fusillade against anonymity as well as Section 230, the U.S. law that limits liability for intermediaries who post materials by others.

In Viral Hate, Foxman and Wolf stop short of suggesting legal restrictions on anonymity, preferring to stick with experimentation among private intermediaries. One of the book’s authors (Wolf) penned an essay in The New York Times last November (“Anonymity and Incivility on the Internet”) suggesting that while “this is not a matter for government… it is time for Internet intermediaries voluntarily to consider requiring either the use of real names (or registration with the online service) in circumstances, such as the comments section for news articles, where the benefits of anonymous posting are outweighed by the need for greater online civility.” Specifically, Wolf wants the rest of the Net to follow Facebook’s lead: “It is time to consider Facebook’s real-name policy as an Internet norm because online identification demonstrably leads to accountability and promotes civility.”

These proposals prompted strong responses from some academics and average readers who decried the implications of such a move for both privacy and free speech. But, again, it is worth reiterating that Foxman and Wolf do not call for government mandates to achieve this. “[T]his notion of promulgating a new standard of accountability online is not a matter for government intervention, given the strictures of the First Amendment,” they argue. (p. 117)

However, Foxman and Wolf do suggest one innovative alternative that merits attention: premium placement for registered commenters. The New York Times and some other major content providers have experimented with premium placement, whereby those registered on the site have their comments pushed up in the queue while other comments appear down below them. On the other hand, I don’t like the idea of having to register for every news or content site I visit, so I would hope such approaches are used selectively. Another useful approach involves letting users of various social media sites and content services determine whether they wish to allow comments on their user-generated content at all. Of course, many sites and services (such as YouTube, Facebook, and most blogging services) already allow that.
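As a rough sketch of what premium placement might look like in practice (my own illustration; the class and field names are invented, and this is not how the Times or any other site actually implements it), registered commenters simply sort ahead of anonymous ones, newest first within each group, so anonymous comments still appear but further down the queue:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Comment:
    author: str
    registered: bool  # registered/verified commenters get premium placement
    posted_at: datetime
    text: str

def premium_placement(comments):
    # Sort descending on (registered, posted_at): registered commenters
    # float to the top, and within each group newer comments come first.
    # Nothing is deleted or hidden; anonymous voices are only demoted.
    return sorted(comments, key=lambda c: (c.registered, c.posted_at), reverse=True)
```

The design choice worth noting is that this is ranking, not removal: it nudges commenters toward accountability without the speech-chilling effects of a mandatory real-name rule.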

Conclusion

There are times in the book when Foxman and Wolf push their cause with a bit too much rhetorical flair, as when they claim that “Hitler and the Nazis could never have dreamed of such an engine of hate” (as the Internet). (p. 10) Perhaps there is something to that, but it is also true that Hitler and the Nazis could never have dreamed of a platform for individual empowerment, transparency, and counter-speech such as the Internet. It was precisely because they were able to control the very limited media and communications platforms of their age that the Nazis were able to exert total control over the information systems and create a propaganda hate machine that had no serious challenge from the public or other nations. Just ask Arab dictators which age they’d prefer to rule in! It is certainly much harder for today’s totalitarian thugs to keep secrets bottled up and it is equally hard for them to spread lies and hateful propaganda without being met with a forceful response from the general citizenry as well as those in other nations. So the “Hitler-would-have-loved-the-Net” talk is unwarranted.

I’m also a bit skeptical of some of the metrics used to measure this problem. While there is clearly plenty of online hate to be found across the Net today, efforts to quantify it inevitably run right back into the same subjective definition problems that Foxman and Wolf do such a nice job explaining throughout the text. So, if we have such a profound ‘eye-of-the-beholder’ problem at work here, how can we be sure that quantitative counts are accurate? That doesn’t mean I’m opposed to efforts to quantify online hate; rather, we just need to take such measures with a grain of salt.

Finally, I wish the authors had developed more detailed case studies of how companies outside the mainstream are dealing with these issues today. Foxman and Wolf focus on big players like Google, Facebook, and Twitter for obvious reasons, but plenty of other online providers and social media operators have policies and procedures in place today to deal with online hate speech. A more thorough survey of those differing approaches might have helped us gain a better understanding of which policies make the most sense going forward.

Despite those small nitpicks, Foxman and Wolf have done a great service here by offering us a penetrating examination of the problem of online hate speech while simultaneously explaining the practical solutions necessary to combat it. Some will be dissatisfied with their pragmatic approach to the issue, feeling on one hand that the authors have not gone far enough in bringing in the law to solve these problems, while others will desire a more forceful call for freedom of speech and just growing a thicker skin in response to viral hate.  But I believe Foxman and Wolf have struck exactly the right balance here and given us a constructive blueprint for addressing these vexing issues going forward.

The Problem with API Neutrality https://techliberation.com/2012/09/21/the-problem-with-api-neutrality/ https://techliberation.com/2012/09/21/the-problem-with-api-neutrality/#comments Fri, 21 Sep 2012 14:33:14 +0000 http://techliberation.com/?p=42416

I’ve been hearing more rumblings about “API neutrality” lately. This idea, which originated with Jonathan Zittrain’s book, The Future of the Internet–And How to Stop It, proposes to apply Net neutrality to the code/application layer of the Internet. A blog called “The API Rating Agency,” which appears to be written by Mehdi Medjaoui, posted an essay last week endorsing Zittrain’s proposal and adding some meat to the bones of it. (My thanks to CNet’s Declan McCullagh for bringing it to my attention).

Medjaoui is particularly worried about some of Twitter’s recent moves to crack down on 3rd party API uses. Twitter is trying to figure out how to monetize its platform and, in a digital environment where advertising seems to be the only business model that works, the company has decided to establish more restrictive guidelines for API use. In essence, Twitter believes it can no longer be a perfectly open platform if it hopes to find a way to make money. The company apparently believes that some restrictions will need to be placed on 3rd party uses of its API if the firm hopes to be able to attract and monetize enough eyeballs.

While no one is sure whether that strategy will work, Medjaoui doesn’t even want the experiment to go forward. Building on Zittrain, he proposes the following approach to API neutrality:

  • Absolute non-discrimination toward third parties: all content, data, and views are distributed equally across the third-party ecosystem. Even a competitor could use an API on the same terms as everyone else, with unrestricted re-use of the data.
  • Limited discrimination without tiering: if you don’t pay specific fees for quality of service, you cannot get a better quality of service (rate limits, quotas, SLAs) than anyone else in the API ecosystem. If you do pay for a higher quality of service, you get it on the same terms as any other customer paying the same fee.
  • First come, first served: no enqueuing of API calls from paying third-party applications ahead of free third parties that are within their rate limits.
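For concreteness, those three principles can be sketched in code. The sketch below is my own illustration (the class, tier names, and quota numbers are invented for the example, not drawn from Medjaoui, Zittrain, or any real API): paying clients buy a larger quota, but there is no priority lane, and every call is served from a single first-come, first-served queue.

```python
from collections import deque
from dataclasses import dataclass, field

# Hypothetical quota tiers: paying clients get a higher rate limit,
# but every client on the same tier gets identical treatment
# ("limited discrimination without tiering").
TIER_LIMITS = {"free": 60, "paid": 600}  # calls per window (illustrative numbers)

@dataclass
class ApiGateway:
    window_counts: dict = field(default_factory=dict)
    queue: deque = field(default_factory=deque)  # one FIFO: "first come, first served"

    def submit(self, client_id: str, tier: str, call):
        # No separate priority lane for paying clients: everyone joins one queue.
        self.queue.append((client_id, tier, call))

    def process_one(self):
        if not self.queue:
            return None
        client_id, tier, call = self.queue.popleft()
        used = self.window_counts.get(client_id, 0)
        if used >= TIER_LIMITS[tier]:
            return (client_id, "rate_limited")  # over quota for this window
        self.window_counts[client_id] = used + 1
        return (client_id, call())
```

Even this toy version has to make policy choices (what counts as a rate window, whether rejected calls re-queue, how quotas reset), which hints at how much operational discretion any legal "API neutrality" mandate would have to pin down.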

Before I critique this, let’s go back and recall why Zittrain suggested we might need API neutrality for certain online services or digital platforms. Although Zittrain does not label it as such, API neutrality assumes the platform or device in question is a sort of public utility or common carrier. Zittrain is concerned that the absence of API neutrality could imperil “generativity,” technologies or networks that invite or allow tinkering and all sorts of creative secondary uses. Primary examples include general-purpose personal computers (PCs) and the traditional “best efforts” Internet. By contrast, Zittrain contemptuously refers to “tethered, sterile appliances,” or digital technologies or networks that discourage or disallow tinkering. Zittrain’s primary examples are proprietary devices like Apple’s iPhone or the TiVo, or online walled gardens like the old AOL and current cell phone networks. Such “take it or leave it” devices or platforms earn Zittrain’s wrath. He argues that we run the risk of seeing the glorious days of generative devices and the open Internet give way to those tethered appliances and closed networks. He fears most users will flock to tethered appliances in search of stability or security, and worries because those tethered appliances are less “open” and more “regulable,” thus allowing easier control by either large corporate intermediaries or government officials. In other words, the “future of the Internet” Zittrain is hoping to “stop” is a world dominated by tethered digital appliances and walled gardens, because they are too easily controlled by other actors. He argues:

If there is a present worldwide threat to neutrality in the movement of bits, it comes not from restrictions on traditional Internet access that can be evaded using generative PCs, but from enhancements to traditional and emerging appliancized services that are not open to third-party tinkering.

Because he fears the rise of “walled gardens” and “mediated experiences,” Zittrain goes on to wonder, “Should we consider network neutrality-style mandates for appliancized systems?” He responds to his own question as follows:

The answer lies in that subset of appliancized systems that seeks to gain the benefits of third-party contributions while reserving the right to exclude it later. . . . Those who offer open APIs on the Net in an attempt to harness the generative cycle ought to remain application-neutral after their efforts have succeeded, so all those who built on top of their interface can continue to do so on equal terms. (p. 183-4)

While many would agree that API neutrality represents a fine generic norm for online commerce and interactions, Zittrain implies it should be a legal standard to which online providers are held. He even alludes to the possibility of applying the common law principle of adverse possession more broadly in these contexts. He notes that adverse possession “dictates that people who openly occupy another’s private property without the owner’s explicit objection (or, for that matter, permission) can, after a lengthy period of time, come to legitimately acquire it.” (p. 183) He does not make it clear when that principle would be triggered as it pertains to digital platforms or social media APIs. But it would seem clear that his API neutrality rule would eventually regulate the major information providers and platforms of our day, including: Apple, Google, Twitter, Facebook, and many others.

As I argued in my paper, “The Perils of Classifying Social Media Platforms as Public Utilities,” API neutrality regulation is a dangerous notion. There are many problems with the logic of Zittrain’s API neutrality proposal and with the application of adverse possession to social media platforms or digital applications. What follows below is my critique of the notion that appeared in that paper, and it also explains why Medjaoui’s new formulation and clarification of the principle is equally problematic.

First, most developers who offer open APIs are unlikely to close them later because they do not want to incur the wrath of “those who built on top of their interfaces,” to use Zittrain’s parlance. Social media services make themselves more attractive to users and advertisers by providing platforms with plentiful opportunities for diverse interactions and innovations. The “walled gardens” of the Internet’s first generation are largely things of the past. Thus, a powerful self-correcting mechanism is at work in this space. If social media operators were to lock down their platforms or applications in a highly restrictive fashion, both application developers and average users would likely revolt. Moreover, a move to foreclose or limit generative opportunities could spur more entry and innovation as other application (“app”) developers and users seek out more open, pro-generative alternatives.

Consider an example involving Apple and the iPhone. Shortly after the iPhone’s release, Apple reversed itself and opened its iPhone platform to third-party app developers. The result was an outpouring of innovation. Customers in more than 123 countries had downloaded more than 18 billion apps from Apple’s App Store at a rate of more than 1 billion apps per month as of late 2011.

But what if Apple decides to suddenly shut its App Store and prohibit all third-party contributions, after initially allowing them? There is no obvious incentive for Apple to do so, and there are plenty of competitive reasons for Apple not to close off third-party development, especially as its application dominance is a key element of Apple’s success in the smartphone and tablet sectors. Under Zittrain’s proposed paradigm, regulators would treat the iPhone as the equivalent of a commoditized common carriage device and force the App Store to operate on regulated, public utility–like terms without editorial or technological (and perhaps interoperability) control by Apple itself. But if Apple were to open the door to developers only to slam it shut a short time later, the company would likely lose those developers and customers to alternative platforms. Google, Amazon, Microsoft, and others would be only too happy to take Apple’s business by offering a wealth of stores and devices that allow users greater freedom. Market choices, not regulatory edicts such as mandatory API neutrality, should determine the future of the Internet.

The same logic indicates the likely counterproductive effects of efforts to impose API neutrality on Twitter. Until recently, Twitter had a voluntary open access policy in that it allowed nearly unlimited third-party reuse and modification of its API. It is now partially abandoning that policy by taking greater control over the uses of its API. This policy reversal will, no doubt, lead to claims that the company is acting like one of Tim Wu’s proverbial “information empires” and that perhaps Zittrain’s API neutrality regime should be put in place as a remedy. Indeed, Zittrain has already referred to Twitter’s move as a “bait-and-switch” and recommended an API neutrality remedy. Zittrain’s actions could foreshadow more pressure from academics and policymakers that will first encourage Twitter to continue open access, but then potentially force the company to grant nondiscriminatory access to its platform on regulated terms. Nondiscriminatory access would represent a step toward the forced commoditization of the Twitter API and the involuntary surrender of the company’s property rights to some collective authority that will manage the platform as a common carrier or essential facility.

Yet again, innovation and competitive entry remain possible in this arena. There is nothing stopping other microblogging or short-messaging services from offering alternatives to Twitter. Some people would decry the potential lack of interoperability among competing services at first, but innovators would quickly find work-arounds. A decade ago, similar angst surrounded AOL’s growing power in the instant-messaging (IM) marketplace. Many feared AOL would monopolize the market and exclude competitors by denying interconnection. Markets evolved quickly, however. Today, anyone can download a free chat client like Digsby or Adium to manage IM services from AOL, Yahoo!, Google, Facebook, and just about any other company, all within a single interface, essentially making it irrelevant which chat service your friends use. These innovations occurred despite a mandate in the conditions of Time Warner’s acquisition of AOL that the post-merger firm provide for IM interoperability. The provision was quietly sunset as irrelevant just three years later.

A similar market response could follow Twitter’s attempts to exert excessive control over its APIs. In web 2.0 markets—that is, markets built on pure code—the fixed costs of investment are orders of magnitude less than they were with the massive physical networks of pipes and towers from the era of analog broadcasting and communications. Thus, major competition for Twitter is more than possible, and it is likely to come from sources and platforms we cannot currently imagine, just as few of us could have imagined something like Twitter developing.

Even if some social media platform owners did want to abandon previously open APIs and move to a sort of walled garden, there is no reason to classify such a move as anticompetitive foreclosure or leveraging of the platform. Marketplace experimentation in search of a sustainable business model should not be made illegal. Since most social media sites such as Twitter do not charge for the services they provide, some limited steps to lock down their platforms or APIs might help them earn a return on their investments by monetizing traffic on their own platforms. If a social media provider had to live under a strict version of Zittrain’s API neutrality principle, however, it might be extremely difficult to monetize traffic and grow its business, since the company would be forced to share its only valuable intellectual property.

In sum, if the government were to forcibly apply API neutrality or adverse possession principles through utility-like regulation, it would send a signal to social media entrepreneurs that their platforms are theirs in name only and could be coercively commoditized once they are popular enough. Such a move would constitute a serious disincentive to future innovation and investment. “API neutrality” would upend the way much of the modern digital economy operates and cripple many of America’s most innovative companies and sectors. In the long run, such changes could sacrifice America’s current role as a global information technology leader. For these reasons, API neutrality mandates should be rejected.


Additional Reading

]]>
https://techliberation.com/2012/09/21/the-problem-with-api-neutrality/feed/ 2 42416
new paper: The Perils of Classifying Social Media Platforms as Public Utilities https://techliberation.com/2012/03/19/new-paper-the-perils-of-classifying-social-media-platforms-as-public-utilities/ https://techliberation.com/2012/03/19/new-paper-the-perils-of-classifying-social-media-platforms-as-public-utilities/#respond Mon, 19 Mar 2012 18:25:33 +0000 http://techliberation.com/?p=40360

The Mercatus Center at George Mason University has just released my new white paper, “The Perils of Classifying Social Media Platforms as Public Utilities.” [PDF] I first presented a draft of this paper last November at a Michigan State University conference on “The Governance of Social Media.” [Video of my panel here.]

In this paper, I note that to the extent public utility-style regulation has been debated within the Internet policy arena over the past decade, the focus has been almost entirely on the physical layer of the Internet. The question has been whether Internet service providers should be considered “essential facilities” or “natural monopolies” and regulated as public utilities. The debate over “net neutrality” regulation has been animated by such concerns.

While that debate still rages, the rhetoric of public utilities and essential facilities is increasingly creeping into policy discussions about other layers of the Internet, such as the search layer. More recently, there have been rumblings within academic and public policy circles regarding whether social media platforms, especially social networking sites, might also possess public utility characteristics. Presumably, such a classification would entail greater regulation of those sites’ structures and business practices.

Proponents of treating social media platforms as public utilities offer a variety of justifications for regulation. Amorphous “fairness” concerns animate many of these calls, but privacy and reputational concerns are also frequently mentioned as rationales for regulation. Proponents of regulation also sometimes invoke “social utility” or “social commons” arguments in defense of increased government oversight, even though these notions lack clear definition.

Social media platforms do not resemble traditional public utilities, however, and there are good reasons why policymakers should avoid a rush to regulate them as such. Treating these nascent digital services as regulated utilities would harm consumer welfare because public utility regulation has traditionally been the archenemy of innovation and competition. Furthermore, treating today’s leading social media providers as digital essential facilities threatens to convert “natural monopoly” or “essential facility” claims into self-fulfilling prophecies. Related proposals to mandate “API neutrality” or enforce a “Separations Principle” on integrated information platforms would be particularly problematic. Such regulation also threatens innovation and investment. Marketplace experimentation in search of sustainable business models should not be made illegal.

Remedies less onerous than regulation are available. Transparency and data-portability policies would solve many of the problems that concern critics, and numerous private empowerment solutions exist for those users concerned about their privacy on social media sites.

Finally, because social media are fundamentally tied up with the production and dissemination of speech and expression, First Amendment values are at stake, warranting heightened constitutional scrutiny of proposals for regulation. Social media providers should possess the editorial discretion to determine how their platforms are configured and what can appear on them.

This 63-page paper can be found on the Mercatus site here, on SSRN, or on Scribd.  I’ve also embedded it below in a Scribd reader. Eventually, a shorter version of this paper will appear as a chapter in an MIT Press book.

Social Networks as Public Utilities [Adam Thierer]

]]>
https://techliberation.com/2012/03/19/new-paper-the-perils-of-classifying-social-media-platforms-as-public-utilities/feed/ 0 40360
Are Social Networks Essential Facilities? https://techliberation.com/2011/07/28/are-social-networks-essential-facilities/ https://techliberation.com/2011/07/28/are-social-networks-essential-facilities/#comments Thu, 28 Jul 2011 18:06:12 +0000 http://techliberation.com/?p=37927

That’s the question I take up in my latest Forbes column, “The Danger Of Making Facebook, LinkedIn, Google And Twitter Public Utilities.”  I note the rising chatter in the blogosphere about the potential regulation of social networking sites, including Facebook and Twitter. In response, I argue:

public utilities are, by their very nature, non-innovative. Consumers are typically given access to a plain vanilla service at a “fair” rate, but without any incentive to earn a greater return, innovation suffers. Of course, social networking sites are already available to everyone for free! And they are constantly innovating.  So, it’s unclear what the problem is here and how regulation would solve it.

I don’t doubt that social networking platforms have become an important part of the lives of a great many people, but that doesn’t mean they are “essential facilities” that should be treated like your local water company. These are highly dynamic networks and services built on code, not concrete. Most of them didn’t even exist 10 years ago. Regulating them would likely drain the entrepreneurial spirit from this sector, discourage new innovation and entry, and potentially raise prices for services that are mostly free of charge to consumers.  Social norms, public pressure, and ongoing rivalry will improve existing services more than government regulation ever could.

Read my full essay for more.

]]>
https://techliberation.com/2011/07/28/are-social-networks-essential-facilities/feed/ 1 37927
Vivek Wadhwa on High-Tech’s “Best Regulator” https://techliberation.com/2011/07/08/vivek-wadhwa-on-high-techs-best-regulator/ https://techliberation.com/2011/07/08/vivek-wadhwa-on-high-techs-best-regulator/#comments Fri, 08 Jul 2011 14:16:29 +0000 http://techliberation.com/?p=37710

Vivek Wadhwa, who is affiliated with Harvard Law School and is director of research at Duke University’s Center for Entrepreneurship, has a terrific column in today’s Washington Post warning of the dangers of government trying to micromanage high-tech innovation and the Digital Economy from above.

For reasons I have never been able to understand, the Washington Post uses different headlines for its online op-eds versus its print edition. That’s a shame, because while I like the online title of Wadhwa’s essay, “Uncle Sam’s Choke-Hold on Innovation,” the title in the print edition is better: “Google, Twitter and the Best Regulator.” By “best regulator” Wadhwa means the marketplace, and this is a point we have hammered on here at the TLF relentlessly: Contrary to what some critics suggest, the best regulator of “market power” is the market itself because of the way it punishes firms that get lethargic, anti-innovative, or just plain cocky. Wadhwa notes:

The technology sector moves so quickly that when a company becomes obsessed with defending and abusing its dominant market position, countervailing forces cause it to get left behind. Consider: The FTC spent years investigating IBM and Microsoft’s anti-competitive practices, yet it wasn’t government that saved the day; their monopolies became irrelevant because both companies could not keep pace with rapid changes in technology — changes the rest of the industry embraced. The personal-computer revolution did IBM in; Microsoft’s Waterloo was the Internet. This — not punishment from Uncle Sam — is the real threat to Google and Twitter if they behave as IBM and Microsoft did in their heydays.

Quite right. I’ve discussed the Microsoft and IBM antitrust sagas many times here before. In particular, see my 2009 review of Gary Reback’s book on antitrust and high-tech and my recent essay on “Libertarianism & Antitrust: A Brief Comment.” I’ve also commented on the FTC’s look at Twitter and Google in my recent essays, “Twitter, the Monopolist? Is this Tim Wu’s “Threat Regime” In Action?” and “The Question of Remedies in a Google Antitrust Case.”

The crucial point I have tried to get across in these essays, as well as all my essays countering the modern “cyber-progressives,” is that high-tech market power concerns are ultimately better addressed by voluntary, spontaneous, bottom-up, marketplace responses than by coerced, top-down, governmental solutions. Moreover, the decisive advantage of the market-driven approach to correcting market or “code failure” comes down to the rapidity and nimbleness of those responses, especially in markets built upon bits instead of atoms.

That’s why Wadhwa’s insight — that “the technology sector moves so quickly that when a company becomes obsessed with defending and abusing its dominant market position, countervailing forces cause it to get left behind” — is so cogent. We’re not talking about markets like steel and corn here. Things move much, much more quickly when bits and code are the foundations of what Tim Wu calls “information empires.” There’s no doubt that some companies will gain scale and even “power” quickly in our new Digital Economy, but they can also lose it in the blink of an eye.

The best modern example that I’ve documented here before is AOL. It’s easy to forget now, but just a short decade ago, academics and regulators were in a tizzy over Big Bad AOL. And why not? After all, 25 million subscribers were willing to pay $20 per month to get a guided tour of AOL’s walled garden version of the Internet.  And then AOL and Time Warner announced a historic mega-merger that had some predicting the rise of “new totalitarianisms” and corporate “Big Brother.”

But the deal quickly went off the rails. By April 2002, just two years after the deal was struck, AOL-Time Warner had already reported a staggering $54 billion loss. By January 2003, losses had grown to $99 billion. By September 2003, Time Warner decided to drop AOL from its name altogether and the deal continued to slowly unravel from there.  In a 2006 interview with the Wall Street Journal, Time Warner President Jeffrey Bewkes famously declared the death of “synergy” and went so far as to call synergy “bullsh*t”!  In early 2008, Time Warner decided to shed AOL’s dial-up service and then to spin off AOL entirely.  Looking back at the deal, Fortune magazine senior editor at large Allan Sloan called it the “turkey of the decade.” The formal divorce between the two firms took place in 2009. Further deconsolidation followed for Time Warner, which spun off its cable TV unit and various other properties.

Meanwhile, AOL has lost its old dial-up business and walled garden empire and is still struggling to reinvent itself as an advertising company. It’s about the last company on anybody’s lips when we talk about tech titans today. What an epic tale of creative destruction! That all happened in less than 10 years! And yet, again, a decade ago, tech pundits and cyberlaw intellectuals like Larry Lessig were penning entire books about the ominous threat posed by the AOL walled garden model of Internet governance.

Lessig’s myopia was based on an inherent techno-pessimism I have discussed and critiqued in my Next Digital Decade book chapter, “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters.” Countless Ivory Tower cyber-academics today adopt a static view of markets and market problems. This “static snapshot” crowd gets so worked up about short term spells of “market power” – which usually don’t represent serious market power at all – that they call for the reordering of markets to suit their tastes.  Sadly, they sometimes do this under the banner of “Internet freedom,” claiming that technocratic elites can “free” consumers from the supposed tyranny of the marketplace.

In reality, that vision wraps markets in chains and ultimately leaves consumers worse off by stifling innovation and inviting in ham-handed regulatory edicts and bureaucracies to plan this fast-paced sector of our economy. Importantly, that vision ignores the deadweight losses associated with expanding government red tape and bureaucracy as well as the very real danger of “regulatory capture” that exists anytime Washington decides to get cozy with a major sector of the economy.

As Wadhwa correctly concludes, “Government has no place in this technology jungle.” I wish other academics and tech pundits would heed that warning.

]]>
https://techliberation.com/2011/07/08/vivek-wadhwa-on-high-techs-best-regulator/feed/ 7 37710
Twitter, the Monopolist? Is this Tim Wu’s “Threat Regime” In Action? https://techliberation.com/2011/07/01/twitter-the-monopolist-is-this-tim-wus-threat-regime-in-action/ https://techliberation.com/2011/07/01/twitter-the-monopolist-is-this-tim-wus-threat-regime-in-action/#comments Fri, 01 Jul 2011 03:57:24 +0000 http://techliberation.com/?p=37610

According to a report today from SAI Business Insider, “The Federal Trade Commission is actively investigating Twitter and the way it deals with the companies building applications and services for its platform.”  Apparently the agency has reached out to some competing application / platform providers to ask questions about Twitter’s recent efforts to exert more control over the uses of its API by third parties. [The Wall Street Journal confirms the FTC’s interest in Twitter.]

It remains to be seen whether this leads to any serious regulatory action against Twitter by the FTC, but such a move wouldn’t necessarily be surprising considering the more activist tilt of the agency recently. It’s even less surprising considering that Columbia University law professor and prolific cyberlaw scholar Tim Wu was appointed as a senior advisor to the FTC earlier this year. When the announcement of Wu’s appointment was made, the Wall Street Journal kicked off an article with the warning, “Silicon Valley has a new fear factor.”  It seems the Journal may have been on to something!

It’s impossible to know how much of an influence Tim Wu is having on the agency, but as I have noted here before, Prof. Wu is a man with a healthy appetite for regulatory activism. [See all my essays about Wu’s work here.] Moreover, he’s a man who has already determined that Twitter is a “monopolist” in his November 13, 2010 Wall Street Journal op-ed, “In the Grip of the New Monopolists.”

That essay prompted a fiery response from me [“Tim Wu Redefines Monopoly“] as well as a far more reasoned essay by antitrust gurus Geoff Manne and Josh Wright [“What’s An Internet Monopolist? A Reply to Professor Wu.”] Prof. Wu was kind enough to swing by the TLF and respond to my criticisms in an essay “On the Definition of Monopoly,” which he said served as a “corrective” to my earlier essay [even though I continue to believe that what I said fairly reflected the last four decades of economic wisdom on competition policy and that it is Wu who is well off the reservation with his expansionist views of antitrust enforcement].

Regardless of what one thinks about that exchange, if the FTC is moving forward with a case against Twitter, three practical questions need to be considered: (1) What’s the relevant market? (2) Where’s the harm? and (3) What’s the remedy?

I’ll briefly discuss each question below but should also mention that I already explored many of these issues in my essay,  “A Vision of (Regulatory) Things to Come for Twitter,” so I apologize in advance for the repetition.  I will then discuss all this in the context of Tim Wu’s latest law review article on “Agency Threats” and what he approvingly refers to as regulatory “threat regimes.”

On Market Definition

As I noted in my previous essays, it’s very much unclear how to define the contours of the market Twitter serves. After all, Twitter is only a few years old and it competes with many other forms of communication and information dissemination. For me, Twitter is a partial substitute for blogging, IMs, email, phone calls, RSS feeds, and even radio and television news. Yet, like most others, I continue to use all those other technologies and those technologies continue to pressure Twitter to innovate.

Whatever market it serves, however, Tim Wu is apparently willing to write off that market as already “in the grip” of Twitter. But does Wu really believe that nothing better will come along to compete against Twitter or even replace it entirely?  It reminds me of all the hand-wringing we heard about AOL a decade ago when people predicted its “walled gardens” would someday rule the Internet and IM.  And we all know how that turned out.

If you ask me, this episode again reflects the short-term, static snapshot thinking we all too often see at work in debates over media and technology policy. That is, many cyber-worrywarts are prone to taking snapshots of market activity and suggesting that temporary patterns are permanent disasters requiring immediate correction. Of course, a more dynamic view of progress and competition holds that “market failures” and “code failures” are ultimately better addressed by voluntary, spontaneous, bottom-up responses than by coercive, top-down approaches. [More on that conflict of visions in my book chapter on “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters.”]

Regardless, I just don’t see how Wu or the FTC can claim Twitter has monopolized a market that is still so young that we can’t even define it.

On Harm

Even if one accepted Wu’s premise that Twitter was a monopolist, where is the harm? At least in theory, antitrust law is supposed to be about protecting consumer welfare, not competitors. If this whole thing is about UberMedia losing out in some bidding wars for alternative Twitter platforms, well, that’s just pathetic. UberMedia is free to develop or bid on alternative Twitter applications or work with others to develop entirely new services. It’s not like there’s a shortage of them out there.

If the theory is that consumers are being harmed by Twitter exerting more control over its API, I would just remind everyone that (a) we don’t pay a cent for the service that Twitter provides and (b) Twitter is still scrambling to find a way to monetize its service for the long-haul. There are also some legitimate security issues in play here that cut against the claim that what Twitter is doing is anti-consumer.

In sum, it is hard to understand where the harm lies in Twitter taking greater control of its API, and there’s certainly nothing stopping rival innovators from trying to offer a competing service.  140-character text messages aren’t exactly the stuff of traditional “information empires,” as Wu would call them.

On Remedies

Finally, we come to the thorny issue of remedies. I suppose the easiest remedy would be a prohibition on Twitter acquiring any third-party applications provider that currently relies on Twitter’s API. In other words, downstream vertical integration would be forbidden. But there’s about 40 years of antitrust literature explaining why such integration is generally pro-innovation and pro-consumer and shouldn’t be made illegal by antitrust law. Tim Wu may not buy that–and if you’ve read his recent book The Master Switch, you know he absolutely rejects it–but it is standard thinking in the field of industrial organization and antitrust economics today. Most of the economists at the FTC and DOJ could tell him as much.

Another alternative remedy might be Jonathan Zittrain’s “API neutrality” idea, proposed in his 2008 book, The Future of the Internet and How to Stop It. Zittrain suggested that API neutrality–essentially a variant of Net neutrality but for application protocols–might be needed to ensure fair access to certain services or platforms to guarantee that digital “generativity” was not imperiled. On pg. 181 of the book, Zittrain argued that:

“If there is a present worldwide threat to neutrality in the movement of bits, it comes not from restrictions on traditional Internet access that can be evaded using generative PCs, but from enhancements to traditional and emerging appliancized services that are not open to third-party tinkering.”

After engaging in some hand-wringing about “walled gardens” and “mediated experiences,” Zittrain went on to ask: “So when should we consider network neutrality-style mandates for appliancized systems?” He responds to his own question as follows:

“The answer lies in that subset of appliancized systems that seeks to gain the benefits of third-party contributions while reserving the right to exclude it later. … Those who offer open APIs on the Net in an attempt to harness the generative cycle ought to remain application-neutral after their efforts have succeeded, so all those who built on top of their interface can continue to do so on equal terms.” (p. 184)

This might be a fine generic principle, but Zittrain implies that this should be a legal standard to which online providers are held. At one point, he even alludes to the possibility of applying the common law principle of adverse possession more broadly in these contexts. He notes that adverse possession “dictates that people who openly occupy another’s private property without the owner’s explicit objection (or, for that matter, permission) can, after a lengthy period of time, come to legitimately acquire it.” But he doesn’t make it clear when it would be triggered as it pertains to digital platforms or APIs.

Nonetheless, one could imagine it would be one remedy antitrust officials might look to when considering what to do about Twitter exerting greater control over its API. Essentially, Twitter would become the equivalent of a public utility that all would have access to on regulated terms.

As I noted in the first of my many reviews of Zittrain’s book, there are many problems with the logic of API neutrality or the application of adverse possession in these contexts. Here’s my critique of the “API neutrality” notion (again, this is from 2008 so it now sounds a bit dated):

First, most developers who offer open APIs aren’t likely to close them later precisely because they don’t want to incur the wrath of “those who built on top of their interface.” But, second, for the sake of argument, let’s say they did want to abandon previously open APIs and move to some sort of walled garden. So what? Isn’t that called marketplace experimentation? Are we really going to make that illegal? Finally, if they were so foolish as to engage in such games, it might be the best thing that ever happened to the market and consumers since it could encourage more entry and innovation as people seek out more open, pro-generative alternatives. Consider this example: Now that Apple has opened the door to third-party iPhone development a bit with the SDK, does that mean that under Jonathan’s proposed paradigm we should treat the iPhone as the equivalent of commoditized common carriage device? That seems incredibly misguided to me. If Steve Jobs opens the development door just a little bit only to slam it shut a short time later, he will pay dearly for that mistake in the marketplace. For God’s sake, just spend a few minutes over on the Howard Forums or the PPC Geeks forum if you want to get a taste for the insane amount of tinkering going on out there in the mobile world right now on other systems. If Apple tries to roll back the clock, Microsoft and others will be all too happy to take their business by offering a wealth of devices that allow you to tinker to your heart’s content. We should let such experiments continue and let the future of the Internet be determined by market choices, not regulatory choices such as forced API neutrality.

I think the same critique would apply to efforts to impose API neutrality on Twitter.  Regardless, would such a remedy be imposed through targeted regulatory action, an antitrust consent decree, or perhaps through what Tim Wu calls “agency threats”?

Wu’s “Threat Regime” Model of Internet Governance

Prof. Wu recently published a law review article on “Agency Threats” and what he approvingly refers to as “threat regimes.” The paper is a “defense of regulatory threats in particular contexts.”  Here’s a portion of the abstract:

The use of threats instead of law can be a useful choice — not simply a procedural end run. My argument is that the merits of any regulative modality cannot be determined without reference to the state of the industry being regulated. Threat regimes, I suggest, are important and are best justified when the industry is undergoing rapid change — under conditions of “high uncertainty.” Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known.

I’m extremely troubled by this reasoning and can think of several alternative labels for such behavior by government agencies: unaccountable, above-the-law, unconstitutional, anti-democratic, thuggery, regulatory blackmail, and so on.

But what’s even more troubling about Wu’s thinking about “threat regimes” is that he assumes this arbitrary mode of governing-by-intimidation makes even more sense in fast-moving high-tech industries. That seems counter-intuitive. If a given sector finds itself in a state of “high uncertainty” as Wu calls it, doesn’t that mean, by definition, it is dynamic and subject to forces that might bring about beneficial change? And shouldn’t we assume that those are the last sectors we would want regulators monkeying with since bureaucrats lack the requisite knowledge of how to best guide the evolution of complex information technologies?

Wu seems to believe that regulators possess a crystal ball and a set of magical dials that can guide the evolution of technology markets to a better equilibrium through the use of constant Sunstein-ian “nudges” (or perhaps shoves).  I think that’s poppycock.

Regardless, once we realize that this is the way Tim Wu thinks, an FTC investigation into Twitter’s current business practices starts to make a lot more sense. It’s about creating a “threat regime” that intimidates Twitter into playing by the arbitrary rules of Washington bureaucrats instead of responding to marketplace demands and developments in a natural, evolutionary way. In fact, in his “threats” essay, Wu explicitly rejects that model:

The second option—“wait and see”—may sound attractive because it allows the industry to develop in what might be called a natural way. This approach, however, makes a great sacrifice: the public’s interest may be entirely unrepresented during the industry’s formative period. The risk is that the industry’s norms and business models will, effectively, be set without any public input. Waiting for the industry to settle down may result in undesirable practices that prove extremely hard to reverse or influence with rules issued later. To state the matter more colloquially, the industry may be “baked” by the time there is any real oversight or public input.

In essence, Wu desires a “mixed economy” model for high-tech sectors in which decisions are guided at every juncture by the supposed wisdom of technocratic philosopher kings like himself. We must trust that he and his fellow regulators will guide us and our economy down a more enlightened path. And we must accept that some “threats” may be necessary to get the job done.

I find this mode of thinking disturbing in the extreme because of the rank hubris at the center of it. Regardless, Twitter appears to be well on its way to becoming a test case for Wu’s “threat” model of Internet governance.

Government Control of Language and Other Protocols (June 6, 2011)
https://techliberation.com/2011/06/06/government-control-of-language-and-other-protocols/

It might be tempting to laugh at France’s ban on words like “Facebook” and “Twitter” in the media. France’s Conseil Supérieur de l’Audiovisuel recently ruled that specific references to these sites (in stories not about them) would violate a 1992 law banning “secret” advertising. The council was created in 1989 to ensure fairness in French audiovisual communications, such as in allocation of television time to political candidates, and to protect children from some types of programming.

Sure, laugh at the French. But not for too long. The United States has similarly busy-bodied regulators, who, for example, have primly regulated such advertising themselves. American regulators carefully oversee non-secret advertising, too. Our government nannies equal the French in usurping parents’ decisions about children’s access to media. And the Federal Communications Commission endlessly plays footsie with speech regulation.

In the United States, banning words seems too blatant an affront to our First Amendment, but the United States has a fairly lively “English only” movement. Somehow, regulating an entire communications protocol doesn’t have the same censorious stink.

So it is that our Federal Communications Commission asserts a right to regulate the delivery of Internet service. The protocols on which the Internet runs are communications protocols, remember. Withdraw private control of them and you’ve got a more thoroughgoing and insidious form of speech control: it may look like speech rights remain with the people, but government controls the medium over which the speech travels.

The government has sought to control protocols in the past and will continue to do so in the future. The “crypto wars,” in which government tried to control secure communications protocols, merely presage struggles of the future. Perhaps the next battle will be over BitCoin, an online currency that is resistant to surveillance and confiscation. In BitCoin, communications and value transfer are melded together. To protect us from the scourge of illegal drugs and the recently manufactured crime of “money laundering,” governments will almost certainly seek to bar us from trading with one another and transferring our wealth securely and privately.

So laugh at France. But don’t laugh too hard. Leave the smugness to them.

Super-Injunction Dysfunction & Information Control Follies (June 1, 2011)
https://techliberation.com/2011/06/01/super-injunction-dysfunction-information-control-follies/

My latest Forbes column is entitled “With Freedom of Speech, The Technological Genie Is Out of the Bottle,” and it’s a look back at the amazing events that unfolded over the past week in the U.K. regarding privacy, free speech, and Twitter. I’m speaking, of course, about the “super-injunction” mess. I relate this episode to the ongoing research Jerry Brito and I are doing examining the increasing challenges of information control.

I begin by noting that:

When it comes to freedom of speech in the age of Twitter, for better or worse, the genie is out of the bottle. Controlling information flows on the Internet has always been challenging, but new communications technologies and media platforms make it increasingly difficult for governments to crack down on speech and data dissemination now that the masses are empowered. The most recent exhibit in the information control follies comes from the United Kingdom, where in the span of just one week the country’s enhanced libel law procedure was rendered a farce.

I go on to explain how Britain’s super-injunction regulatory regime unraveled so quickly and why it’s unlikely to be effectively enforceable in the future. Read the entire essay over at Forbes and then also check out Jerry’s Time TechLand editorial from last week, “Twitter’s Super-Duper U.K. Censorship Trouble.” I also just saw this piece by British defamation expert John Maher: “Law Playing Catch-up with New Media.” It’s worth a read.

Some Metrics Regarding the Volume of Online Activity (May 18, 2011)
https://techliberation.com/2011/05/18/some-metrics-regarding-the-volume-of-online-activity/

One of my favorite topics lately has been the challenges faced by information control regimes. Jerry Brito and I are writing a big paper on this issue right now. Part of the story we tell is that the sheer scale / volume of modern information flows is becoming so overwhelming that it raises practical questions about just how effective any info control regime can be. [See our recent essays on the topic: 1, 2, 3, 4, 5.]  As we continue our research, we’ve been attempting to unearth some good metrics / factoids to help tell this story.  It’s challenging because there aren’t many consistent data sets depicting online data growth over time and some of the best anecdotes from key digital companies are only released sporadically. Anyway, I’d love to hear from others about good metrics and data sets that we should be examining.  In the meantime, here are a few fun facts I’ve unearthed in my research so far. Please let me know if more recent data is available. [Note: Last updated 7/18/11]

  • Facebook: users submit around 650,000 comments on the 100 million pieces of content served up every minute on its site.[1]  People on Facebook install 20 million applications every day.[2]
  • YouTube: every minute, 48 hours of video are uploaded.  According to Peter Kafka of The Wall Street Journal, “That’s up 37 percent in the last six months, and 100 percent in the last year. YouTube says the increase comes in part because it’s easier than ever to upload stuff, and in part because YouTube has started embracing lengthy live streaming sessions. YouTube users are now watching more than 3 billion videos a day. That’s up 50 percent from the last year, which is also a huge leap, though the growth rate has declined a bit: Last year, views doubled from a billion a day to two billion in six months.”[3]
  • eBay is now the world’s largest online marketplace with more than 90 million active users globally and $60 billion in transactions annually, or $2,000 every second.[4]
  • Google: 34,000 searches per second (2 million per minute; 121 million per hour; 3 billion per day; 88 billion per month).[5]
  • Twitter already has 300 million users producing 140 million Tweets a day, which adds up to a billion Tweets every 8 days[6] (@ 1,600 Tweets per second)  “On the first day Twitter was made available to the public, 224 tweets were sent. Today, that number of updates are posted at least 10 times a second.”[7]
  • Apple: more than 10 billion apps have been downloaded from its App Store by customers in over 77 countries.[8] According to Chris Burns of SlashGear, “Currently it appears that another thousand apps are downloaded every 9 seconds in the Android Marketplace while every 3 seconds another 1,000 apps are downloaded in the App Store.”
  • Yelp: as of July 2011 the site hosted over 18 million user reviews.[9]
  • Wikipedia: Every six weeks, there are 10 million edits made to Wikipedia.[10]
  • “Humankind shared 65 exabytes of information in 2007, the equivalent of every person in the world sending out the contents of six newspapers every day.”[11]
  • Researchers at the San Diego Supercomputer Center at the University of California, San Diego, estimate that, in 2008, the world’s 27 million business servers processed 9.57 zettabytes, or 9,570,000,000,000,000,000,000 bytes of information.  This is “the digital equivalent of a 5.6-billion-mile-high stack of books from Earth to Neptune and back to Earth, repeated about 20 times a year.” The study also estimated that enterprise server workloads are doubling about every two years, “which means that by 2024 the world’s enterprise servers will annually process the digital equivalent of a stack of books extending more than 4.37 light-years to Alpha Centauri, our closest neighboring star system in the Milky Way Galaxy.”[12]
  • According to Dave Evans, Cisco’s chief futurist and chief technologist for the Cisco Internet Business Solutions Group, about 5 exabytes of unique information were created in 2008. That’s 1 billion DVDs. Fast forward three years and we are creating 1.2 zettabytes, with one zettabyte equal to 1,024 exabytes. “This is the same as every person on Earth tweeting for 100 years, or 125 million years of your favorite one-hour TV show,” says Evans. Our love of high-definition video accounts for much of the increase. By Cisco’s count, 91% of Internet data in 2015 will be video.[13]
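Several of the per-second figures above are simple derivations from the daily or annual totals cited alongside them. As a sanity check, here is a quick back-of-envelope sketch in Python using only the round numbers quoted in this post:

```python
# Back-of-envelope checks on a few of the per-second figures cited above.
# All inputs are the round numbers quoted in the post itself.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Twitter: 140 million Tweets/day -> Tweets per second
tweets_per_sec = 140_000_000 / SECONDS_PER_DAY
print(f"Tweets/sec: {tweets_per_sec:,.0f}")   # ~1,620, close to the ~1,600 cited

# eBay: $60 billion in transactions annually -> dollars per second
ebay_per_sec = 60_000_000_000 / (365 * SECONDS_PER_DAY)
print(f"eBay $/sec: {ebay_per_sec:,.0f}")     # ~1,900, same order as the $2,000 cited

# Google: 34,000 searches/sec -> searches per day
searches_per_day = 34_000 * SECONDS_PER_DAY
print(f"Searches/day: {searches_per_day:,}")  # ~2.9 billion, matching "3 billion per day"
```

The figures hang together reasonably well, which is some comfort given how sporadically the underlying company data is released.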


[1]     Ken Deeter, “Live Commenting: Behind the Scenes,” Facebook.com, February 7, 2011, http://www.facebook.com/note.php?note_id=496077348919.
[4]     eBay, “Who We Are,” http://www.ebayinc.com/who
[5]     Matt McGee, “By The Numbers: Twitter Vs. Facebook Vs. Google Buzz,” SearchEngineLand, February 23, 2010, http://searchengineland.com/by-the-numbers-twitter-vs-facebook-vs-google-buzz-36709
[7]     Nicholas Jackson, “Infographic: A Look at Twitter’s Explosive Five-Year History,” The Atlantic, July 18, 2011, http://www.theatlantic.com/technology/archive/2011/07/infographic-a-look-at-twitters-explosive-five-year-history/242070
[9]     “10 Things You Should Know about Yelp,” Yelp.com, http://www.yelp.com/about [accessed July 18, 2011]
[10]   “Wikipedia: Edit Growth Measured in Time between Every 10,000,000th Edit,” http://en.wikipedia.org/wiki/User:Katalaveno/TBE
[11]   Martin Hilbert and Priscila Lopez, “The World’s Technological Capacity to Store, Communicate, and Compute Information,” Science, February 10, 2011, http://annenberg.usc.edu/News%20and%20Events/News/110210Hilbert.aspx.
[12]   Rex Graham, “Business Information Consumption: 9,570,000,000,000,000,000,000 Bytes per Year,” UC San Diego News Center, April 6, 2011, http://ucsdnews.ucsd.edu/newsrel/general/04-05BusinessInformation.asp.
[13]   Julie Bort, “10 Technologies That Will Change the World in the Next 10 Years,” Network World, July 15, 2011, http://m.networkworld.com/news/2011/071511-cisco-futurist.html?page=1
A Vision of (Regulatory) Things to Come for Twitter? (March 13, 2011)
https://techliberation.com/2011/03/13/a-vision-of-regulatory-things-to-come-for-twitter/

Twitter could be in for a world of potential pain. Regulatory pain, that is. The company’s announcement on Friday that it would soon be cracking down on the uses of its API by third parties is raising eyebrows in cyberspace and, if recent regulatory history is any indicator, this high-tech innovator could soon face some heat from regulatory advocates and public policy makers. If this thing goes down as I describe it below, it will be one hell of a fight that once again features warring conceptions of “Internet freedom” butting heads over the question of whether Twitter should be forced to share its API with rivals via some sort of “open access” regulatory regime or “API neutrality,” in particular. I’ll explore that possibility in this essay. First, a bit of background.

Understanding Forced Access Regulation

In the field of communications law, the dominant public policy fight of the past 15 years has been the battle over “open access” and “neutrality” regulation. Generally speaking, open access regulations demand that a company share its property (networks, systems, devices, or code) with rivals on terms established by law. Neutrality regulation is a variant of open access regulation, which also requires that systems be used in ways specified by law, but usually without the physical sharing requirements. Both forms of regulation derive from traditional common carriage principles / regulatory regimes. Critics of such regulation, which would most definitely include me, decry the inefficiencies associated with such “forced access” regimes, as we prefer to label them. Forced access regulation also raises certain constitutional issues related to First and Fifth Amendment rights of speech and property.

The Telecommunications Act of 1996 got this ball rolling with its mandated access provisions for local phone service. To make a very long and tortured history much shorter, this was a battle over how far law should go to force local telephone companies to share their phone lines with rivals at regulated rates. (Check this old piece of mine for a flavor of how well that turned out.) The advocates of open access regulation eventually turned their attention to cable systems and tried (but failed) to apply similar sharing / access rules there. Following those fights, which involved many nasty court skirmishes, the Net neutrality wars broke out. Net neutrality is a type of forced access regime for broadband platforms. Although Net neutrality regulation would not necessarily require carriers to share networks with rivals, it would at least require that platform providers play by special access and interconnection rules set by federal regulators.

Forced access provisions have been used in other contexts. We might think of the provisions we saw at work in the Microsoft antitrust case as a form of forced access regulation. Some may also recall the interconnection provisions that governed AOL’s instant messaging service following its merger with Time Warner (discussed more below). There are other examples, but I think you get the point.

New Frontiers for Forced Access Regulation?

All this history is well known to all readers of this blog and followers of communications policy. The reason I repeat it here is because this fight is now spreading to new sectors, platforms, and technologies.

For example, “search neutrality” is one of those new frontiers of the forced access fight. Some academics and regulatory advocates are pushing for rules that would govern how search results are shown or for special requirements on search providers to eliminate supposed “search bias” or to ensure search “fairness” of various sorts. Make sure to read James Grimmelmann’s terrific treatment of the concept from his chapter in TechFreedom’s book, The Next Digital Decade, and then also listen to this podcast featuring Danny Sullivan dissecting the issue.

Some critics also want to treat search engines (and Google in particular) as “essential facilities.” In another essay from The Next Digital Decade, Geoff Manne has done a good job pointing out why that’s such a misguided idea.

Similarly, some folks (such as danah boyd) are already calling for Facebook to be regulated as a public utility or essential facility. I responded in my essay, “Facebook Isn’t a “Utility” & You Certainly Shouldn’t Want it to Be Regulated As Such,” in which I pointed out that Facebook isn’t exactly a “life-essential” service that is gouging customers, who have plenty of other choices in social networking services.

Adverse Possession & API Neutrality for Twitter?

An equally interesting battle is now set to unfold for Twitter following Friday’s announced changes. To get a flavor for what might lie ahead for the company, we might begin by taking a second look at what Harvard University’s Jonathan Zittrain proposed in his 2008 book, The Future of the Internet and How to Stop It. In that book, Zittrain suggested that “API neutrality” might be needed to ensure fair access to certain cyber-services or digital platforms to ensure “generativity” was not imperiled. On pg. 181 of the book, Zittrain argued that:

“If there is a present worldwide threat to neutrality in the movement of bits, it comes not from restrictions on traditional Internet access that can be evaded using generative PCs, but from enhancements to traditional and emerging appliancized services that are not open to third-party tinkering.”

After engaging in some hand-wringing about “walled gardens” and “mediated experiences,” Zittrain went on to ask: “So when should we consider network neutrality-style mandates for appliancized systems?” He responds to his own question as follows:

“The answer lies in that subset of appliancized systems that seeks to gain the benefits of third-party contributions while reserving the right to exclude it later. … Those who offer open APIs on the Net in an attempt to harness the generative cycle ought to remain application-neutral after their efforts have succeeded, so all those who built on top of their interface can continue to do so on equal terms.” (p. 184)

This might be a fine generic principle, but Zittrain implies that this should be a legal standard to which online providers are held. At one point, he even alludes to the possibility of applying the common law principle of adverse possession more broadly in these contexts. He notes that adverse possession “dictates that people who openly occupy another’s private property without the owner’s explicit objection (or, for that matter, permission) can, after a lengthy period of time, come to legitimately acquire it.” But he doesn’t make it clear when it would be triggered as it pertains to digital platforms or APIs.

As I noted in the first of my many reviews of his book, there are many problems with the logic of API neutrality or the application of adverse possession in these contexts. Here’s my critique of the “API neutrality” notion (again, this is from 2008):

First, most developers who offer open APIs aren’t likely to close them later precisely because they don’t want to incur the wrath of “those who built on top of their interface.” But, second, for the sake of argument, let’s say they did want to abandon previously open APIs and move to some sort of walled garden. So what? Isn’t that called marketplace experimentation? Are we really going to make that illegal? Finally, if they were so foolish as to engage in such games, it might be the best thing that ever happened to the market and consumers since it could encourage more entry and innovation as people seek out more open, pro-generative alternatives. Consider this example: Now that Apple has opened the door to third-party iPhone development a bit with the SDK, does that mean that under Jonathan’s proposed paradigm we should treat the iPhone as the equivalent of a commoditized common carriage device? That seems incredibly misguided to me. If Steve Jobs opens the development door just a little bit only to slam it shut a short time later, he will pay dearly for that mistake in the marketplace. For God’s sake, just spend a few minutes over on the Howard Forums or the PPC Geeks forum if you want to get a taste for the insane amount of tinkering going on out there in the mobile world right now on other systems. If Apple tries to roll back the clock, Microsoft and others will be all too happy to take their business by offering a wealth of devices that allow you to tinker to your heart’s content. We should let such experiments continue and let the future of the Internet be determined by market choices, not regulatory choices such as forced API neutrality.

I think the same critique would apply to efforts to impose API neutrality on Twitter. But before going into more detail, we need to first ask another question: Does Twitter possess “market power” such that their actions warrant antitrust or regulatory scrutiny at all?

But Isn’t Twitter a “Monopoly”?

Savvy readers will recall that influential Columbia Law School cyberlaw professor Tim Wu has already labeled Twitter a “monopoly,” although he has not yet bothered telling us what the relevant market is here. As I pointed out in an essay critiquing the way Prof. Wu flippantly assigns the label “monopoly” to just about any big tech provider, it’s very much unclear what to call the market Twitter serves. After all, the service is only a few years old and competes with many other forms of communication and information dissemination. For me, Twitter is a partial substitute for blogging, IMs, email, phone calls, and my RSS feed. Yet, like most others, I continue to use all those other technologies and those technologies continue to pressure Twitter to innovate.

Regardless, Prof. Wu is now in a position to put his ideas into action since he is currently serving a short tenure as special advisor to the Federal Trade Commission (FTC).  Might he act on his instincts, therefore, and advise the agency to take action against Twitter? It is unlikely that Prof. Wu will be around the FTC long enough to help them bring any sort of formal action against Twitter, but he could help lay the groundwork for a creative interpretation of our nation’s antitrust laws such that Twitter somehow comes to be labeled a “monopoly” or what he refers to as an “information empire” in his new book The Master Switch. (See my last review of the book here.)

But I think he’d have a very hard time convincing the folks in the FTC’s Economics Bureau that Twitter is really worth worrying about or that it has anything approximating a “monopoly” in this emerging market, whatever that market is. But Wu has the ear of key people in government right now and could be lobbying for more expansive constructions of “information monopoly” since he made it very clear in his book that traditional antitrust analysis was not sufficient for information sectors. “[I]nformation industries… can never be properly understood as ‘normal’ industries,” Wu claimed, and even traditional forms of regulation, including antitrust, “are clearly inadequate for the regulation of information industries,” he says. (p. 303)

The Principle of the Matter

So here’s my take on the issue. Twitter is an amazing innovator. It created the space it now plays in and that market is still so new and unique that we don’t even have a name for it yet. In America, we should – and usually do – celebrate such entrepreneurialism. But sometimes certain Ivory Tower elites, regulatory-minded advocates, paternalistic policymakers, or even disgruntled competitors, claim that such innovators “owe” the rest of us something because they got rich or powerful thanks to that innovation. “Forced access” or “neutrality” mandates become a convenient regulatory prescription to achieve that end even though the motivating principle behind such regulation is, essentially, “what’s yours is mine.”

Indeed, from my perspective, the entire notion of forced access to the Twitter API could be dismissed by noting that, technically speaking, Twitter’s API is its private property and they should be free to do as they wish with it. That’s why I’m particularly concerned with Zittrain’s notion that we might consider applying adverse possession principles to any digital platform with enough users; at root, it’s a call to limit or even abolish property rights for digital platforms once they gain popularity or have a large number of users. As noted below, that has extremely dangerous ramifications for digital innovation but, more profoundly in my opinion, it is an unjust and unconstitutional taking of an innovator’s property. Of course, I understand that property rights aren’t exactly in vogue in America anymore and that this isn’t really a satisfying answer from the consumer’s perspective, so let’s continue on and consider a few other reasons why forced access regulation of Twitter via API neutrality would be a mistake.

First, we should not forget that Twitter has yet to find a way to turn its service into a serious revenue-generator. The most obvious reason for that is that Twitter (a) doesn’t charge anything for the service it provides and (b) doesn’t lock down its platform / API such that it might earn a return on its investment by monetizing eyeballs via advertising on its own platform. That’s why Twitter’s announcement on Friday won’t come as a shock to anyone with a whiff of business sense in their heads. At some point, Twitter probably had to do something like this if it wanted to find a way to monetize and grow its business.

I can hear some out there screaming “but it’s not fair!” as if there were some cosmic sense of cyber-justice that had been betrayed because Twitter had the audacity to lock down its platform. Of course, it is certainly true that some third-party app providers may suffer because of Twitter’s move here.  I’m not going to lie to you; if Twitter’s move to exert greater control over its API somehow destroys the beauty that is the TweetDeck desktop interface, I am going to be screaming mad myself! I do not think there has ever been a slicker, more user-friendly interface for any web service in Internet history than what TweetDeck offers consumers. For my money – which means nothing since TweetDeck is free! – TweetDeck is digital perfection defined. And, incidentally, I’d be happy to pay for it if they asked.

But despite my gushing love for it, let’s be clear about something: TweetDeck has no inherent right to exist. Indeed, TweetDeck owes its very existence to the fact that Twitter offered its API to the world on a completely free, unlicensed, unrestricted basis. The same holds true for all those other third-party platforms that depend upon the Twitter API. What Twitter giveth, Twitter can taketh away.

Stated differently, Twitter has thus far had a voluntary open access policy in place for the first few years of its existence but now wants to partially abandon that policy. This policy reversal will, no doubt, lead to claims that the company is acting like one of Wu’s proverbial “information empires” and that perhaps Zittrain’s API neutrality regime should be put in place as a remedy.  Indeed, Zittrain has already referred to it as a “bait-and-switch” and cited back to the provisions of his book that I outlined above. I believe that foreshadows what’s to come: more pressure from the Ivory Tower and then, potentially, from public policy makers that will first encourage and then push to force Twitter to grant access to its platform on terms set by others.  It’s a potential first step toward the forced commoditization of the Twitter API and the involuntary surrender of its property rights to some collective authority who will manage it as a “collective good,” “common carrier,” or “essential facility.”

But Consider This… (on API Neutrality and Disincentives)

Of course, the people at Twitter certainly realize how important all those third-party apps and platforms have been to growing the Twitter information empire. Thus, an overly zealous move to crush third parties by denying them the API or any incidental use of the Twitter name / branding could backfire in two ways: it could provoke a major consumer backlash, and it could spur the development of alternative platforms and entirely new types of competing services.

Vertical integration might be one way to partially alleviate those problems. Twitter could start cutting deals with existing third-party platforms that rely upon its API, bringing them under the Twitter corporate umbrella, where more standardization could occur. But Twitter doesn’t have the money to buy them all out. Moreover, Twitter doesn’t want to see dozens of interfaces under its corporate umbrella. For them, this is about “a consistent user experience.” In other words, they’d obviously prefer a more standardized platform / interface that simply got rid of some of those third-party apps and platforms altogether.

As a result, in the short term, I think we’ll likely end up with a market dominated by Twitter’s proprietary platform(s) but with a couple of other leading existing third-party providers being tolerated by the company so as not to rock the boat too much. And that’s not a bad thing. Here’s the key principle to keep in mind: If we apply API neutrality or adverse possession principles forcibly, it sends a horrible signal to entrepreneurs that basically says their platforms are theirs in name only and will be forcibly commoditized once they are popular enough. That’s a horrible disincentive to future innovation and investment. The trade-off is that we must sometimes tolerate short-term spells of “market power,” allowing entrepreneurs to realize the benefits of their past innovations and investments, if we hope to get more of them in the future.

Avoiding Static Snapshots

But wait, you say, isn’t this all quite horrible for consumers and competition? Isn’t this just Wu’s “information empire” fear manifesting itself, such that antitrust or API neutrality really is required?

Here’s where those warring conceptions of “Internet freedom” come into play. As I’ve noted here many times before in my work on the “conflict of visions” about Internet freedom, it is during what some might regard as a market’s darkest hour that some of the most exciting disruptive technologies and innovations develop. People don’t sit still; they respond to incentives, including short spells of apparently excessive private power.

By contrast, the “static snapshot” crowd gets so worked up about short-term spells of “market power” – which usually don’t represent serious market power at all – that they call for the reordering of markets to suit their tastes. Sadly, they sometimes do this under the banner of “Internet freedom,” claiming that we can “free” consumers from the supposed tyranny of the marketplace. In reality, that vision wraps markets in chains and ultimately leaves consumers worse off by stifling innovation and inviting ham-handed regulatory edicts and bureaucracies to plan this fast-paced sector of our economy.

“Splitting the Root”

And innovation is possible. Is it really so unthinkable that a Twitter competitor might come along? In a sense, TweetDeck shows the way forward. TweetDeck has already bucked Twitter’s 140-character limit by offering “Deck.ly,” an exclusive service that allows TweetDeck users to type Twitter messages longer than 140 characters, which are visible only via TweetDeck platforms. What if TweetDeck took the next bold step and offered an entirely separate API in direct competition to Twitter? It would be the tweeting equivalent of “splitting the root,” to borrow a concept from the domain name space.

Some would decry the potential lack of interoperability at first. But I bet some sharp folks out there would quickly find work-arounds. Has everyone forgotten the hand-wringing that took place over instant message interoperability just a decade ago (and the resulting restrictions placed on AOL following its merger with Time Warner)? Big bad AOL was going to eat everyone’s lunch in the IM space, don’t you remember? But all that hand-wringing about AOL’s looming monopolization of instant messaging seems particularly silly now, since anyone can download a free chat client like Digsby or Adium to manage IM services from AOL, Yahoo!, Google, Facebook and just about anyone else, all within a single interface — essentially making it irrelevant which chat service your friends use.
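The architecture behind those multi-protocol clients is worth spelling out, because it is exactly the kind of work-around the market produced: wrap each network’s wire protocol behind one common interface so the user never has to care which service a friend is on. Here is a minimal sketch of that adapter pattern (all class and method names are hypothetical, for illustration only; a real client like Adium delegates the actual protocol work to a library such as libpurple):

```python
from abc import ABC, abstractmethod

class ChatAdapter(ABC):
    """Common interface that each service-specific backend implements."""
    @abstractmethod
    def send(self, recipient: str, text: str) -> str: ...

class AIMAdapter(ChatAdapter):
    def send(self, recipient: str, text: str) -> str:
        # A real backend would speak OSCAR, AOL's IM protocol, here.
        return f"[AIM] to {recipient}: {text}"

class XMPPAdapter(ChatAdapter):
    def send(self, recipient: str, text: str) -> str:
        # A real backend would open an XMPP stream (Google Talk, Facebook Chat).
        return f"[XMPP] to {recipient}: {text}"

class UnifiedClient:
    """Routes each message to whichever network the contact actually uses."""
    def __init__(self):
        self.adapters: dict[str, ChatAdapter] = {}
        self.contacts: dict[str, str] = {}  # contact name -> network name

    def register(self, network: str, adapter: ChatAdapter) -> None:
        self.adapters[network] = adapter

    def add_contact(self, name: str, network: str) -> None:
        self.contacts[name] = network

    def message(self, name: str, text: str) -> str:
        # The caller never needs to know which service the friend is on.
        return self.adapters[self.contacts[name]].send(name, text)
```

The point of the sketch is the policy lesson: interoperability here came from code written at the edges, not from a mandate imposed on AOL.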

Again, people respond to incentives, and sometimes it takes bone-headed moves by market leaders to really get people off their butts and motivate them to code work-arounds and superior solutions. Is it so hard to imagine that a similar response might follow Twitter’s move this week? After all, we are not talking about replicating a massive physical network of pipes or towers here. We are talking about pure code, for God’s sake! Competition to Twitter is more than possible and it’s likely to come from sources and platforms we cannot currently imagine (just as few of us could have imagined something like Twitter developing just five years ago).

Conclusion

So, Twitter’s move is not an end but rather a new beginning. Personally, I think it could spawn another amazing round of innovation in this space. Again, we must not forget that we are dealing with a space that is still so new that we do not know what to call it. For that reason alone, we should be skeptical of calls for a preemptive regulatory strike. We need to have a little faith in the entrepreneurial spirit and the dynamic nature of markets built upon code, which have the uncanny ability to morph and upend themselves seemingly every few years. In the short term, Twitter will continue to possess a dominant position in whatever we call this market that it serves. But the short term is just that; it’s not the end of the story.

Now excuse me while I get back to Tweeting!

]]>
https://techliberation.com/2011/03/13/a-vision-of-regulatory-things-to-come-for-twitter/feed/ 2 35568
Does the Internet Cause Freedom? https://techliberation.com/2011/02/25/does-the-internet-cause-freedom/ https://techliberation.com/2011/02/25/does-the-internet-cause-freedom/#comments Fri, 25 Feb 2011 13:04:34 +0000 http://techliberation.com/?p=35321

That will be the subject of a Cato on Campus session this afternoon entitled: “The Internet and Social Media: Tools of Freedom or Tools of Oppression?” Watch live online at the link starting at 3:30 p.m., or attend in person. A reception follows.

The delight that so many felt at seeing protesters in Iran use social media has given way to delight about the use of Facebook to organize for freedom in Egypt. But this serial enthusiasm omits that the “Twitter revolution” in Iran did not succeed. The fiercest skeptics even suggest that the tweeting during Iran’s suppressed uprising was mostly Iranian ex-pats goosing excitable westerners, rather than any organizing force within Iran itself. Coming to terms with the Internet, dictatorships are learning to use it for surveillance and control, possibly with help from American tech companies.

So is the cause of freedom better off with the Internet? Or is social media a shiny bauble that distracts from the long, heavy slog of liberating the people of the world?

Joining the discussion will be Chris Preble, Director of Foreign Policy Studies at Cato; Alex Howard, Government 2.0 Correspondent for O’Reilly Media; and Tim Karr, Campaign Director at Free Press. More info here.

]]>
https://techliberation.com/2011/02/25/does-the-internet-cause-freedom/feed/ 2 35321
Cloud Users and Providers Win Big Privacy Victory – U.S. v. Warshak https://techliberation.com/2010/12/16/cloud-users-and-providers-win-big-privacy-victory-%e2%80%93-u-s-v-warshak/ https://techliberation.com/2010/12/16/cloud-users-and-providers-win-big-privacy-victory-%e2%80%93-u-s-v-warshak/#respond Thu, 16 Dec 2010 07:08:43 +0000 http://techliberation.com/?p=33650

The Sixth Circuit ruled on Tuesday that criminal investigators must obtain a warrant to seize user data from cloud providers, voiding parts of the notorious Stored Communications Act.  The SCA allowed investigators to demand that providers turn over user data under certain circumstances (e.g., data stored more than 180 days) without obtaining a warrant supported by probable cause.

I have a very long piece analyzing the decision, published on CNET this evening.  See “Search Warrants and Online Data: Getting Real.” (I also wrote extensively about digital search and seizure in “The Laws of Disruption.”)  The opinion is from the erudite and highly readable Judge Danny Boggs.  The case is notable if for no other reason than its detailed and lurid description of the business model for Enzyte, a supplement that promises to, well, you know what it promises to do….

The SCA’s looser rules for search and seizure created real headaches for cloud providers and weird results for criminal defendants.  Emails stored on a user’s home computer or on a service provider’s computer for less than 180 days get full Fourth Amendment protection.  But after 180 days the same emails stored remotely lose some of their privacy under some circumstances.   As the commercial Internet has evolved (the SCA was written in 1986), these provisions have become increasingly anomalous, random and worrisome, both to users and service providers.  (As well as to a wide range of public interest groups.)

Why 180 days?  I haven’t had a chance to check the legislative history, but my guess is that in 1986 data left on a service provider’s computer would have taken on the appearance of being abandoned.

Assuming the Sixth Circuit decision is upheld and embraced by other circuits, digital information will finally be covered by traditional Fourth Amendment protections regardless of age or location.  Which means that the government’s ability to seize emails (Tuesday’s case applied only to emails, but other user data would likely get the same treatment) without a warrant that is based on probable cause will turn on whether or not the defendant had a “reasonable expectation of privacy” in the data.  If the answer is yes, a warrant will be required.

(If the government seizes the data anyway, the evidence could be excluded as a penalty.  The “exclusionary rule” was not invoked in the Warshak case, however, because the government acted on a good-faith belief that the SCA was Constitutional.)

Where does the “reasonable expectation of privacy” test come from?  The Fourth Amendment protects against “unreasonable” searches and seizures, and, since the Katz decision in 1967, Fourth Amendment cases turn on an analysis of whether a criminal defendant’s expectation of privacy in whatever evidence is obtained was reasonable.

Katz involved an electronic listening device attached to the outside of a phone booth—an early form of electronic surveillance.  Discussions about whether a phone conversation could be “searched” or “seized” got quickly metaphysical, so the U.S. Supreme Court decided that what the Fourth Amendment really protected was the privacy interest a defendant had in whatever evidence the government obtained.  “Reasonable expectation of privacy” covered all the defendant’s “effects,” whether tangible or intangible.

Which means, importantly, that not all stored data would pass the test requiring a warrant.   Only stored data that the user reasonably expects to be kept private by the service provider would require a warrant.  Information of any kind that the defendant makes no effort to keep private—e.g., talking on a  cell phone in a public place where anyone can hear—can be used as evidence without a warrant.

Here the Warshak court suggested that if the terms of service were explicit that user data would not be kept private, then users wouldn’t have a reasonable expectation of privacy that the Fourth Amendment protected.  On the other hand, terms that reserved the service provider’s own right to audit or inspect user data did not defeat a reasonable expectation of privacy, as the government has long argued.

An interesting test case, not discussed in the opinion, would be Twitter.  Could a criminal investigator demand copies of a defendant’s Tweets without a warrant, arguing that Tweets are by design public information?  On the one hand, Twitter users can exclude followers they don’t want.  But at the same time, allowed followers can retweet without the permission of the original poster.   So, is there a reasonable expectation of privacy here?

There’s no answer to this simplified  hypothetical (yet), but it is precisely the kind of analysis that courts perform when a defendant challenges the government’s acquisition of evidence without full Fourth Amendment process being followed.

To pick an instructive tangible evidence example, last month appellate Judge Richard Posner wrote a fascinating decision that shows the legal mind in its most subtle workings.  In U.S. v. Simms, the defendant challenged the inclusion of evidence that stemmed from a warranted search of his home and vehicle.  The probable cause that led to the warrant was the discovery in the defendant’s trash of marijuana cigarette butts.  The defendant argued that the search leading to the warrant was a violation of the Fourth Amendment, since the trash can was behind a high fence on his property.

Courts have held that once trash is taken to the curb, the defendant has no “reasonable” expectation of privacy and therefore is deemed to consent to a police officer’s search of that trash.  But trash cans behind a fence are generally protected by the Fourth Amendment, subject to several other exceptions.

Here Judge Posner noted that the defendant’s city had an ordinance that prohibited taking the trash to the curb during the winter, out of concern that cans would interfere with snow plowing.  Instead, the “winter rules” require that trash collectors take the cans from the resident’s property, and that the residents leave a safe and unobstructed path to wherever the cans are stored.  Since the winter rules were in effect, and the cans were left behind a fence but the gate was left open (perhaps stuck in the snow), and the police searched them on trash pickup day, the search did not violate the defendant’s reasonable expectation of privacy.

For better or worse, this is the kind of analysis judges must perform in the post-Katz era, when much of what we consider to be private is not memorialized in papers or other physical effects but is likely to be intangible—the state of our blood chemistry, information stored in various databases, heat given off and detectable by infrared scanners.

The good news is that the Warshak case is a big step in including digital information under that understanding of the Fourth Amendment.  Search and seizure is evolving to catch up with the reality of our digital lives.

]]>
https://techliberation.com/2010/12/16/cloud-users-and-providers-win-big-privacy-victory-%e2%80%93-u-s-v-warshak/feed/ 0 33650
Location-Based Services: A Privacy Check-In https://techliberation.com/2010/12/06/location-based-services-a-privacy-check-in/ https://techliberation.com/2010/12/06/location-based-services-a-privacy-check-in/#respond Mon, 06 Dec 2010 21:24:50 +0000 http://techliberation.com/?p=33274

The ACLU of Northern California says it’s time for a privacy check-in on location-based services. Their handy chart compares several of the most popular location-based services along a number of dimensions.

Little of what they examine has to do with civil liberties—cough, cough, ahem (this is a favorite critique of mine for my ACLU friends)—but the report does find that five out of six location-based providers are unclear about whether they require a warrant before handing information over to the government. Facebook is the winner here. Yelp, Foursquare, Gowalla, Loopt, and Twitter are unclear about whether they protect your location data from government prying.

]]>
https://techliberation.com/2010/12/06/location-based-services-a-privacy-check-in/feed/ 0 33274
Social Media Replacing Traditional News https://techliberation.com/2010/12/06/social-media-replacing-traditional-news/ https://techliberation.com/2010/12/06/social-media-replacing-traditional-news/#respond Mon, 06 Dec 2010 18:44:26 +0000 http://techliberation.com/?p=33456

I am so gonna retweet this.

]]>
https://techliberation.com/2010/12/06/social-media-replacing-traditional-news/feed/ 0 33456