Technology Liberation Front | http://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

NETmundial is about to begin
Wed, 23 Apr 2014 | http://techliberation.com/2014/04/23/netmundial-is-about-to-begin/

As I blogged last week, I am in São Paulo to attend NETmundial, the meeting on the future of Internet governance hosted by the Brazilian government. The opening ceremony is about to begin. A few more observations:

  • The Brazilian Senate passed the landmark Marco Civil bill last night, and Dilma Rousseff, the Brazilian president, may use her appearance here today to sign it into law. The bill subjects data about Brazilians stored anywhere in the world to Brazilian jurisdiction and imposes net neutrality domestically. It also provides a safe harbor for ISPs and creates a notice-and-takedown system for offensive content.
  • Some participants are framing aspects of the meeting, particularly the condemnation of mass surveillance in the draft outcome document, as civil society v. the US government. There is a lot of concern that the US will somehow water down the surveillance language so that it doesn’t apply to the NSA’s surveillance. WikiLeaks has stoked some of this concern with breathless tweets. I don’t see events playing out this way. I am as opposed to mass US surveillance as anyone, but I haven’t seen much resistance from the US government participants in this regard. Most of the comments by the US on the draft have been benign. For example, WikiLeaks claimed that the US “stripped” language referring to the UN Human Rights Council; in fact, the US hasn’t stripped anything because it is not in charge (it can only make suggestions), and eliminating the reference to the HRC is actually a good idea because the HRC is a multilateral, not a multistakeholder, body. I expect a strong anti-surveillance statement to be included in the final outcome document. If it is not, it will probably be other governments, not the US, that block it.
  • In my view, however, the privacy section of the draft still needs work. In particular, the surveillance paragraph should be cabined so that it addresses governmental surveillance without interfering with voluntary, private arrangements in which users disclose information in exchange for free services.
  • I expect discussions over net neutrality to be somewhat contentious. Civil society participants are generally for it, with some governments, businesses, parts of the technical community, and yours truly opposed.
  • Although surveillance and net neutrality have received a lot of attention, they are not the most important issues at NETmundial. Instead, look for the language that will affect “the future of Internet governance,” which is after all what the meeting is about. For example, will the language on stakeholders’ “respective roles and responsibilities” be stricken? This language is held over from the Tunis Agenda, and it carries a lot of meaning: do stakeholders participate as equals, or do they, especially governments, have separate roles? There is also a paragraph on “enhanced cooperation,” which is a codeword for governments running the show. Check whether it survives in the final draft.
  • Speaking of the final draft, here is how it will be produced: During the meeting, participants will have opportunities to make 2-minute interventions on specific topics. The drafting group will make note of the comments and then retreat to a drafting room to make final edits to the draft. This is, of course, not really the open governance process that many of us want for the Internet; instead, select, unaccountable participants have the final say. Yet two days is not long enough for a truly open, free-wheeling drafting conference. I think the structure of the conference, driven by the perceived need to produce an outcome document with certainty, is unfortunate and somewhat detracts from the legitimacy of whatever is produced, even though I expect the final document to be OK on substance.
Will the FCC Force Television Online Even If Aereo Loses in Court?
Tue, 22 Apr 2014 | http://techliberation.com/2014/04/22/will-the-fcc-force-television-online-even-if-aereo-loses-in-court/

The Supreme Court hears oral arguments today in a case that will decide whether Aereo, an over-the-top video distributor, can retransmit broadcast television signals online without obtaining a copyright license. If the court rules in Aereo’s favor, national programming networks might stop distributing their programming for free over the air, and without prime-time programming, local TV stations might go out of business across the country. It’s a make-or-break case for Aereo, but for broadcasters, it represents only one piece of a broader regulatory puzzle regarding the future of over-the-air television.

If the court rules in favor of the broadcasters, they could still lose at the Federal Communications Commission (FCC). At a National Association of Broadcasters (NAB) event earlier this month, FCC Chairman Tom Wheeler focused on “the opportunity for broadcast licensees in the 21st century . . . to provide over-the-top services.” According to Chairman Wheeler, TV stations shouldn’t limit themselves to being in the “television” business, because their “business horizons are greater than [their] current product.” Wheeler wants TV stations to become over-the-top “information providers”, and he sees the FCC’s role as helping them redefine themselves as a “growing source of competition” in that market segment.

If TV stations share Chairman Wheeler’s vision for their future, the FCC’s “help” in redefining the role of broadcast licensees in the digital era could be a win rather than a loss. If Wheeler truly seeks to enable TV stations to deliver a competitive fixed and mobile cable-like service, it could signal a positive shift in the FCC’s traditionally stagnant approach to broadcast regulation.

As with all regulatory pronouncements, the devil is in the details, notwithstanding the existing and legitimate skepticism among TV stations as to whether the FCC can and will treat them fairly in the future. For better or worse, many will judge the “success” of the broadcast incentive auction by the amount of revenue it raises. This reality gives the FCC unique incentives to “encourage” TV stations to give up their spectrum licenses. In Washington, “encouragement” can range from polite entreaty to regulatory pain.

After the FCC imposed new ownership limits on TV stations last month, some fear the agency will choose pain as its persuader. That action prompts them to ask: if Wheeler is sincere in his desire to help broadcasters pivot to a broader business model, why impose new ownership limits on TV stations that could hinder their ability to compete with cable and over-the-top companies?

Chairman Wheeler attempted to address this question in his NAB speech, but his answer was oddly inconsistent with his broader vision. He said the FCC’s new ownership limits are rooted in the traditional goals of competition, diversity, and localism among TV stations. That only makes sense, however, if you believe TV stations should compete only with other TV stations. Imposing new ownership limits on TV stations won’t help them pivot to a future in which they compete in a broader “information provider” market; it will hinder them.

I expect TV station owners are wondering: If we accept Chairman Wheeler’s invitation to look beyond our current product, will he meet us on the horizon? Or will we find ourselves standing there alone? It’s hard to predict the future, because the future is always just over the horizon.

Patrick Byrne on online retailers accepting Bitcoin
Tue, 22 Apr 2014 | http://techliberation.com/2014/04/22/byrne/

Patrick Byrne, CEO of Overstock.com, discusses how Overstock.com became one of the first online retail stores to accept Bitcoin. Byrne provides insight into how Bitcoin lowers transaction costs, making it beneficial to both retailers and consumers, and how governments are attempting to limit access to Bitcoin. Byrne also discusses his project DeepCapture.com, which raises awareness of market manipulation and naked short selling, as well as his philanthropic work and support for education reform.
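
As a rough illustration of the transaction-cost point Byrne makes, the sketch below compares a percentage-plus-fixed card fee with a flat network fee. The fee figures are hypothetical placeholders, not numbers from the interview.

```python
# Hypothetical fee comparison for an online retailer.
# The card schedule (2.9% + $0.30) and the flat Bitcoin network fee
# are illustrative assumptions, not figures from the podcast.

def card_fee(amount: float, percent: float = 0.029, fixed: float = 0.30) -> float:
    """Card-style fee: a percentage of the sale plus a fixed charge."""
    return amount * percent + fixed

def bitcoin_fee(amount: float, flat: float = 0.05) -> float:
    """Miner fees are roughly independent of the transaction's dollar size."""
    return flat

for price in (10.00, 100.00, 1000.00):
    print(f"${price:>8.2f} sale: card fee ${card_fee(price):6.2f}, "
          f"bitcoin fee ${bitcoin_fee(price):6.2f}")
```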

Pre-NETmundial Notes
Fri, 18 Apr 2014 | http://techliberation.com/2014/04/18/pre-netmundial-notes/

Next week I’ll be in São Paulo for the NETmundial meeting, which will discuss “the future of Internet governance.” I’ll blog more while I’m there, but for now I just wanted to make a few quick notes.

  • This is the first meeting of its kind, so it’s difficult to know what to expect, in part because it’s not clear what others’ expectations are. There is a draft outcome document, but no one knows how significant it will be or what weight it will carry in other fora.
  • The draft outcome document is available here. The web-based tool for commenting on individual paragraphs is quite nice. Anyone in the world can submit comments on a paragraph-by-paragraph basis. I think this is a good way to lower the barriers to participation and get a lot of feedback.
  • I worry that we won’t have enough time to give due consideration to the feedback being gathered. The meeting is only two days long. If you’ve ever participated in a drafting conference, you know that this is not a lot of time. What this means, unfortunately, is that the draft document may be something of a fait accompli. Undoubtedly it will change a little, but the changes that can be contemplated will be limited by sheer time constraints.
  • Time will be even more constrained by the absurd amount of time allocated to opening ceremonies and welcome remarks. The opening ceremony begins at 9:30 am and the welcome remarks are not scheduled to conclude until 1 pm on the first day. This is followed by a lunch break, and then a short panel on setting goals for NETmundial, so that the first drafting session doesn’t begin until 2:30 pm. This seems like a mistake.
  • Speaking of the agenda, it was not released until yesterday. While NETmundial has indeed been open to participation by all, it has not been very transparent. An earlier draft outcome document had to be leaked by WikiLeaks on April 8. Not releasing an agenda until a few days before the event is also not very transparent. In addition, the processes by which decisions have been made have not been transparent to outsiders.

See you all next week.

New Paper on the Cybersecurity Framework
Thu, 17 Apr 2014 | http://techliberation.com/2014/04/17/new-paper-on-the-cybersecurity-framework/

Andrea Castillo and I have a new paper out from the Mercatus Center entitled “Why the Cybersecurity Framework Will Make Us Less Secure.” We contrast emergent, decentralized, dynamic provision of security with centralized, technocratic cybersecurity plans. Money quote:

The Cybersecurity Framework attempts to promote the outcomes of dynamic cybersecurity provision without the critical incentives, experimentation, and processes that undergird dynamism. The framework would replace this creative process with one rigid incentive toward compliance with recommended federal standards. The Cybersecurity Framework primarily seeks to establish defined roles through the Framework Profiles and assign them to specific groups. This is the wrong approach. Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts. What’s more, an assessment of DHS critical infrastructure categorizations by the Government Accountability Office (GAO) finds that the DHS itself has failed to adequately communicate its internal categories with other government bodies. Adding to the confusion is the proliferating amalgam of committees, agencies, and councils that are necessarily invited to the table as the number of “critical” infrastructures increases. By blindly beating the drums of cyber war and allowing unfocused anxieties to clumsily force a rigid structure onto a complex system, policymakers lose sight of the “far broader range of potentially dangerous occurrences involving cyber-means and targets, including failure due to human error, technical problems, and market failure apart from malicious attacks.” When most infrastructures are considered “critical,” then none of them really are.

We argue that instead of adopting a technocratic approach, the government should take steps to improve the existing emergent security apparatus. This means declassifying information about potential vulnerabilities and kickstarting the cybersecurity insurance market by buying insurance for federal agencies, which experienced 22,000 breaches in 2012. Read the whole thing, as they say.

Renters and Rent-Seeking in San Francisco
Tue, 15 Apr 2014 | http://techliberation.com/2014/04/15/renters-and-rent-seeking-in-san-francisco/

[The following essay is a guest post from Dan Rothschild, director of state projects and a senior fellow with the R Street Institute.]

As anyone who’s lived in a major coastal American city knows, apartment renting is about as far from an unregulated free market as you can get. Legal and regulatory stipulations govern rents and rent increases, what can and cannot be included in a lease, even what constitutes a bedroom. And while the costs and benefits of most housing policies can be debated and deliberated, it’s generally well known that housing rentals are subject to extensive regulation.

But some San Francisco tenants have recently learned that, in addition to their civil responsibilities under the law, their failure to live up to some parts of the city’s housing code may trigger harsh criminal penalties as well. To wit: tenants who have been subletting part or all of their apartments on a short-term basis, usually through web sites like Airbnb, are being given 72 hours to vacate their (often rent-controlled) homes.

San Francisco’s housing stock is one of the most highly regulated in the country. The city uses a number of tools to preserve affordable housing and control rents, while at the same time largely prohibiting taller buildings that would bring more units online, increasing supply and lowering prices. California’s Ellis Act provides virtually the only legal and effective means of getting tenants (especially those benefiting from rent control) out of their units, but it has the perverse effect of causing landlords to demolish otherwise usable housing stock.

Again, the efficiency and equity ramifications of these policies can be discussed; the fact that demand curves slope downward, however, is really not up for debate.

Under San Francisco’s municipal code, renting an apartment on a short-term basis may be a crime punishable by jail time. More importantly, the code gives landlords the excuse they need to evict tenants they otherwise couldn’t under the city’s and state’s rigorous tenant-protection laws. After all, they’re criminals!

Here’s the relevant section of the code:

Any owner who rents an apartment unit for tourist or transient use as defined in this Chapter shall be guilty of a misdemeanor. Any person convicted of a misdemeanor hereunder shall be punishable by a fine of not more than $1,000 or by imprisonment in the County Jail for a period of not more than six months, or by both. Each apartment unit rented for tourist or transient use shall constitute a separate offense.

Herein lies the rub. There are certainly legitimate reasons to prohibit the short-term rental of a unit in an apartment or condo building: some people want to know who their neighbors are, and a rotating cast of people coming and going could be a nuisance.

But that’s a matter for contracts and condo by-laws to sort out. If people value living in units that they can list on Airbnb or sublet to tourists when they’re on vacation, that’s a feature like a gas stove or walk-in closet that can come part-and-parcel of the rental through contractual stipulation. Similarly, if people want to live in a building where overnight guests are verboten, that’s something landlords or condo boards can adjudicate. The Coase Theorem can be a powerful tool, if the law will allow it.

Since, so far as I can tell, there is no prohibition under San Francisco code on having friends or family stay a night, or even a week, it seems that the underlying issue isn’t a legitimate concern about other tenants’ rights but an aversion to commerce. From the perspective of my neighbor, there’s no difference between letting my friend from college crash in my spare bedroom for a week and allowing someone I’ve never laid eyes on before do the same in exchange for cash.

The peer production economy is still in its infancy, and there’s a lot that needs to be worked out. Laws like San Francisco’s that circumvent the discovery process of markets prevent landlords, tenants, condos, homeowners, and regulators from learning from experience and experimentation, and they lock in a mediocre system that threatens to put people in jail for renting out a room.

Our new draft paper on Bitcoin financial regulation: securities, derivatives, prediction markets, & gambling
Thu, 10 Apr 2014 | http://techliberation.com/2014/04/10/our-new-draft-paper-on-bitcoin-financial-regulation-securities-derivatives-prediction-markets-gambling/

I’m thrilled to make available today a discussion draft of a new paper I’ve written with Houman Shadab and Andrea Castillo looking at what will likely be the next wave of Bitcoin regulation, which we think will be aimed at financial instruments, including securities and derivatives, as well as prediction markets and even gambling. You can grab the draft paper from SSRN, and we very much hope you will give us your feedback and help us correct any errors. This is a complicated issue area and we welcome all the help we can get.

While there are many easily regulated intermediaries when it comes to traditional securities and derivatives, emerging bitcoin-denominated instruments rely much less on traditional intermediaries. Additionally, the block chain technology that Bitcoin introduced makes completely decentralized markets and exchanges possible for the first time, thus eliminating the need for intermediaries in complex financial transactions. In the article we survey the types of financial instruments and transactions that will most likely be of interest to regulators, including traditional securities and derivatives, new bitcoin-denominated instruments, and completely decentralized markets and exchanges.
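
The intermediary-free verification underlying that claim rests on anyone being able to check the chain’s proof of work for themselves. Here is a minimal sketch of the hashing step, using a toy header and an illustrative difficulty target; a real Bitcoin header is an 80-byte serialization of version, previous-block hash, merkle root, timestamp, difficulty bits, and nonce.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin hashes block headers with two rounds of SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

header = b"\x00" * 80  # toy stand-in for a real 80-byte block header
block_hash = sha256d(header)[::-1]  # hashes are conventionally displayed byte-reversed
print(block_hash.hex())

# A block is valid only if its hash falls below the network's difficulty
# target, which any participant can verify without trusting anyone.
target = int("00000000ffff" + "0" * 52, 16)  # illustrative target value
print(int.from_bytes(sha256d(header), "little") < target)
```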

We find that bitcoin derivatives would likely not be subject to the full scope of regulation under the Commodity Exchange Act because such derivatives would likely involve physical delivery (as opposed to cash settlement) and would not be capable of being centrally cleared. We also find that some laws, including those aimed at online gambling, do not contemplate a payment method like Bitcoin, thus placing many transactions in a legal gray area.

Following the approach to Bitcoin taken by FinCEN, we conclude that other financial regulators should consider exempting or excluding certain financial transactions denominated in Bitcoin from the full scope of the regulations, much like private securities offerings and forward contracts are treated. We also suggest that to the extent that regulation and enforcement becomes more costly than its benefits, policymakers should consider and pursue strategies consistent with that new reality, such as efforts to encourage resilience and adaptation.

I look forward to your comments!

New Books in Technology podcast about my new book
Mon, 07 Apr 2014 | http://techliberation.com/2014/04/07/new-books-in-technology-podcast-about-my-new-book/

It was my great pleasure to join Jasmine McNealy last week on the “New Books in Technology” podcast to discuss my new book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. (A description of my book can be found here.)

My conversation with Jasmine was wide-ranging and lasted 47 minutes. The entire show can be heard here if you’re interested.

By the way, if you don’t follow Jasmine, you should begin doing so immediately. She’s on Twitter and here’s her page at the University of Kentucky School of Library and Information Science.  She’s doing some terrifically interesting work. For example, check out her excellent essay on “Online Privacy & The Right To Be Forgotten,” which I commented on here.

Can NSA Force Telecom Companies To Collect More Data?
Mon, 07 Apr 2014 | http://techliberation.com/2014/04/06/can-nsa-force-telecom-companies-to-collect-more-data/

Recent reports highlight that the telephone meta-data collection efforts of the National Security Agency are being undermined by the proliferation of flat-rate, unlimited voice calling plans.  The agency is collecting data for less than a third of domestic voice traffic, according to one estimate.

It’s been clear for the past couple of months that officials want to fix this, and President Obama’s plan for leaving meta-data in the hands of telecom companies (for NSA to access with a court order) might provide a backdoor opportunity to expand collection to include all calling data.  There was a potential new twist last week, when Reuters seemed to imply that carriers could be forced to collect data for all voice traffic pursuant to a reinterpretation of the current rule:

While the Federal Communications Commission requires phone companies to retain for 18 months records on “toll” or long-distance calls, the rule’s application is vague (emphasis added) for subscribers of unlimited phone plans because they do not get billed for individual calls.

The current FCC rule (47 C.F.R. § 42.6) requires carriers to retain billing information for “toll telephone service,” but the FCC doesn’t define this familiar term.  There is a statutory definition, but you have to go to the Internal Revenue Code to find it.  According to 26 U.S.C. § 4252(b),

the term “toll telephone service” means—

(1) a telephonic quality communication for which

(A) there is a toll charge which varies in amount with the distance and elapsed transmission time of each individual communication…

This congressional definition reflects the dynamics of long-distance pricing in 1965. It pre-dates the FCC rule (1986), yet it is still on the books.

By the 1990s, improving technology had made distance virtually irrelevant as a cost factor, and long-distance prices came to be based on minutes of use only (although clashing federal and state regulatory regimes frequently did result in higher rates for many short-haul intrastate calls than for long-haul interstate calls).  Incidentally, it was estimated at the time that telephone companies spent between 30 and 40 percent of their revenues on their billing systems.

In any event, with the elimination of distance-sensitive pricing, the Internal Revenue Service’s efforts to collect the Telephone Excise Tax—first enacted during the Spanish American War—were stymied.  In 2006, the IRS announced it would no longer litigate whether a toll charge that varies with elapsed transmission time but not distance (time-only service) is taxable “toll telephone service.”

I don’t see why telecom companies are required to collect and store any telephone data for 18 months, since it’s hard to imagine they are providing any services these days that actually qualify as “toll telephone service,” as that term is currently defined in the United States Code.
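
To make the statutory argument concrete, here is a minimal sketch of § 4252(b)(1) encoded as a predicate. The class and flags are my own hypothetical framing, not anything from the FCC or IRS; the point is simply that flat-rate and time-only plans both fail the definition.

```python
from dataclasses import dataclass

@dataclass
class CallingPlan:
    charge_varies_with_distance: bool
    charge_varies_with_time: bool

def is_toll_telephone_service(plan: CallingPlan) -> bool:
    # 26 U.S.C. § 4252(b)(1)(A): a toll charge "which varies in amount
    # with the distance and elapsed transmission time of each individual
    # communication" -- both must vary for the definition to apply.
    return plan.charge_varies_with_distance and plan.charge_varies_with_time

flat_rate_unlimited = CallingPlan(False, False)
time_only_pricing = CallingPlan(False, True)  # post-1990s per-minute plans

print(is_toll_telephone_service(flat_rate_unlimited))  # False
print(is_toll_telephone_service(time_only_pricing))    # False: no distance component
```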

A Short Response to Michael Sacasas on Advice for Tech Writers
Thu, 03 Apr 2014 | http://techliberation.com/2014/04/03/a-short-response-to-michael-sacasas-on-advice-for-tech-writers/

What follows is a response to Michael Sacasas, who recently posted an interesting short essay on his blog The Frailest Thing, entitled, “10 Points of Unsolicited Advice for Tech Writers.” As with everything Michael writes, it is very much worth reading and offers a great deal of useful advice about how to be a more thoughtful tech writer. Even though I occasionally find myself disagreeing with Michael’s perspectives, I always learn a great deal from his writing and appreciate the tone and approach he uses in all his work. Anyway, you’ll need to bounce over to his site and read his essay first before my response will make sense.

______________________________

Michael:

Lots of good advice here. I think tech scholars and pundits of all dispositions would be wise to follow your recommendations. But let me offer some friendly pushback on points #2 & #10, because I spend much of my time thinking and writing about those very things.

In those two recommendations you say that those who write about technology “[should] not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today.” And you also warn “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.”

I think these two recommendations are born of a certain frustration with the tenor of much modern technology writing: the sort of Pollyanna-ish writing that too casually dismisses legitimate concerns about technological disruption and usually ends with the insulting phrase, “just get over it.” Such writing and punditry is rarely helpful, and you and others have rightly pointed out the deficiencies in that approach.

That being said, I believe it would be highly unfortunate to dismiss any inquiry into the nature of individual and societal acclimation to technological change. Because adaptation obviously does happen! Certainly there must be much we can learn from it. In particular, what I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” well-established personal, social, cultural, and legal norms.

To be clear, I entirely agree with your admonition: “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.” But, again, we can at least agree that such acclimation has happened regularly throughout human history, right?  What were the mechanics of that process? As social norms, personal habits, and human relationships were disrupted, what helped us muddle through and find a way of coping with new technologies? Likewise, as existing markets and business models were disrupted, how were new ones formulated in response to the given technological disruption? Finally, how did legal norms and institutions adjust to those same changes?

I know you agree that these questions are worthy of exploration, but I suppose where we might part ways is over the metrics by which we judge whether “the changes were inconsequential or benign.” Because I believe that while technological change often brings sweeping and quite consequential disruption, there is a value in the very act of living through it.

In my work, including my latest little book, I argue that humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, however, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Even if you don’t agree with all of that, again, I would think you would find great value in studying the process by which such adaptation happens. And then we could argue about whether it was all really worth it! Alas, at the end of the day, it may be that we won’t be able to even agree on a standard by which to make that judgment and will instead have to settle for a rough truce about what history has to teach us that might be summed up by the phrase: “something gained, something lost.”

With all this in mind, let me suggest this friendly reformulation of your second recommendation: Tech writers should not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today. But how people and institutions learned to cope with those concerns is worthy of serious investigation. And what we learned from living through that process may be valuable in its own right.

I have been trying to sketch out an essay on all this entitled “Muddling Through: Toward a Theory of Societal Adaptation to Disruptive Technologies.” I am borrowing that phrase (“muddling through”) from Joel Garreau, who used it in his book “Radical Evolution” when describing a third way of viewing humanity’s response to technological change. After discussing the “Heaven” (optimistic) and “Hell” (skeptical or pessimistic) scenarios cast about by countless tech writers throughout history, Garreau outlines a third, more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.” That pretty much sums up my own perspective, but much study remains to be done on how that very messy process of “muddling through” works and whether we are left better off as a result. I remain optimistic that we are!

As always, I look forward to our continuing dialog over these interesting issues and I wish you all the best.

Cheers,

Adam Thierer

“Big Data” Inquiry Should Study Economics & Free Speech: TechFreedom urges reform of blanket surveillance and FTC processes
Thu, 03 Apr 2014 | http://techliberation.com/2014/04/02/big-data-inquiry-should-study-economics-free-speech-techfreedom-urges-reform-of-blanket-surveillance-and-ftc-processes/

On Monday, TechFreedom submitted comments urging the White House to apply economic thinking to its inquiry into “Big Data,” also pointing out that the worst abuses of data come not from the private sector but from government. The comments were in response to a request by the Office of Science and Technology Policy (OSTP).

“On the benefits of Big Data, we urge OSTP to keep in mind two cautions. First, Big Data is merely another trend in an ongoing process of disruptive innovation that has characterized the Digital Revolution. Second, cost-benefit analyses generally, and especially in advance of evolving technologies, tend to operate in aggregates which can be useful for providing directional indications of future trade-offs, but should not be mistaken for anything more than that,” writes TF President Berin Szoka.

The comments also highlight the often-overlooked reality that data, big or small, is speech, and that OSTP’s inquiry must therefore include a First Amendment analysis. Historically, policymakers have ignored the First Amendment in regulating new technologies, from film to blogs to video games, but in 2011 the Supreme Court made clear in Sorrell v. IMS Health that data is a form of speech. Any regulation of Big Data should carefully define the government’s interest, narrowly tailor regulations to real problems, and look for less restrictive alternatives to regulation, such as user empowerment, transparency, and education. Ultimately, academic debates over how to regulate Big Data are less important than how the Federal Trade Commission currently enforces existing consumer protection laws, a subject that is the focus of the ongoing FTC: Technology & Reform Project led by TechFreedom and the International Center for Law & Economics.

More important than the private sector’s use of Big Data is the government’s abuse of it, the group says, referring to the NSA’s mass surveillance programs and the Administration’s opposition to requiring warrants for searches of Americans’ emails and cloud data. Last December, TechFreedom and its allies garnered over 100,000 signatures on a WhiteHouse.gov petition for ECPA reform. While the Administration has found time to reply to frivolous petitions, such as one asking for the construction of a Death Star, it has ignored this serious issue for over three months. Worse, the Administration has done nothing to help promote ECPA reform and, instead, appears to be actively orchestrating opposition to it from theoretically independent regulatory agencies, which has stalled reform in the Senate.

“This stubborn opposition to sensible, bi-partisan privacy reform is outrageous and shameful, a hypocrisy outweighed only by the Administration’s defense of its blanket surveillance of ordinary Americans,” said Szoka. “It’s time for the Administration to stop dodging responsibility or trying to divert attention from government-created problems by pointing its finger at the private sector, demonizing private companies’ collection and use of data while the government continues to flout the Fourth Amendment.”

Szoka is available for comment at media@techfreedom.org. Read the full comments and see TechFreedom’s other work on ECPA reform.

How to Privatize the Internet
Wed, 02 Apr 2014 | http://techliberation.com/2014/04/02/how-to-privatize-the-internet/

Today on Capitol Hill, the House Energy and Commerce Committee is holding a hearing on the NTIA’s recent announcement that it will relinquish its small but important administrative role in the Internet’s domain name system. The announcement has alarmed some policymakers with a well-placed concern for the future of Internet freedom; hence the hearing. Tomorrow, I will be on a panel at ITIF discussing the IANA oversight transition, which promises to be a great discussion.

My general view is that if well executed, the transition of the DNS from government oversight to purely private control could actually help secure a measure of Internet freedom for another generation—but the transition is not without its potential pitfalls.

The NTIA’s technical administration of the DNS’ “root zone” is an artifact of the Internet’s origins as a U.S. military experiment. In 1989, the government began the process of privatizing the Internet by opening it up to general and commercial use. In 1998, the Commerce Department created ICANN to oversee the DNS on a day-to-day basis. The NTIA’s announcement is arguably the culmination of this single decades-long process of privatization.
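
To see what “root zone” administration actually touches, the sketch below asks a root server which name servers are authoritative for the .com top-level domain; the root zone is the file behind that answer. This assumes the third-party dnspython package is installed; 198.41.0.4 is a.root-servers.net.

```python
import dns.message
import dns.query
import dns.rdatatype

# Query a root server for the delegation of .com. The NTIA-supervised
# root zone is the data source for the referral that comes back.
query = dns.message.make_query("com.", dns.rdatatype.NS)
response = dns.query.udp(query, "198.41.0.4", timeout=5)

for rrset in response.authority or response.answer:
    print(rrset)
```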

The announcement also undercuts the primary justification used by authoritarian regimes to agitate for control of the Internet. Other governments have long cited the United States’ unilateral control of the root zone, arguing that they, too, should have roles in governing the Internet. By relinquishing its oversight of the DNS, the United States significantly undermines that argument and bolsters the case for private administration of the Internet.

The United States’ stewardship of the root zone is largely apolitical. This apolitical approach to DNS administration is precisely what is at stake during the transition, hence the three pitfalls the Obama administration must avoid to preserve it.

The first pitfall is the most serious but also the least likely to materialize. Despite the NTIA’s excellent track record, authoritarian regimes like Russia, China, and Iran have long lobbied for the ITU, a clumsy and heavily politicized U.N. technical agency, to take over the NTIA’s duties. In its announcement, the NTIA said it would not accept a proposal from an intergovernmental organization, a clear rebuke to the ITU.

Nevertheless, liberal governments would be wise to send the organization a clear message in the form of much-needed reform. The ITU should adopt the transparency we expect of communications standards bodies, and it should focus on its core competency—international coordination of radio spectrum—instead of on Internet governance. If the ITU resists these reforms at its Plenipotentiary Conference this fall, the United States and other countries should slash funding or quit the Union.

ICANN’s Governmental Advisory Committee (GAC) presents a second pitfall. Indeed, the GAC is already the source of much mischief. For example, France and Luxembourg objected to the creation of the .vin top-level domain on the grounds that “vin” (wine) is a regulated term in those countries. Brazil and Peru have held up Amazon.com’s application for .amazon despite the fact that they previously agreed to the list of reserved place names, and rivers and states were not on it. Last July, the U.S. government, reeling from the Edward Snowden revelations, threw Amazon and the rule of law under the bus at the GAC as a conciliatory measure.

ICANN created the GAC to appease other governments in light of the United States’ outsized role. Since the United States is giving up its special role, the case for the GAC is much diminished. In practice, the limits on the GAC’s power are gradually eroding. ICANN’s board seems increasingly hesitant to overrule it out of fear that governments will go back to the ITU and complain that the GAC “isn’t working.” As part of the transition of the root zone to ICANN, therefore, new limits need to be placed on the GAC’s power. Ideally, ICANN would dissolve the GAC altogether.

The third pitfall comes from ICANN itself. The organization is awash in cash from domain registration fees and new top-level domain name applications—which cost $185,000 each—and when the root zone transition is completed, it will face no external accountability. Long-time ICANN insiders speak of “mission creep,” noting that the supposedly purely technical organization increasingly deals with trademark policy and has aided police investigations in the past, a dangerous precedent.

How can we prevent an unaccountable, cash-rich technical organization from imposing its own internal politics on what is supposed to be an apolitical administrative role? In the long run, we may never be able to stop ICANN from becoming a government-like entity, which is why it is important to support research and experimentation in peer-to-peer, decentralized domain name systems. This matter is under discussion, among other places, at the Internet Engineering Task Force, which may ultimately serve as something of a counterweight to an independent ICANN.

Despite these potential pitfalls, it is time for an Internet that is fully in private hands. The Obama administration deserves credit for proposing to complete the privatization of the Internet, but we must also carefully monitor the process to intercept any blunders that might result in politicization of the root zone.

America in the golden age of broadband
Wed, 02 Apr 2014 | http://techliberation.com/2014/04/02/america-in-the-golden-age-of-broadband/

This post was written in cooperation with Michael James Horney, a George Mason University master’s student, and is based upon our upcoming paper on broadband innovation, investment, and competition.

Ezra Klein’s interview with Susan Crawford paints a glowing picture of publicly provided broadband, particularly fiber to the home (FTTH), but the interview misses a number of important points.

The international broadband comparisons provided are selective and unstandardized.  The US is much bigger and more expensive to cover than many small, densely populated countries. South Korea is the size of Minnesota but has 9 times the population: essentially the same amount of network can be deployed and used by 9 times as many people, which makes the business case for fiber far stronger.  Yet South Korea has limited economic growth to show for its fiber investment. A recent Korean government report complained of “jobless growth,” and the country still earns the bulk of its revenue from industries that predate broadband.
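
A back-of-the-envelope sketch of why density dominates the fiber business case; the per-kilometer cost, footprint, and take-rate figures are hypothetical, chosen only to isolate the 9x population effect.

```python
# Hypothetical deployment economics: same footprint, different density.
COST_PER_KM = 30_000   # assumed trench-and-fiber cost, USD per km
NETWORK_KM = 100_000   # assumed network footprint for a similar land area

def cost_per_subscriber(households_passed: int, take_rate: float = 0.4) -> float:
    subscribers = households_passed * take_rate
    return COST_PER_KM * NETWORK_KM / subscribers

sparse = 2_000_000   # hypothetical households on the footprint
dense = 9 * sparse   # same area, 9x the population

print(f"Sparse market: ${cost_per_subscriber(sparse):,.0f} per subscriber")
print(f"Dense market:  ${cost_per_subscriber(dense):,.0f} per subscriber")
```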

It is more realistic to compare the US to the European Union, which has a comparable population and geographic area.  Data from America’s National Broadband Map and the EU Digital Agenda Scoreboard show that the US exceeds the EU on many important broadband measures, including the deployment of fiber to the home (FTTH), which proceeds at twice the EU’s rate.  Even where fiber networks are available in the EU, the overall adoption rate is just 2%.  The EU government itself, as part of its Digital Single Market initiative, has recognized that its approach to broadband has not worked and is now looking to the American model.

The assertion that Americans are “stuck” with cable as the only provider of broadband is false.  It is more correct to say that Europeans are “stuck” with DSL, as 74% of all EU broadband connections are delivered on copper networks. Indeed, DSL and cable together account for 70% of America’s broadband connections, with the remaining, growing 30% comprising FTTH, wireless, and other broadband solutions.  In fact, the US buys and lays more fiber than all of the EU combined.

The reality is that Europeans are “stuck” with a tortured regulatory approach to broadband, which disincentivizes investment in next-generation networks. As data from Infonetics show, a decade ago the EU accounted for one-third of the world’s investment in broadband; that share has plummeted to less than one-fifth today. Meanwhile, American broadband providers invest at twice the rate of their European counterparts and account for a quarter of the world’s outlay on communication networks. Americans are just 4% of the world’s population but enjoy one quarter of its broadband investment.

The following comparison illustrates the intermodal competition between different types of broadband networks (cable, fiber, DSL, mobile, satellite, wifi) in the US and EU.

  • Availability of broadband with a download speed of 100 Mbps or higher: US 57%*, EU 30%
  • Availability of cable broadband: US 88%, EU 42%
  • Availability of LTE: US 94%**, EU 26%
  • Availability of FTTH: US 25%, EU 12%
  • Percent of population that subscribes to broadband by DSL: US 34%, EU 74%
  • Percent of households that subscribe to broadband by cable: US 36%***, EU 17%

The interview offered some cherry-picked examples, particularly Stockholm as the FTTH utopia. The story behind this city is more complex and costly than presented.  Some $800 million has been invested in FTTH in Stockholm to date, with an additional $38 million each year.  Subscribers pay for the fiber broadband through a combination of monthly access fees and increases to municipal fees assessed on homes and apartments. Acreo, a state-owned consulting company charged with assessing Sweden’s fiber project, concludes that the FTTH project shows at best a “weak but statistically significant correlation between fiber and employment” and that “it is difficult to estimate the value of FTTH for end users in dollars and some of the effects may show up later.”

Next door, Denmark took a different approach.  In 2005, 14 utility companies in Denmark invested $2 billion in FTTH.  With advanced cable and fiber networks, 70% of Denmark’s households and businesses have access to ultra-fast broadband, but less than 1% subscribe to the 100 Mbps service.  The utility companies have just 250,000 broadband customers combined, and most customers subscribe to tiers below 100 Mbps because those tiers satisfy their needs and budgets. Indeed, 80% of broadband subscriptions in Denmark are below 30 Mbps: about 20% of homes and businesses subscribe to 30 Mbps, but more than two-thirds subscribe to 10 Mbps.

Meanwhile, LTE mobile networks have been rolled out, and already 7% (350,000) of Danes use 3G/4G as their primary broadband connection, surpassing FTTH customers by 100,000.  This is particularly important because in many sectors of the Danish economy, including banking, health, and government, users can access services only digitally, and those services are fully functional on mobile devices at mobile speeds.  The interview claims that wireless will never be a substitute for fiber, but millions of people around the world are proving that wrong every day.

The price comparisons between the US and selected European countries also leave out compulsory media license fees (to cover state broadcasting) and taxes that can add some $80 per month to the cost of a broadband subscription. When these fees are added in, broadband is not so cheap in Sweden and other European countries; indeed, the US frequently comes out less expensive.

The US broadband approach has a number of advantages.  Private providers bear the risks, not taxpayers. Consumers dictate the broadband they want, not the government.  Also, prices are scalable and transparent: the price reflects the real cost. Furthermore, as the OECD and the ITU have recognized, the entry-level costs for broadband in the US are some of the lowest in the world. The ITU recommends that people pay no more than 5% of their income for broadband; most developed countries, including the US, fall within 2-3% for the highest tier of broadband.  It is only fair to pay more for better quality. If your needs are just email and web browsing, then basic broadband will do. But if you want high-definition Netflix, you should pay more.  There is no reason why your neighbor should subsidize your entertainment choices.
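
The ITU affordability guideline mentioned above reduces to a one-line check; the income and price figures below are hypothetical examples, not data from the ITU or OECD.

```python
# ITU guideline: broadband should cost no more than 5% of income.
ITU_THRESHOLD = 0.05

def within_guideline(monthly_price: float, monthly_income: float) -> bool:
    return monthly_price / monthly_income <= ITU_THRESHOLD

for price in (40, 70, 250):  # hypothetical monthly prices, USD
    share = price / 4_000    # hypothetical monthly income of $4,000
    print(f"${price}/mo -> {share:.1%} of income; "
          f"within guideline: {within_guideline(price, 4_000)}")
```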

The interview asserts that government investment in FTTH is needed to increase competitiveness, but no evidence is given.  A broadband network by itself does not create economic growth; broadband is just one input in a complex economic equation.  To put things into perspective, consider that the US has transformed its economy through broadband in the last two decades: the internet portion of America’s economy alone is larger than the entire GDP of Sweden.

The assertion that the US is #26 in broadband speed is simply wrong. It is an outdated statistic from 2009 used in Crawford’s book. The Akamai report it references is released quarterly, so there was no reason not to include a more recent figure in time for the book’s December 2012 publication. Today the US ranks #8 in the world on the same measure; clearly the US is not falling behind if its ranking on average measured speed has steadily climbed from #26 to #8. In any case, according to Akamai, many US cities and states have some of the fastest download speeds in the world and would individually rank in the global top ten.

There is no doubt that fiber is an important technology and the foundation of all modern broadband networks, but the economic question is to what extent fiber should be brought to every household, given the cost of deployment (many thousands of dollars per household), the low level of adoption (it is difficult to get a critical mass of a community to subscribe, given diverse needs), and the fact that other broadband technologies continue to improve in speed and price.

The interview didn’t mention the many failed federal and municipal broadband projects.  Chattanooga is just one example of a federally funded fiber project costing hundreds of millions of dollars with too few users.  Municipal projects that have failed to meet expectations include those in Chicago; Burlington, VT; Monticello, MN; Oregon’s MINET; and Utah’s UTOPIA.

Before deploying costly FTTH networks, the feasibility of improving existing DSL and cable networks, as well as deploying wireless broadband, should be considered. A case in point is Canada.  The OECD reports that Canada and South Korea have essentially the same advertised speeds, 68.33 and 66.83 Mbps respectively.  Canada’s fixed broadband subscriptions are shared almost equally between DSL and cable, with very little FTTH.  This shows that fast speeds are possible on different kinds of networks.

The future demands a multitude of broadband technologies. There is no one technology that is right for everyone. Consumers should have the ability to choose based upon their needs and budget, not be saddled with yet more taxes from misguided politicians and policymakers.

Consider that mobile broadband is growing at four times the rate of fixed broadband according to the OECD, and there are some 300 million mobile broadband subscriptions in the US, three times the number of fixed broadband subscriptions.  In Africa, mobile broadband is growing at 50 times the rate of fixed broadband.  Many Americans have selected mobile as their only broadband connection and love its speed and flexibility. Vectoring on copper wires enables speeds of 100 Mbps. Cable DOCSIS 3.0 enables speeds of 300 Mbps, and cable companies are deploying neighborhood wifi solutions.  With all this innovation and competition, it is mindless to create a new government monopoly.  We should let the golden age of broadband flourish.


Source for US and EU Broadband Comparisons: US data from National Broadband Map, “Access to Broadband Technology by Speed,” Broadband Statistics Report, July 2013, http://www.broadbandmap.gov/download/Technology%20by%20Speed.pdf and http://www.broadbandmap.gov/summarize/nationwide. EU data from European Commission, “Chapter 2: Broadband Markets,” Digital Agenda Scoreboard 2013 (working document, December 6, 2013), http://ec.europa.eu/digital-agenda/sites/digital-agenda/files/DAE%20SCOREBOARD%202013%20-%202-BROADBAND%20MARKETS%20_0.pdf.

*The National Cable Telecommunications Association suggests speeds of 100 Mbps are available to 85% of Americans.  See “America’s Internet Leadership,” 2013, www.ncta.com/positions/americas-internet-leadership.

**Verizon’s most recent report notes that it reaches 97 percent of America’s population with 4G/LTE networks. See Verizon, News Center: LTE Information Center, “Overview,” www.verizonwireless.com/news/LTE/Overview.html.

***This figure is based on 49,310,131 cable subscribers at the end of 2013, noted by Leichtman Research http://www.leichtmanresearch.com/press/031714release.html compared to 138,505,691 households noted by the National Broadband Map.

Bitcoin hearing in the House today, fun event tonight
Wed, 02 Apr 2014 | http://techliberation.com/2014/04/02/bitcoin-hearing-in-the-house-today-fun-event-tonight/

Later today I’ll be testifying at a hearing before the House Small Business Committee titled “Bitcoin: Examining the Benefits and Risks for Small Business.” It will be live streamed starting at 1 p.m. My testimony will be available on the Mercatus website at that time, but below is some of my work on Bitcoin in case you’re new to the issue.

Also, tonight I’ll be speaking at a great event hosted by the DC FinTech meetup on “Bitcoin & the Internet of Money.” I’ll be joined by Bitcoin core developer Jeff Garzik and we’ll be interviewed on stage by Joe Weisenthal of Business Insider. It’s open to the public, but you have to RSVP.

Finally, stay tuned because in the next couple of days my colleagues Houman Shadab, Andrea Castillo, and I will be posting a draft of our new law review article looking at Bitcoin derivatives, prediction markets, and gambling. Bitcoin is the most fascinating issue I’ve ever worked on.

Here’s Some Bitcoin Reading…

And here’s my interview with Reihan Salam discussing Bitcoin…

Video – DisCo Policy Forum Panel on Privacy & Innovation in the 21st Century
Wed, 02 Apr 2014 | http://techliberation.com/2014/04/02/video-disco-policy-forum-panel-on-privacy-innovation-in-the-21st-century/

Last December, it was my pleasure to take part in a great event, “The Disruptive Competition Policy Forum,” sponsored by Project DisCo (or The Disruptive Competition Project). It featured several excellent panels and keynotes, and the video of the panel I was on has just been posted here; I have embedded it below. In my remarks, I discussed:

  • benefit-cost analysis in digital privacy debates (building on this law review article);
  • the contrast between Europe and America’s approach to data & privacy issues (referencing this testimony of mine);
  • the problem of “technopanics” in information policy debates (building on this law review article);
  • the difficulty of information control efforts in various tech policy debates (which I wrote about in this law review article and these two blog posts: 1, 2);
  • the possibility of less-restrictive approaches to privacy & security concerns (which I have written about here as well in those other law review articles);
  • the rise of the Internet of Things and the unique challenges it creates (see this and this as well as my new book); and,
  • the possibility of a splintering of the Internet or the rise of “federated Internets.”

The panel was expertly moderated by Ross Schulman, Public Policy & Regulatory Counsel for CCIA, and also included remarks from John Boswell, SVP & Chief Legal Officer at SAS, and Josh Galper, Chief Policy Officer and General Counsel of Personal, Inc. (By the way, you should check out some of the cool things Personal is doing in this space to help consumers. Very innovative stuff.) The video lasts one hour. Here it is:

Congress Should Lead FCC by Example, Adopt Clean STELA Reauthorization | http://techliberation.com/2014/04/01/congress-should-lead-fcc-by-example-adopt-clean-stela-reauthorization/ | Tue, 01 Apr 2014 15:31:13 +0000

After yesterday’s FCC meeting, it appears that Chairman Wheeler has a finely tuned microscope trained on broadcasters and a proportionately large blind spot for the cable television industry.

Yesterday’s FCC meeting was unabashedly pro-cable and anti-broadcaster. The agency decided to prohibit television broadcasters from engaging in the same industry behavior as cable, satellite, and telco television distributors and programmers. The resulting disparity in regulatory treatment highlights the inherent dangers in addressing regulatory reform piecemeal rather than comprehensively as contemplated by the #CommActUpdate. Congress should lead the FCC by example and adopt a “clean” approach to STELA reauthorization that avoids the agency’s regulatory mistakes.

The FCC meeting offered a study in the way policymakers pick winners and losers in the marketplace without acknowledging unfair regulatory treatment. It’s a three-step process.

  • First, the policymaker obfuscates similarities among issues by referring to substantively similar economic activity across multiple industry segments using different terminology.
  • Second, it artificially narrows the issues by limiting any regulatory inquiry to the disfavored industry segment only.
  • Third, it adopts disparate regulations applicable to the disfavored industry segment only while claiming the unfair regulatory treatment benefits consumers.

The broadcast items adopted by the FCC yesterday hit all three points.

“Broadcast JSAs”

Using the three-step process described above, the FCC adopted an order prohibiting two broadcast television stations from agreeing to jointly sell more than 15% of their advertising time.

  • First, the FCC referred to these agreements as “JSAs” or “joint sales agreements”.
  • Second, the FCC prohibited these agreements only among broadcast television stations even though the largest cable, satellite, and telco video distributors sell their advertising time through a single entity.
  • Third, FCC Chairman Tom Wheeler claimed that all the agency was doing yesterday was “leveling the negotiating table” for negotiations involving the largely unrelated issue of “retransmission consent.”

If the FCC had acknowledged that cable, satellite, and telco distributors jointly sell their advertising, and had included them in its inquiry as well, Chairman Wheeler could not have kept a straight face while asserting that all the agency was doing was leveling the playing field. Hence the power of obfuscatory terminology and artificially narrowed issues.

“Broadcast Exclusivity Agreements”

The FCC also issued a further notice yesterday seeking comment on broadcast “non-duplication exclusivity agreements” and “syndicated exclusivity agreements.” These agreements, which are collectively referred to as “broadcast exclusivity agreements”, are a form of territorial exclusivity: They provide a local television station with the exclusive right to transmit broadcast network or syndicated programming in the station’s local market only.

Unlike cable, satellite, and telco television distributors, broadcast television stations are prohibited by law from entering into exclusive programming agreements with other television distributors in the same market: The Satellite Television Extension and Localism Act (STELA) prohibits television stations from entering into exclusive retransmission consent agreements — i.e., a television station must make its programming available to all other television distributors in the same market. Cable, satellite, and telco distributors are legally permitted to enter into exclusive programming agreements on a nationwide basis — e.g., DIRECTV’s NFL Sunday Ticket.

If the FCC is concerned by the limited form of territorial exclusivity permitted for broadcasters, it should be even more concerned about the broader exclusivity agreements that have always been permitted for cable, satellite, and telco television distributors. But the FCC nevertheless used the three-step process for picking winners and losers to limit its consideration of exclusive programming agreements to broadcasters only.

  • First, the FCC uses unique terminology to refer to “broadcast” exclusivity agreements (i.e., “non-duplication” and “syndicated exclusivity”), which obfuscates the fact that these agreements are a limited form of exclusive programming agreements.
  • Second, the FCC is seeking comment on exclusive programming agreements between broadcast television stations and programmers only even though satellite and other video programming distributors have entered into exclusive programming agreements.
  • Third, it appears the pretext for limiting the scope of the FCC’s inquiry to broadcasters will again be “leveling the playing field” between broadcasters and other television distributors — to benefit consumers, of course.

“Joint Retransmission Consent Negotiations”

Finally, the FCC prohibited a television broadcast station ranked among the top four stations (as measured by audience share) from negotiating “retransmission consent” jointly with another top four station in the same market if the stations are not commonly owned. The FCC reasoned that “the threat of losing programming of two [or more] top four stations at the same time gives the stations undue bargaining leverage in negotiations with [cable, satellite, and telco television distributors].”

As an economic matter, “retransmission consent” is essentially a substitute for the free market copyright negotiations that could occur absent the “compulsory copyright license” in the 1976 Copyright Act and an earlier Supreme Court decision interpreting the term “public performance”. In the absence of retransmission consent, compensation for the use of programming provided by broadcast television stations and programming networks would be limited to the artificially low amounts provided by the compulsory copyright license.

To the extent retransmission consent is merely another form of program licensing, it is indistinguishable from negotiations between cable, satellite and telco distributors and cable programming networks — which typically involve the sale of bundled channels. If bundling two television channels together “gives the stations undue bargaining leverage” in retransmission consent negotiations, why doesn’t a cable network’s bundling of multiple channels together for sale to a cable, satellite, or telco provider give the cable network “undue bargaining leverage” in its licensing negotiations? The FCC avoided this difficulty using the old one, two, three approach.

  • First, the FCC used the unique term “retransmission consent” to refer to the sale of programming rights by broadcasters.
  • Second, the FCC instituted a proceeding seeking comment only on “retransmission consent” rather than all programming negotiations.
  • Third, the FCC found that lowering retransmission consent costs could lower the prices consumers pay to cable, satellite, and telco television distributors — to remind us that it’s all about consumers, not competitors.

If it were really about lowering prices for consumers, the FCC would also have considered whether prohibiting channel bundling by cable programming networks would lower consumer prices too. For reasons left unexplained, cable programmers are permitted to bundle as many channels as possible in their licensing negotiations.

“Clean STELA”

To be sure, the disparate results of yesterday’s FCC meeting could be unintentional. But, even so, they highlight the inherent dangers in any piecemeal approach to industry regulation. That’s why Congress should adopt a “clean” approach to STELA reauthorization and reject the demands of special interests for additional piecemeal legislative changes. Consumers would be better served by a more comprehensive effort to update video regulations.

The Beneficial Uses of Private Drones [Video] | http://techliberation.com/2014/03/28/the-beneficial-uses-of-private-drones-video/ | Fri, 28 Mar 2014 16:10:21 +0000

Give us our drone-delivered beer!

That’s how the conversation got started between John Stossel and me on his show this week. I appeared on Stossel’s Fox Business TV show to discuss the many beneficial uses of private drones. The problem is that drones — which are more appropriately called unmanned aircraft systems — have an image problem. When we think about drones today, they often conjure up images of nefarious military machines dealing death and destruction from above in a far-off land. And certainly plenty of that happens today (far, far too much in my personal opinion, but that’s a rant best left for another day!).

But any technology can be put to both good and bad uses, and drones are merely the latest in a long list of “dual-use technologies,” which have both military uses and peaceful private uses. Other examples of dual-use technologies include: automobiles, airplanes, ships, rockets and propulsion systems, chemicals, computers and electronic systems, lasers, sensors, and so on. Put simply, almost any technology that can be used to wage war can also be used to wage peace and commerce. And that’s equally true for drones, which come in many sizes and have many peaceful, non-military uses. Thus, it would be wrong to judge them based upon their early military history or how they are currently perceived. (After all, let’s not forget that the Internet’s early origins were militaristic in character, too!)

Some of the other beneficial uses and applications of unmanned aircraft systems include: agricultural (crop inspection & management, surveying); environmental (geological, forest management, tornado & hurricane research); industrial (site & service inspection, surveying); infrastructure management (traffic and accident monitoring); public safety (search & rescue, post-natural disaster services, other law enforcement); and delivery services (goods & parcels, food & beverages, flowers, medicines, etc.), just to name a few.


This is why it is troubling that the Federal Aviation Administration (FAA) continues to threaten private drone operators with cease-and-desist letters and discourage the many beneficial uses of these technologies, even as other countries rush ahead and green-light private drone services. As I noted on the Stossel show, while the FAA is well-intentioned in its efforts to keep the nation’s skies safe, the agency is allowing hypothetical worst-case scenarios to get in the way of beneficial innovation. A lot of this fear is driven by privacy concerns, too. But as Brookings Institution senior fellow John Villasenor has explained, we need to be careful about rushing to preemptively control new technologies based on hypothetical privacy fears:

If, in 1995, comprehensive legislation to protect Internet privacy had been enacted, it would have utterly failed to anticipate the complexities that arose after the turn of the century with the growth of social networking and location-based wireless services. The Internet has proven useful and valuable in ways that were difficult to imagine over a decade and a half ago, and it has created privacy challenges that were equally difficult to imagine. Legislative initiatives in the mid-1990s to heavily regulate the Internet in the name of privacy would likely have impeded its growth while also failing to address the more complex privacy issues that arose years later.

This is a key theme discussed throughout my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” The central lesson of the booklet is that living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. We shouldn’t let our initial (and often irrational) fears of new technologies dictate the future course of innovation. We can and will find constructive solutions to the hard problems posed by new technologies because we are creative and resilient creatures. And, yes, some regulation will be necessary. But how and when we regulate matters profoundly. Preemptive, precautionary-based proposals are almost never the best way to start.

Finally, as I also noted during the interview with Stossel, it’s always important to consider trade-offs and opportunity costs when discussing the disruptive impact of new technologies. For example, while some fear the safety implications of private drones, we should not forget that over 30,000 people die in automobile-related accidents every year in the United States. While the number of vehicle-related deaths has been declining in recent years, that remains an astonishing number of deaths. What if a new technology existed that could help prevent a significant number of these fatalities? Certainly, “smart car” technology and fully autonomous “driverless cars” should help bring down that number significantly. But how might drones help?

Consider some of the mundane tasks that automobiles are used for today. Cars are used to go grab dinner or have someone else deliver it, to pick up medicine at a local pharmacy, to have newspapers or flowers delivered, and so on. Every time a human gets behind the wheel of an automobile to do these things, the chance for injury or even death exists, even close to home. In fact, a large percentage of all accidents happen within just a few miles of the car owner’s home. A significant number of those accidents could be avoided if we could rely on drone delivery for the things we use cars and trucks for today.
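To make that intuition concrete, here is a purely illustrative back-of-the-envelope calculation. The only real figure is the annual fatality count cited above; the two shares are assumptions invented for the example:

```python
# Hypothetical: fatalities avoided if drones took over short errand trips.
annual_auto_deaths = 30_000  # approximate US annual toll, cited above
errand_share       = 0.10    # assumed share of fatal crashes tied to short errand trips
replaceable        = 0.50    # assumed share of those errands drones could handle

print(f"{annual_auto_deaths * errand_share * replaceable:.0f} deaths/year")  # -> 1500
```

Even under far more conservative assumptions, the number is large enough to belong in any honest accounting of the technology’s trade-offs.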

These are just some of the things to consider as the debate over unmanned aircraft systems continues. Drones have gotten a very bad name thus far, but we should remain open-minded about their many beneficial, peaceful, and pro-consumer uses.

(For more on this issue, read this April 2013 filing to the FAA I wrote along with my Mercatus colleagues Eli Dourado and Jerry Brito.)

The End of Net Neutrality and the Future of TV | http://techliberation.com/2014/03/26/the-end-of-net-neutrality-and-the-future-of-tv/ | Wed, 26 Mar 2014 15:03:51 +0000

Some recent tech news provides insight into the trajectory of broadband and television markets. These stories also indicate a poor prognosis for net neutrality. Political and ISP opposition to new rules aside (and that opposition is substantial), even net neutrality proponents point out that “neutrality” is difficult to define and even harder to implement. Now that the line between “Internet video” and “television” delivered via Internet Protocol (IP) is increasingly blurring, net neutrality goals are suffering from mission creep.

First, there was the announcement that Netflix, like many large content companies, was entering into a paid peering agreement with Comcast, prompting a complaint from Netflix CEO Reed Hastings, who argued that ISPs have too much leverage in negotiating these interconnection deals.

Second, Comcast and Apple discussed a possible partnership whereby Comcast customers would receive prioritized access to Apple’s new video service. Apple’s TV offering would be a “managed service” exempt from net neutrality obligations.

Interconnection and managed services are generally not considered net neutrality issues. They are not “loopholes.” They were expressly exempted from the FCC’s 2010 (now-defunct) rules. However, net neutrality proponents are attempting to bring interconnection and managed services to the FCC’s attention as the FCC crafts new net neutrality rules. Net neutrality proponents have an uphill battle already, and the following trends won’t help.

1. Interconnection becomes less about traffic burden and more about leverage.

The ostensible reason that content companies like Netflix (or third parties like Cogent) pay ISPs for interconnection is that video content unloads a substantial amount of traffic onto ISPs’ last-mile networks.

Someone has to pay for network upgrades to handle the traffic. Typically, the parties seem to abide by the equity principle that whoever is sending the traffic–in this case, Netflix–should bear the costs via paid peering. That way, the increased expense is incurred by Netflix, which can spread costs across its subscribers. If ISPs incurred the expense of upgrades, they’d have to spread costs over their entire subscriber base, but many of their subscribers are not Netflix users.
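A stylized example of that equity principle (every number below is an assumption chosen to illustrate the cost-spreading logic, not an estimate of actual peering costs):

```python
# Who bears a last-mile upgrade: the sender or the ISP?
upgrade_cost    = 10_000_000  # assumed cost of interconnection upgrades ($)
isp_subscribers = 20_000_000  # assumed ISP subscriber base
netflix_share   = 0.30        # assumed fraction of ISP subs who use Netflix

# Netflix pays and passes the cost to its own users on this ISP:
print(f"${upgrade_cost / (isp_subscribers * netflix_share):.2f} per Netflix user")  # $1.67
# The ISP pays and spreads the cost over everyone, Netflix user or not:
print(f"${upgrade_cost / isp_subscribers:.2f} per subscriber")                      # $0.50
```

The total paid is identical either way; the difference is whether non-Netflix households end up subsidizing Netflix households.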

That principle doesn’t seem to hold for WatchESPN, which is owned by Disney. WatchESPN is an online service that provides live streams of ESPN television programming, like ESPN2 and ESPNU, to personal computers and also includes ESPN3, an online-only livestream of non-marquee sports. If a company has leverage in other markets, like Disney does in TV programming markets, I suspect ISPs can’t or won’t charge for interconnection. These interconnection deals are non-public but Disney probably doesn’t pay ISPs for transmitting WatchESPN traffic onto ISPs’ last-mile networks. The existence of a list of ESPN’s “Participating Providers” indicates that ISPs actually have to pay ESPN for the privilege of carrying WatchESPN content.

Netflix is different from WatchESPN in significant ways (it has substantially more traffic, for one). However, it is a popular service and seems to be flexing its leverage muscle with its Open Connect program, which provides higher-quality videos to participating ISPs. It’s plausible that someday video sources like Netflix will gain leverage, especially as broadband competition increases, and ISPs will have to pay content companies for traffic, rather than the reverse. When competitive leverage is the issue, antitrust agencies, not the FCC, have the appropriate tools to police business practices.

2. The rise of managed services in video.

Managed services are services ISPs provide to customers, like VoIP and video-on-demand (VOD). They ride on data streams that receive priority for guaranteed quality assurance, since customers won’t tolerate a jittery phone call or movie stream. Crucially, managed services are carried on the same physical broadband network but are on separate data streams that don’t interfere with a customer’s Internet service.
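In networking terms, a managed service is essentially a strict-priority queue at the access link. The toy simulation below is my own illustration of that idea, not a description of how any particular ISP provisions its network:

```python
import heapq

queue = []
order = 0  # tie-breaker so equal-priority packets keep arrival order

def enqueue(label, priority):
    # Lower priority number = served first.
    global order
    heapq.heappush(queue, (priority, order, label))
    order += 1

# A burst of best-effort Internet traffic arrives first...
for i in range(8):
    enqueue(f"web-{i}", priority=1)
# ...then a managed-service packet (say, VoIP) arrives.
enqueue("voip-0", priority=0)

# Draining the link: the VoIP packet jumps the entire best-effort backlog.
print([heapq.heappop(queue)[2] for _ in range(len(queue))])
# -> ['voip-0', 'web-0', 'web-1', ...]
```

Because the managed stream drains first, its quality is insulated from congestion in the best-effort stream sharing the same physical pipe.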

The Apple-Comcast deal, if it comes to fruition, would be the first major video offering provided as a managed service. (Comcast has experimented with managed services affiliated with Xbox and TiVo.) Verizon is also a potentially influential player since it just bought an Intel streaming TV service. Future plans are uncertain, but Verizon might launch a TV product that it could sell outside of the FiOS footprint with a bundle of cable channels, live television, and live sports.

Net neutrality proponents decry managed services as exploiting a loophole in the net neutrality rules but it’s hardly a loophole. The FCC views managed services as a social good that ISPs should invest in. The FCC’s net neutrality advisory committee last August released a report and concluded that managed services provide “considerable” benefits to consumers. The report went on to articulate principles that resemble a safe harbor for ISPs contemplating managed services. Given this consensus view, I see no reason why the FCC would threaten managed services with new rules.

3. Uncertainty about what is “the Internet” and what is “television.”

Managed services and other developments are blurring the line between the Internet and television, which makes “neutrality” on the Internet harder to define and implement. We see similar tensions in phone service. Residential voice service is already largely carried via IP. According to FCC data, 2014 will likely be the year that more people subscribe to VoIP service than plain-old-telephone service. The IP Transition reveals the legal and practical tensions when technology advances make the FCC’s regulatory silos–”phone” and “Internet”–anachronistic.

Those same technology changes and legal ambiguity are carrying over into television. TV is also increasingly carried via IP and it’s unclear where “TV” ends and “Internet video” begins. This distinction matters because television is regulated heavily while Internet video is barely regulated at all. On one end of the spectrum you have video-on-demand from a cable operator. VOD is carried over a cable operator’s broadband lines but fits under the FCC’s cable service rules. On the other end of the spectrum you have Netflix and YouTube. Netflix and YouTube are online-only video services delivered via broadband but are definitely outside of cable rules.

In the gray zone between “TV” and “Internet video” lie several services and physical networks that are not entirely in either category. These services include WatchESPN and ESPN3, which are owned by a cable network and are included in traditional television negotiations but delivered via a broadband connection.

IPTV, too, is neither entirely TV nor Internet video. AT&T’s U-verse, Verizon’s FiOS, and Google Fiber’s television product are pure or hybrid IPTV networks that “look” like cable or satellite TV to consumers but are not. AT&T, Verizon, and Google voluntarily assent to many, but not all, cable regulations even though their service occupies a legally ambiguous area.

Finally, on the horizon are managed video and gaming services and “virtual MSOs” like Apple’s or Verizon’s video products. These are probably outside of traditional cable rules–like program access rules and broadcast carriage mandates–but there is still regulatory uncertainty.

Broadband and video markets are in a unique state of flux. New business models are slowly emerging and firms are attempting to figure out each other’s leverage. However, as phone and video move out of their traditional regulatory categories and converge with broadband services, companies face substantial regulatory compliance risks. In such an environment, more than ever, the FCC should proceed cautiously and give certainty to firms. In any case, I’m optimistic that experts’ predictions will be borne out: ex ante net neutrality rules are looking increasingly rigid and inappropriate for this ever-changing market environment.

Related Posts

1. Yes, Net Neutrality is a Dead Man Walking. We Already Have a Fast Lane.
2. Who Won the Net Neutrality Case?
3. If You’re Reliant on the Internet, You Loathe Net Neutrality.

Video Double Standard: Pay-TV Is Winning the War to Rig FCC Competition Rules | http://techliberation.com/2014/03/25/video-double-standard-pay-tv-is-winning-the-war-to-rig-fcc-competition-rules/ | Tue, 25 Mar 2014 17:44:05 +0000

Most conservatives and many prominent thinkers on the left agree that the Communications Act should be updated based on the insight provided by the wireless and Internet protocol revolutions. The fundamental problem with the current legislation is its disparate treatment of competitive communications services. A comprehensive legislative update offers an opportunity to adopt a technologically neutral, consumer focused approach to communications regulation that would maximize competition, investment and innovation.

Though the Federal Communications Commission (FCC) must continue implementing the existing Act while Congress deliberates legislative changes, the agency should avoid creating new regulatory disparities on its own. Yet that is where the agency appears to be heading at its meeting next Monday.

A recent ex parte filing indicates that the FCC is proposing to “deem joint retransmission consent negotiations by two of the top four Free-TV stations in a market a per se violation of the FCC’s good-faith negotiation standard and adopt a rebuttable presumption that joint negotiations by non-top four station combinations constitute a failure to negotiate in good faith.” The intent of this proposal is to prohibit broadcasters from using a single negotiator during retransmission consent negotiations with Pay-TV distributors.

This prohibition would apply in all TV markets, no matter how small, including markets that lack effective competition in the Pay-TV segment. In small markets without effective competition, this rule would result in the absurd requirement that marginal TV stations with no economies of scale negotiate alone with a cable operator who possesses market power.

In contrast, cable operators in these markets would remain free to engage in joint negotiations to purchase their programming. The Department of Justice has issued a press release “clear[ing] the way for cable television joint purchasing” of national cable network programming through a single entity. The Department of Justice (DOJ) concluded that allowing nearly 1,000 cable operators to jointly negotiate programming prices would not facilitate retail price collusion because cable operators typically do not compete with each other in the sale of programming to consumers.

Joint retransmission consent negotiations don’t facilitate retail price collusion either. Free-TV distributors don’t compete with each other for the sale of their programming to consumers — they provide their broadcast signals to consumers for free over the air. Pay-TV operators complain that joint agreements among TV stations are nevertheless responsible for retail price increases in the Pay-TV segment, but have not presented evidence supporting that assertion. Pay-TV’s retail prices have increased at a steady clip for years irrespective of retransmission consent prices.

To the extent Pay-TV distributors complain that joint agreements increase TV station leverage in retransmission consent negotiations, there is no evidence of harm to competition. The retransmission consent rules prohibit TV stations from entering into exclusive retransmission consent agreements with any Pay-TV distributor — even though Pay-TV distributors are allowed to enter into such agreements for cable programming — and the FCC has determined that Pay- and Free-TV distributors do not compete directly for viewers. The absence of any potential for competitive harm is especially compelling in markets that lack effective competition in the Pay-TV segment, because the monopoly cable operator in such markets is the de facto single negotiator for Pay-TV distributors.

It is even more surprising that the FCC is proposing to prohibit joint sales agreements among Free-TV distributors. This recent development apparently stems from a DOJ Filing in the FCC’s incomplete media ownership proceeding.

A fundamental flaw exists in the DOJ Filing’s analysis: It failed to consider whether the relevant product market for video advertising includes other forms of video distribution, e.g., cable and online video programming distribution. Instead, the DOJ relied on precedent that considers the sale of advertising in non-video media only.

Similarly, the Department has repeatedly concluded that the purchase of broadcast television spot advertising constitutes a relevant antitrust product market because advertisers view spot advertising on broadcast television stations as sufficiently distinct from advertising on other media (such as radio and newspaper). (DOJ Filing at p.8)

The DOJ’s conclusions regarding joint sales agreements are clearly based on its incomplete analysis of the relevant product market.

Therefore, vigorous rivalry between multiple independently controlled broadcast stations in each local radio and television market ensures that businesses, charities, and advocacy groups can reach their desired audiences at competitive rates. (Id. at pp. 8-9, emphasis added)

The DOJ’s failure to consider the availability of advertising opportunities provided by cable and online video programming renders its analysis unreliable.

Moreover, the FCC’s proposed rules would result in another video market double standard. Cable, satellite, and telco video programming distributors, including DIRECTV, AT&T U-verse, and Verizon FIOS, have entered into a joint agreement to sell advertising through a single entity: NCC Media (owned by Comcast, Time Warner Cable, and Cox Media). NCC Media’s Essential Guide to planning and buying video advertising says that cable programming has surpassed 70% of all viewing to ad-supported television homes in Prime and Total Day, and 80% of Weekend daytime viewing. According to NCC, “This viewer migration to cable [programming] is one of the best reasons to shift your brand’s media allocation from local broadcast to Spot Cable,” especially with the advent of NCC’s new consolidated advertising platform. (Essential Guide at p. 8) The Essential Guide also states:

  • “It’s harder than ever to buy the GRP’s [gross rating points] you need in local broadcast in prime and local news.” (Id. at p. 16)
  • “[There is] declining viewership on broadcast with limited inventory creating a shortage of rating points in prime, local news and other dayparts.” (Id. at p. 17)
  • “The erosion of local broadcast news is accelerating.” (Id. at p. 18)
  • “Thus, actual local broadcast TV reach is at or below the cume figures for wired cable in most markets.” (Id. at p. 19)

This Essential Guide clearly indicates that cable programming is part of the relevant video advertising product market and that there is intense competition between Pay- and Free-TV distributors for advertising dollars. So why is the FCC proposing to restrict joint marketing agreements among Free-TV distributors in local markets when virtually the entire Pay-TV industry is jointly marketing all of their advertising spots nationwide?

The FCC should refrain from adopting new restrictions on local broadcasters until it can answer questions like this one. Though it is appropriate for the FCC to prevent anticompetitive practices, adopting disparate regulatory obligations that distort competition in the same product market is not good for competition or consumers. Consumer interests would be better served if the FCC decided to address video competition issues more broadly — otherwise, there might not be any Free-TV competition left to worry about.

New Book Release: “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom” | http://techliberation.com/2014/03/25/new-book-release-permissionless-innovation-the-continuing-case-for-comprehensive-technological-freedom/ | Tue, 25 Mar 2014 15:06:28 +0000

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.

The second major objective of the book, as is made clear by the title, is to make a forceful case in favor of the latter disposition of “permissionless innovation.” I argue that policymakers should unapologetically embrace and defend the permissionless innovation ethos — not just for the Internet but also for all new classes of networked technologies and platforms. Some of the specific case studies discussed in the book include: the “Internet of Things” and wearable technologies, smart cars and autonomous vehicles, commercial drones, 3D printing, and various other new technologies that are just now emerging.

I explain how precautionary principle thinking is increasingly creeping into policy discussions about these technologies. The urge to regulate preemptively in these sectors is driven by a variety of safety, security, and privacy concerns, which are discussed throughout the book. Many of these concerns are valid and deserve serious consideration. However, I argue that if precautionary-minded regulatory solutions are adopted in a preemptive attempt to head-off these concerns, the consequences will be profoundly deleterious.

The central lesson of the booklet is this: Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.

Again, that doesn’t mean we should ignore the various problems created by these highly disruptive technologies. But how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. These include:

  • education and empowerment efforts (including media literacy, digital citizenship efforts);
  • social pressure from activists, academics, the press, and the public more generally;
  • voluntary self-regulation and adoption of best practices (including privacy and security “by design” efforts); and,
  • increased transparency and awareness-building efforts to enhance consumer knowledge about how new technologies work.

Such solutions are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I?” (i.e., permissioned) nature. The problem with “top-down” traditional regulatory systems is that they often tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. It raises the cost of starting or running a business or non-business venture, and generally discourages activities that benefit society.

To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micro-managed regulatory regimes. Again, ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. To the extent that any corrective legal action is needed to address harms, ex post measures, especially via the common law (torts, class actions, etc.), are typically superior. And the Federal Trade Commission will, of course, continue to serve as a backstop here by utilizing the broad consumer protection powers it possesses under Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” In recent years, the FTC has already brought and settled many cases involving its Section 5 authority to address identity theft and data security matters. If still more is needed, enhanced disclosure and transparency requirements would certainly be superior to outright bans on new forms of experimentation or other forms of heavy-handed technological controls.

In the end, however, I argue that, to the maximum extent possible, our default position toward new forms of technological innovation must remain: “innovation allowed.” That is especially the case because, more often than not, citizens find ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes. We should have a little more faith in the ability of humanity to adapt to the challenges new innovations create for our culture and economy. We have done it countless times before. We are creative, resilient creatures. That’s why I remain so optimistic about our collective ability to confront the challenges posed by these new technologies and prosper in the process.

If you’re interested in taking a look, you can find a free PDF of the book at the Mercatus Center website or you can find out how to order it from there as an eBook. Hardcopies are also available. I’ll be doing more blogging about the book in coming weeks and months. The debate between the “permissionless innovation” and “precautionary principle” worldviews is just getting started and it promises to touch every tech policy debate going forward.


New Mercatus Paper from Daniel Lyons about Wireless Net Neutrality | http://techliberation.com/2014/03/18/new-mercatus-paper-from-daniel-lyons-about-wireless-net-neutrality/ | Tue, 18 Mar 2014 20:58:28 +0000

The Mercatus Center at George Mason University has released a new working paper by Daniel A. Lyons, professor at Boston College Law School, entitled “Innovations in Mobile Broadband Pricing.”

In 2010, the FCC passed net neutrality rules for mobile carriers and ISPs that included a “no blocking” provision (since struck down in Verizon v. FCC). The FCC prohibited mobile carriers from blocking Internet content and promised to scrutinize carriers’ non-standard pricing decisions. These broad regulations had a predictable chilling effect on firms trying new business models. For instance, Lyons describes how MetroPCS was hit with a net neutrality complaint because it allowed YouTube but not other video streaming sites on its budget LTE plan (something I’ve written on). Some critics also allege that AT&T’s Sponsored Data program is a net neutrality violation.

In his paper, Lyons explains that the FCC might still regulate mobile networks but advises against a one-size-fits-all net neutrality approach. Instead, he encourages regulatory humility in order to promote investment in mobile networks and devices and to allow new business models. For support, he points out that several developing and developed countries have permitted commercial arrangements between content companies and carriers that arguably violate principles of net neutrality. Lyons makes the persuasive argument that these “non-neutral” service bundles and pricing decisions, on the whole, rather than harming consumers, expand online access and ease non-connected populations into the Internet Age. As Lyons says,

The wide range of successful wireless innovations and partnerships at the international level should prompt U.S. regulators to rethink their commitment to a rigid set of rules that limit flexibility in American broadband markets. This should be especially true in the wireless broadband space, where complex technical considerations, rapid change, and robust competition make for anything but a stable and predictable business environment.

Further,

In the rapidly changing world of information technology, it is sometimes easy to forget that experimental new pricing models can be just as innovative as new technological developments. By offering new and different pricing models, companies can provide better value to consumers or identify niche segments that are not well-served by dominant pricing strategies.

Despite the January 2014 court decision striking down the FCC’s net neutrality rules, net neutrality is an issue that hasn’t died. Lyons’ research provides support for the position that a fixation on enforcing net neutrality, however defined, distracts policymakers from serious discussion of how to expand online access. Rules should be written with consumers and competition in mind. Wired ISPs get the lion’s share of scholars’ attention when discussing net neutrality. In an increasingly wireless world, Lyons’ paper provides important research to guide future US policies.

Toward a Post-Government Internet | http://techliberation.com/2014/03/17/toward-a-post-government-internet/ | Mon, 17 Mar 2014 13:41:53 +0000

The Internet began as a U.S. military project. For two decades, the government restricted the network to government, academic, and other authorized non-commercial uses. In 1989, the U.S. gave up control—it allowed private, commercial use of the Internet, a decision that allowed it to flourish and grow as few could imagine at the time.

Late Friday, the NTIA announced its intent to give up the last vestiges of its control over the Internet, the last real evidence that it began as a government experiment. Control of the Domain Name System’s (DNS’s) Root Zone File has remained with the agency despite the creation of ICANN in 1998 to perform the other high-level domain name functions, called the IANA functions.

The NTIA announcement is not a huge surprise. The U.S. government has always said it eventually planned to devolve IANA oversight, albeit with lapsed deadlines and changes of course along the way.

The U.S. giving up control over the Root Zone File is a step toward a world in which governments no longer assert oversight over the technology of communication. Just as freedom of the printing press was important to the founding generation in America, an unfettered Internet is essential to our right to unimpeded communication. I am heartened to see that the U.S. will not consider any proposal that involves IANA oversight by an intergovernmental body.

Relatedly, next month’s global multistakeholder meeting in Brazil will consider principles and roadmaps for the future of Internet governance. I have made two contributions to the meeting, a set of proposed high-level principles that would limit the involvement of governments in Internet governance to facilitating participation by their nationals, and a proposal to support experimentation in peer-to-peer domain name systems. I view these proposals as related: the first keeps governments away from Internet governance and the second provides a check against ICANN simply becoming another government in control of the Internet.

Shane Greenstein on bias in Wikipedia articles | http://techliberation.com/2014/03/11/greenstein/ | Tue, 11 Mar 2014 10:00:07 +0000

Shane Greenstein, Kellogg Chair in Information Technology at Northwestern’s Kellogg School of Management, discusses his recent paper, Collective Intelligence and Neutral Point of View: The Case of Wikipedia, coauthored by Harvard assistant professor Feng Zhu. Greenstein and Zhu’s paper takes a look at whether Linus’ Law applies to Wikipedia articles. Do Wikipedia articles have a slant or bias? If so, how can we measure it? And, do articles become less biased over time, as more contributors become involved? Greenstein explains his findings.
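For those curious what “measuring slant” means in practice: as I understand it, the paper builds on Gentzkow and Shapiro’s method of scoring text by phrases used disproportionately by Democratic or Republican members of Congress. A stripped-down sketch of that idea, with tiny stand-in phrase lists:

```python
# Toy slant score: (Dem-coded - Rep-coded) / total coded phrases found.
DEM_PHRASES = ["estate tax", "war in iraq"]    # stand-ins for the real lists
REP_PHRASES = ["death tax", "war on terror"]   # stand-ins for the real lists

def slant(text):
    text = text.lower()
    d = sum(text.count(p) for p in DEM_PHRASES)
    r = sum(text.count(p) for p in REP_PHRASES)
    return 0.0 if d + r == 0 else (d - r) / (d + r)

article = "Critics call it a death tax; supporters defend the estate tax."
print(slant(article))  # -> 0.0 (one phrase from each list)
```

Tracking a score like this across an article’s revision history is what lets the authors ask whether articles drift toward a neutral point of view as more contributors get involved.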

In His Bid to Buy T-Mobile, Sprint Chairman Slams US Wireless Policies that Sprint Helped Create | http://techliberation.com/2014/03/10/in-his-bid-to-buy-t-mobile-sprint-chairman-slams-us-wireless-policies-that-sprint-helped-create/ | Mon, 10 Mar 2014 20:30:17 +0000

Sprint’s Chairman, Masayoshi Son, is coming to Washington to explain how wireless competition in the US would be improved if only there were less of it.

After buying Sprint last year for $21.6 billion, he has floated plans to buy T-Mobile. When antitrust officials voiced their concerns about the proposed plan’s potential impact on wireless competition, Son decided to respond with an unusual strategy that goes something like this: The US wireless market isn’t competitive enough, so policymakers need to approve the merger of the third and fourth largest wireless companies in order to improve competition, because going from four nationwide wireless companies to three will make things even more competitive. Got it? Me neither.

An argument like that takes nerve, especially now. When AT&T attempted to buy T-Mobile a few years ago, Sprint led the charge against it, arguing vociferously that permitting the market to consolidate from four to only three nationwide wireless companies would harm innovation and wireless competition. After the Administration blocked the merger, T-Mobile rebounded in the marketplace, which immediately made it the poster child for the Administration’s antitrust policies.

It also makes Son’s plan a non-starter. Allowing Sprint to buy T-Mobile three years after telling AT&T it could not would take incredible regulatory nerve. It would be hard to convince anyone that such an immediate about-face in favor of the company that fought the previous merger the hardest isn’t motivated by a desire to pick winners and losers in the marketplace or even outright cronyism. That would be true in almost any circumstance, but is doubly true now that T-Mobile is flourishing. It’s hard to swallow the idea that it would harm competition if a nationwide wireless company were to buy T-Mobile — unless the purchaser is Sprint.

The special irony here is that Son has built his reputation on a knack for relentless innovation. When he bought Sprint, he expressed confidence that Sprint would become the number 1 company in the world. But, a year later, it is T-Mobile that is rebounding in the marketplace, even though T-Mobile has fewer customers and less spectrum than Sprint. Buying into T-Mobile’s success now wouldn’t improve Son’s reputation for innovation, but it would double down on his confidence. I expect US regulators will want to see how he does with Sprint before betting the wireless competition farm on a prodigal Son.

TacoCopters are Legal (for Now) | http://techliberation.com/2014/03/07/tacocopters-are-legal-for-now/ | Fri, 07 Mar 2014 16:08:17 +0000

Yesterday, an administrative judge ruled in Huerta v. Pirker that the FAA’s “rules” banning commercial drones don’t have the force of law because the agency never followed the procedures required to enact them as an official regulation. The ruling means that any aircraft that qualifies as a “model aircraft” plausibly operates under laissez-faire. Entrepreneurs are free for now to develop real-life TacoCopters, and Amazon can launch its Prime Air same-day delivery service.

Laissez-faire might not last. The FAA could appeal the ruling, try to issue an emergency regulation, or simply wait 18 months or so until its current regulatory proceedings culminate in regulations for commercial drones. If they opt for the last of these, then the drone community has an interesting opportunity to show that regulations for small commercial drones do not pass a cost-benefit test. So start new drone businesses, but as Matt Waite says, “Don’t do anything stupid. Bad actors make bad policy.”

Kudos to Brendan Schulman, the attorney for Pirker, who has been a tireless advocate for the freedom to innovate using drone technology. He is on Twitter at @dronelaws, and if you’re at all interested in this issue, he is a great person to follow.

Repeal Satellite Television Law | http://techliberation.com/2014/03/04/repeal-satellite-television-law/ | Tue, 04 Mar 2014 21:56:47 +0000

The House Subcommittee on Communications and Technology will soon consider whether to reauthorize the Satellite Television Extension and Localism Act (STELA), which is set to expire at the end of the year. A hearing scheduled for this week has been postponed on account of weather.

Congress ought to scrap the current compulsory license in STELA that governs the importation of distant broadcast signals by Direct Broadcast Satellite providers. STELA is redundant and outdated. The 25-year-old statute invites rent-seeking every time it comes up for reauthorization.

At the same time, Congress should also resist calls to use the STELA reauthorization process to consider retransmission consent reforms.  The retransmission consent framework is designed to function like the free market and is not the problem.

Those advocating retransmission consent changes exaggerate the significance of rising retransmission consent fees and of the blackouts that occasionally occur when content producers and pay-tv providers fail to reach agreement. They are also at fault for attempting to pass the blame. DIRECTV dropped the Weather Channel in January, for example, rather than agree to pay “about a penny a subscriber” more than it had in the past.

A DIRECTV executive complained at a hearing in June that “between 2010 and 2015, DIRECTV’s retransmission consent costs will increase 600% per subscriber.” As I and others have noted in the past, retransmission consent fees account for an extremely small share of pay-tv revenue. Multichannel News has estimated that only two cents of the average dollar of cable revenue goes to retransmission consent.

According to SNL Kagan, retransmission-consent fees were expected to be about 1.2% of total video revenue in 2010, rising to 2% by 2014. At that rate, retrans currently makes up about 3% of total video expenses.
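Those two figures are mutually consistent: a fee equal to 2% of revenue is about 3% of expenses whenever expenses run to roughly two-thirds of revenue. A quick check, where the expense ratio is my assumption for illustration:

```python
fee_share_of_revenue = 0.02  # SNL Kagan's 2014 estimate, cited above
expense_to_revenue   = 0.67  # assumed: expenses ~ two-thirds of revenue

print(f"{fee_share_of_revenue / expense_to_revenue:.1%} of expenses")  # -> 3.0%
```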

Among other things, DIRECTV recommended that Congress use the STELA reauthorization process to outlaw blackouts or permit pay-tv providers to deliver replacement distant broadcast signals during local blackouts.  In effect, DIRECTV wants to eliminate the bargaining power of content producers, and force them to offer their channels for retransmission at whatever price DIRECTV is willing to pay.

There is a need for regulatory reform in the video marketplace.  Unfortunately, proposals such as these do not advance that goal.  The government intervention DIRECTV is seeking would simply add to the problem by forcing local broadcasters to subsidize pay-tv providers instead of being allowed to recover the fair market value of their programming.  Broadcaster Marci Burdick was correct when she observed that regulation which unfairly siphons local broadcast revenue could have the unintended effect of reducing the “quality and diversity of broadcast programming, including local news, public affairs, severe weather, and emergency alerts, available both via [pay-tv providers] and free, over-the-air to all Americans.”

Broad regulatory reform of the video marketplace can and should be considered as part of the process House Energy and Commerce Committee Chairman Fred Upton (R-MI) and Communications and Technology Subcommittee Chairman Greg Walden (R-OR) recently announced by which the committee will examine and update the Communications Act.

What’s Wrong with Two-Sided Markets? | http://techliberation.com/2014/02/24/whats-wrong-with-two-sided-markets/ | Mon, 24 Feb 2014 14:53:47 +0000

It seems to me that a lot of the angst about the Comcast-Netflix paid transit deal results from a general discomfort with two-sided markets rather than any specific harm caused by the deal. But is there any reason to be suspicious of two-sided markets per se?

Consider a (straight) singles bar. Men and women come to the singles bar to meet each other. On some nights, it’s ladies’ night, and women get in free and get a free drink. On other nights, it’s not ladies’ night, and both men and women have to pay to get in and buy drinks.

There is no a priori reason to believe that ladies’ night is more just or efficient than other nights. The owner of the bar will benefit if the bar is a good place for social congress, and she will price accordingly. If men in the area are particularly shy, she may have to institute a “mens’ night” to get them to come out. If women start demanding too many free drinks, she may have to put an end to ladies’ night (even if some men benefit from the presence of tipsy women, they may not be as willing as the women to pay the full cost of all of the drinks). Whether a market should be two-sided or one-sided is an empirical question, and the answer can change over time depending on circumstances.
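Here is a tiny numerical version of that argument; every demand parameter is invented for illustration. Each side’s turnout depends on the other side’s turnout, and a grid search finds the profit-maximizing cover charges:

```python
import itertools

def turnout(price, base, cross, other_side):
    # Linear demand: falls with the cover charge, rises with the other side's turnout.
    return max(0.0, base - price + cross * other_side)

def equilibrium(p_m, p_w, iters=100):
    # Fixed-point iteration: each side's turnout feeds back on the other's until it settles.
    m = w = 0.0
    for _ in range(iters):
        m = turnout(p_m, base=40, cross=0.8, other_side=w)  # men value women's presence highly
        w = turnout(p_w, base=40, cross=0.4, other_side=m)  # women value men's presence less
    return m, w

best = None
for p_m, p_w in itertools.product(range(41), repeat=2):  # cover charges $0-$40
    m, w = equilibrium(p_m, p_w)
    profit = p_m * m + p_w * w
    if best is None or profit > best[0]:
        best = (profit, p_m, p_w)

profit, p_m, p_w = best
print(f"profit-maximizing cover: men ${p_m}, women ${p_w} (profit ${profit:.0f})")
# -> with these weights, a ladies'-night pattern: men $30, women $10
```

Flip the cross-side weights and the subsidy flips sides too, which is exactly the sense in which neither ladies’ night nor uniform pricing is inherently more just or efficient.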

Some commentators seem to be arguing that two-sided markets are fine as long as the market is competitive. Well, OK, suppose the singles bar is the only singles bar in a 100-mile radius? How does that change the analysis above? Not at all, I say.

Analysis of two-sided markets can get very complex, but we shouldn’t let that complexity turn into reflexive opposition.

Google Fiber: The Uber of Broadband http://techliberation.com/2014/02/21/google-fiber-the-uber-of-broadband/ http://techliberation.com/2014/02/21/google-fiber-the-uber-of-broadband/#comments Fri, 21 Feb 2014 16:01:23 +0000 http://techliberation.com/?p=74263

Google’s announcement this week of plans to expand Google Fiber to dozens more cities got me thinking about the broadband market and some parallels to transportation markets. Taxi cab and broadband companies alike are seeing their business plans undermined by the emergence of nimble Silicon Valley firms: Uber and Google Fiber, respectively.

The incumbent operators in both cases were subject to costly regulatory obligations in the past, but in return they were given some protection from competitors. The taxi medallion system and local cable franchise requirements made new entry difficult. Uber and Google have managed to break into the market through popular innovations, persistence in working with local regulators, and motivated supporters. Now, in both industries, localities are considering forbearing from regulations and welcoming a competitor that poses an economic threat to the existing operators.

Notably, Google Fiber will not be subject to the extensive build-out requirements imposed on cable companies, which typically built their networks according to local franchise agreements in the 1970s and 1980s. Google, in contrast, generally does substantial market research to see whether there is an adequate uptake rate among households in particular areas. Neighborhoods that show sufficient interest in Google Fiber become Fiberhoods.

Similarly, companies like Uber and Lyft are exempted from many of the regulations governing taxis. Taxi rates are regulated, and drivers have little discretion in deciding whom to transport, for instance. Uber and Lyft drivers, in contrast, are not price-regulated and can allow rates to rise and fall with demand. Further, Uber and Lyft have a two-way rating system: drivers rate passengers and passengers rate drivers via smartphone apps. This innovation lowers costs and improves safety: the rider who throws up in the car after bar-hopping, who verbally or physically abuses drivers (one Chicago cab driver told me he was held up at gunpoint several times per year), or who is constantly late will eventually have a hard time hailing an Uber or Lyft. The rating system naturally forces out expensive riders (and ill-tempered drivers).
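
For illustration only, here is a minimal Python sketch of how such a two-way rating pool might work. It is a toy, not Uber’s or Lyft’s actual implementation, and the class name and threshold are invented: both sides accumulate ratings, and either side drops out of the matching pool once its average falls too low.

    from collections import defaultdict

    class RatingPool:
        """Toy two-way rating pool: riders and drivers rate each other,
        and either side stops being matched if its average falls too low."""

        def __init__(self, threshold=3.5):
            self.ratings = defaultdict(list)  # user id -> list of 1-5 stars
            self.threshold = threshold

        def rate(self, user_id, stars):
            self.ratings[user_id].append(stars)

        def average(self, user_id):
            scores = self.ratings[user_id]
            return sum(scores) / len(scores) if scores else 5.0  # new users start clean

        def matchable(self, user_id):
            return self.average(user_id) >= self.threshold

    pool = RatingPool()
    for stars in (5, 1, 2, 1):         # a rider who abuses drivers
        pool.rate("rider-42", stars)
    print(pool.matchable("rider-42"))  # False: a hard time hailing a ride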

Interestingly, support for and opposition to Uber and Google Fiber cut across partisan lines (and across households–my wife, after hearing my argument, is not as sanguine about these upstarts). Because these companies upset long-held expectations, express or implied, strong opposition remains. Nevertheless, states and localities should welcome the rapid expansion of both Uber and Google Fiber.

The taxi medallion systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly situated competitors and resist the temptation to remedy past errors with more distortions. Of course, there is a decades-long debate about when deregulation turns into subsidies, and this conversation applies to Uber and Google Fiber.

That debate is important, but regulators and policymakers should take every chance to roll back the rules of the past–not layer on more mandates in an ill-conceived attempt to “level the playing field.” Transportation and broadband markets are changing for the better with more competition and localities should generally stand aside.

Announcing btcvol.info, Your One-Stop Shop for Bitcoin Volatility Data http://techliberation.com/2014/02/19/announcing-btcvol-info-your-one-stop-shop-for-bitcoin-volatility-data/ http://techliberation.com/2014/02/19/announcing-btcvol-info-your-one-stop-shop-for-bitcoin-volatility-data/#comments Wed, 19 Feb 2014 15:26:11 +0000 http://techliberation.com/?p=74260

The volatility of Bitcoin prices is one of the strongest headwinds the currency faces. Unfortunately, most of the discussion surrounding Bitcoin volatility has so far been anecdotal; my quantitative analysis last month was an exception. I want to make it easier for people to move beyond anecdotes, so I have created a Bitcoin volatility index at btcvol.info, which I’m hoping can become or inspire a standard metric that people can agree on.

The volatility index at btcvol.info is based on daily closing prices for Bitcoin as reported by CoinDesk. I compute each day’s log return (the difference in the natural log of the price from one day to the next) and then take the sample standard deviation of those daily returns over the preceding 30 days. The result is an estimate of how spread out daily price fluctuations are: volatility.
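
In code, the calculation is only a few lines. Here is a minimal sketch in Python using pandas; the CSV file name and column names are placeholders for illustration, not the site’s actual data pipeline:

    import numpy as np
    import pandas as pd

    # Daily closing prices; "btc_close.csv" and its column names are
    # illustrative placeholders.
    prices = pd.read_csv("btc_close.csv", parse_dates=["date"],
                         index_col="date")["close"]

    log_returns = np.log(prices).diff()               # daily log returns
    volatility = log_returns.rolling(30).std(ddof=1)  # 30-day sample std dev

    print(volatility.dropna().tail())

Scaling the result by the square root of 365 would annualize it, since Bitcoin trades every day of the year.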

The site also includes a basic API, so feel free to integrate this volatility measure into your site or use it for data analysis.
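
By way of example, pulling the latest value into a script might look like the following. The endpoint path and response fields here are assumptions on my part, so check the site itself for the actual API details:

    import requests

    # Assumed endpoint and response shape, for illustration only.
    resp = requests.get("http://btcvol.info/latest")
    resp.raise_for_status()
    print(resp.json())  # e.g., a date plus the latest volatility estimate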

I of course hope that Bitcoin volatility becomes much lower over time. I expect both the maturing of the ecosystem and the introduction of a Bitcoin derivatives market to push volatility down. Having one or more volatility metrics will help us determine whether these or other factors make a difference.

You can support btcvol.info by spreading the word or of course by donating via Bitcoin to the address at the bottom of the site.

Net Neutrality Opinion Indicates Internet Service Providers Are Entitled to First Amendment Protection http://techliberation.com/2014/02/17/net-neutrality-opinion-indicates-internet-service-providers-are-entitled-to-first-amendment-protection/ http://techliberation.com/2014/02/17/net-neutrality-opinion-indicates-internet-service-providers-are-entitled-to-first-amendment-protection/#comments Mon, 17 Feb 2014 15:43:35 +0000 http://techliberation.com/?p=74252

Verizon v. FCC, the court decision overturning the Federal Communications Commission’s (FCC) net neutrality rules, didn’t rule directly on the First Amendment issues.[1] It did, however, reject the reasoning of net neutrality advocates who claim Internet service providers (ISPs) are not entitled to freedom of speech.

The court recognized that, in terms of the functionality that it offers consumers and the economic relationships among industry participants, the Internet is as similar to analog cable networks as it is to analog telephone networks. As a result, the court considered most of the issues in the net neutrality case to be “indistinguishable” from those addressed in Midwest Video II, a seminal case addressing the FCC’s authority over cable systems. The court’s emphasis on the substantive similarities between analog cable services, which are clearly entitled to First Amendment protection, indicates that ISPs are likewise entitled to protection.

Net neutrality advocates argued that ISPs are not First Amendment “speakers” because ISPs do not exercise editorial discretion over Internet content. In essence, these advocates argued that ISPs forfeited their First Amendment rights as a result of their “actual conduct” in the marketplace.

Though the court didn’t address the First Amendment issues directly, its reasoning on the common carrier issues indicates that the “actual conduct” of ISPs is legally irrelevant to their status as First Amendment speakers.

In Verizon v. FCC, the FCC argued that its net neutrality rules couldn’t be considered common carrier obligations with respect to edge providers because ISPs did not have direct commercial relationships with edge providers. But the court concluded that the nature of preexisting commercial relationships between ISPs and edge providers was irrelevant to the legal status of ISPs:

[T]he Commission appears to misunderstand the nature of the inquiry in which we must engage. The question is not whether, absent the [net neutrality rules], broadband providers would or did act as common carriers with respect to edge providers; rather, the question is whether, given the rules imposed by the [FCC], broadband providers are now obligated to act as common carriers.

Verizon v. FCC, No. 11-1355 at 52 (2014) (emphasis in original).

A court must engage in a similar inquiry when determining whether ISPs are “speakers” entitled to First Amendment protection. The question is not whether ISPs would or actually have exercised editorial discretion in the past. There is no Constitutional requirement that ISPs (or anyone else) must speak at the earliest opportunity in order to preserve their right to speak in the future. The question is whether ISPs have the legal option of speaking — i.e., exercising editorial discretion.[2]

Of course, everyone knows ISPs have the ability to exercise such discretion. The court noted there was little dispute regarding the FCC’s finding that ISPs have the technological ability to distinguish among different types of Internet traffic. Indeed, ISPs’ ability to exercise editorial discretion is the very reason the FCC adopted its net neutrality rules. It is also for this reason that, for First Amendment purposes, ISPs are substantially similar to television broadcasters and analog cable operators, to whom First Amendment protections have already been applied.

Some net neutrality advocates attempt to skirt this fact by arguing that ISPs don’t “need” to exercise editorial discretion because today’s ISPs are less capacity constrained than broadcasters and analog cable operators. The essence of this argument is that the First Amendment permits the government to abridge a potential speaker’s freedom of speech if, in the government’s subjective view, the speaker would be able to get along just fine without speaking.

In their zeal to defend net neutrality, these advocates appear to have forgotten that, no matter how comfortable or familiar it may be, a muzzle is still a muzzle. The courts have not.

In Verizon v. FCC, the court recognized that the relationships among ISPs, their subscribers, and edge providers are “indistinguishable” from those present in the analog cable market addressed by the Supreme Court in Midwest Video II:

The Midwest Video II cable operators’ primary “customers” were their subscribers, who paid to have programming delivered to them in their homes. There, as here, the Commission’s regulations required the regulated entities to carry the content of third parties to these customers—content the entities otherwise could have blocked at their discretion. Moreover, much like the rules at issue here, the Midwest Video II regulations compelled the operators to hold open certain channels for use at no cost—thus permitting specified programmers to “hire” the cable operators’ services for free.

Verizon v. FCC, No. 11-1355 at 54 (2014).

The court rejected the FCC’s arguments attempting to distinguish the Internet from cable — arguments that are substantially the same as those advanced by net neutrality advocates in the First Amendment context.

First, the court was unmoved by the argument that Internet content is delivered to end users only when an end user “requests” it, i.e., by clicking on a link. The court noted that cable customers could not actually receive content on a particular cable channel either unless they affirmatively chose to watch those channels, i.e., by changing the channel. (See id.) The court recognized that, “The access requested by [cable video] programmers in Midwest Video II, like the access requested by edge providers here, is the ability to have their communications transmitted to end-user subscribers if those subscribers so desire.” (Id.)

Second, the court considered the capacity differences between the analog cable systems at issue in Midwest Video II and the broadband Internet to be irrelevant to common carriage analysis:

Whether an entity qualifies as a carrier does not turn on how much content it is able to carry or the extent to which other content might be crowded out. A short train is no more a carrier than a long train, or even a train long enough to serve every possible customer.

Verizon v. FCC, No. 11-1355 at 55 (2014). The capacity issue is irrelevant to the applicability of the First Amendment for the same reason. A speaker has the right to refrain from speaking even if speaking would be undemanding.

Finally, the court concluded that the FCC could not distinguish its net neutrality rules from the rules at issue in Midwest Video II using another variation on the “actual conduct” argument. In Midwest Video II, the Supreme Court emphasized that the FCC cable regulations in question “transferred control of the content of access cable channels from cable operators to members of the public.” Midwest Video II, 440 U.S. at 700. In Verizon v. FCC, the FCC argued that its net neutrality rules had not “transferred control” over the Internet content transmitted by ISPs because, “unlike cable systems, Internet access providers traditionally have not decided what sites their end users visit.” (FCC Brief at 65) The court did not consider the “actual conduct” of ISPs a relevant distinction:

The [net neutrality] regulations here accomplish the very same sort of transfer of control: whereas previously broadband providers could have blocked or discriminated against the content of certain edge providers, they must now carry the content those edge providers desire to transmit.

Verizon v. FCC, No. 11-1355 at 56 (2014).

Based on the court’s repeated emphasis on the substantive similarities between analog cable services, which the Supreme Court has held are “speakers,” and Internet services, it should now be obvious that ISPs are also “speakers” entitled to First Amendment protection. The use of Internet protocol rather than analog cable technology to deliver video services changes neither the economic nor the First Amendment considerations applicable to network operators, edge providers, and end users.

To be clear, application of the First Amendment to ISPs does not automatically mean that net neutrality rules would be unconstitutional. Whether a particular regulation violates the First Amendment depends on the applicable level of judicial scrutiny, the importance of the government interest at stake, and the degree of relatedness between the law and its purpose. Whether net neutrality rules would survive First Amendment scrutiny would thus depend in part on their own terms and the government’s rationale for adopting them.

That is why the applicability of the First Amendment to ISPs is so important. When Constitutional rights are at stake, the government has stronger incentives to adopt regulations that are well-reasoned and likely to achieve their intended goals than it does when it makes rules in the ordinary administrative context.

- - -

[1] The doctrine of constitutional avoidance counsels against deciding a constitutional question when a case can be resolved on some other basis. Once the court concluded that the FCC exceeded its authority in adopting the anti-blocking and anti-discrimination rules, the court had no need to address their constitutionality.

[2] Even if the “actual conduct” argument were valid, it would not control application of the First Amendment to ISPs. ISPs’ failure to exercise editorial discretion was motivated in part by FCC policies that chilled or prohibited the exercise of such discretion:

  • In the dial-up era, telephone companies were subject to common carrier regulations prohibiting their exercise of editorial discretion over Internet content transmitted by third-party companies (e.g., America Online, which exercised editorial discretion over Internet content) while reducing economic incentives for telephone companies to provide their own Internet services;
  • Though the FCC exempted cable broadband services from common carrier regulation relatively early in the broadband era, the FCC simultaneously asked whether and to what extent it should impose editorial restrictions on such services;
  • In conjunction with its subsequent order extending the cable broadband exemption to telephone companies, the FCC issued a Broadband Policy Statement announcing that it would take action if it observed ISPs exercising editorial discretion; and
  • After the DC Circuit ruled that the Broadband Policy Statement was unenforceable, the FCC adopted the net neutrality rules that the court struck down in Verizon v. FCC.

This history indicates that the “actual conduct” of ISPs evidences nothing more than their intent to comply with FCC rules and policies. It would be absurd to conclude that ISPs forfeited their right to First Amendment protection by virtue of their regulatory compliance.
