Technology Liberation Front
Keeping politicians' hands off the Net & everything else related to technology

DRM for Drones Will Fail
Wed, 28 Jan 2015

I suppose it was inevitable that the DRM wars would come to the world of drones. Reporting for the Wall Street Journal today, Jack Nicas notes that:

In response to the drone crash at the White House this week, the Chinese maker of the device that crashed said it is updating its drones to disable them from flying over much of Washington, D.C. SZ DJI Technology Co. of Shenzhen, China, plans to send a firmware update in the next week that, if downloaded, would prevent DJI drones from taking off within the restricted flight zone that covers much of the U.S. capital, company spokesman Michael Perry said.

Washington Post reporter Brian Fung explains what this means technologically:

The [DJI firmware] update will add a list of GPS coordinates to the drone’s computer telling it where it can and can’t go. Here’s how that system works generally: When a drone comes within five miles of an airport, Perry explained, an altitude restriction gets applied to the drone so that it doesn’t interfere with manned aircraft. Within 1.5 miles, the drone will be automatically grounded and won’t be able to fly at all, requiring the user to either pull away from the no-fly zone or personally retrieve the device from where it landed. The concept of triggering certain actions when reaching a specific geographic area is called “geofencing,” and it’s a common technology in smartphones. Since 2011, iPhone owners have been able to create reminders that alert them when they arrive at specific locations, such as the office.
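To make the mechanics concrete, here is a minimal sketch of the geofencing logic Fung describes, assuming a hypothetical no-fly coordinate list and using the standard haversine formula for great-circle distance. Everything here (the coordinates, function names, and list format) is illustrative; it is not DJI's actual firmware code:

```python
# A minimal sketch of the geofencing logic described above, assuming a
# hypothetical no-fly list; this is an illustration, not DJI's firmware.
import math

EARTH_RADIUS_MI = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

# Hypothetical no-fly coordinates shipped with a firmware update.
NO_FLY_POINTS = [(38.8512, -77.0402)]  # Reagan National, for illustration

def geofence_restriction(drone_lat, drone_lon):
    """Return the restriction the firmware would apply at this GPS fix."""
    for lat, lon in NO_FLY_POINTS:
        distance = haversine_miles(drone_lat, drone_lon, lat, lon)
        if distance <= 1.5:
            return "grounded"         # within 1.5 miles: no flight at all
        if distance <= 5.0:
            return "altitude-capped"  # within 5 miles: altitude restriction
    return "unrestricted"
```

In a real implementation the firmware would re-check the GPS fix continuously during flight, but the decision logic reduces to distance thresholds like these.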

This is complete overkill and it almost certainly will not work in practice. First, this is just DRM for drones, and just as DRM has failed in most other cases, it will fail here as well. If you sell somebody a drone that doesn’t work within a 15-mile radius of a major metropolitan area, they’ll be online minutes later looking for a hack to get it working properly. And you better believe they will find one.

Second, other companies or even non-commercial innovators will just use such an opportunity to promote their DRM-free drones, making the restrictions on other drones futile.

Perhaps, then, the government will push for all drone manufacturers to include DRM on their drones, but that’s even worse. The idea that the Washington, DC metro area should be a completely drone-free zone is hugely troubling. We might as well put up a big sign at the edge of town that says, “Innovators Not Welcome!”

And this isn’t just about commercial operators either. What would such a city-wide restriction mean for students interested in engineering or robotics in local schools? Or how about journalists who might want to use drones to help them report the news?

For these reasons, a flat ban on drones throughout this or any other city just shouldn’t fly.

Moreover, the logic behind this particular technopanic is particularly silly. It’s like saying that we should install some sort of kill switch in all automobile ignitions so that they will not start anywhere in the DC area on the off chance that one idiot might use their car to drive into the White House fence. We need clear and simple rules for drone use, not technically unworkable and unenforceable bans on all private drone use in major metro areas.

[Update 1/30: Washington Post reporter Matt McFarland was kind enough to call me and ask for comment on this matter. Here’s his excellent story on “The case for not banning drone flights in the Washington area,” which included my thoughts.]

Some Initial Thoughts on the FTC Internet of Things Report
Wed, 28 Jan 2015

Yesterday, the Federal Trade Commission (FTC) released its long-awaited report on “The Internet of Things: Privacy and Security in a Connected World.” The 55-page report is the result of a lengthy staff exploration that kicked off with an FTC workshop held on November 19, 2013.

I’m still digesting all the details in the report, but I thought I’d offer a few quick thoughts on some of its major findings and recommendations. As I’ve noted here before, I’ve made the Internet of Things my top priority over the past year and have penned several essays about it here, as well as a big new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology shortly. (Also, here’s a compendium of most of what I’ve done on the issue thus far.)

I’ll begin with a few general thoughts on the FTC’s report and its overall approach to the Internet of Things and then discuss a few specific issues that I believe deserve attention.

Big Picture, Part 1: Should Best Practices Be Voluntary or Mandatory?

Generally speaking, the FTC’s report contains a variety of “best practice” recommendations to get Internet of Things innovators to take steps to ensure greater privacy and security “by design” in their products. Most of those recommended best practices are sensible as general guidelines for innovators, but the really sticky question here continues to be this: When, if ever, should “best practices” become binding regulatory requirements?

The FTC does a bit of a dance when answering that question. Consider how, in the executive summary of the report, the Commission answers the question regarding the need for additional privacy and security regulation: “Commission staff agrees with those commenters who stated that there is great potential for innovation in this area, and that IoT-specific legislation at this stage would be premature.” But, just a few lines later, the agency (1) “reiterates the Commission’s previous recommendation for Congress to enact strong, flexible, and technology-neutral federal legislation to strengthen its existing data security enforcement tools and to provide notification to consumers when there is a security breach;” and (2) “recommends that Congress enact broad-based (as opposed to IoT-specific) privacy legislation.”

Here and elsewhere, the agency repeatedly stresses that it is not seeking IoT-specific regulation, merely “broad-based” digital privacy and security legislation. The problem is that once you understand what the IoT is all about, you come to realize that this largely represents a distinction without a difference. The Internet of Things is simply the extension of the Net into everything we own or come into contact with. Thus, the idea that the agency is not seeking IoT-specific rules sounds terrific until you realize that it is actually seeking something far more sweeping: greater regulation of all online / digital interactions. And because “the Internet” and “the Internet of Things” will eventually be considered synonymous (if they are not already), this notion that the agency is not proposing technology-specific regulation is really quite silly.

Now, it remains unclear whether there exists any appetite on Capitol Hill for “comprehensive” legislation of any variety – although perhaps we’ll learn more about that possibility when the Senate Commerce Committee hosts a hearing on these issues on February 11. But at least thus far, “comprehensive” or “baseline” digital privacy and security bills have been non-starters.

And that’s for good reason in my opinion: Such regulatory proposals could take us down the path that Europe charted in the late 1990s with onerous “data directives” and suffocating regulatory mandates for the IT / computing sector. The results of this experiment have been unambiguous, as I documented in congressional testimony in 2013. I noted there how America’s Internet sector came to be the envy of the world while it was hard to name any major Internet company from Europe. Whereas America embraced “permissionless innovation” and let creative minds develop one of the greatest success stories in modern history, the Europeans adopted a “Mother, May I” regulatory approach for the digital economy. America’s more flexible, light-touch regulatory regime leaves more room for competition and innovation compared to Europe’s top-down regime. Digital innovation suffered over there while it blossomed here.

That’s why we need to be careful about adopting the sort of “broad-based” regulatory regime that the FTC recommends in this and previous reports.

Big Picture, Part 2: Does the FTC Really Need More Authority?

Something else is going on in this report that has also been happening in all the FTC’s recent activity on digital privacy and security matters: The agency has been busy laying the groundwork for its own expansion.

In this latest report, for example, the FTC argues that

Although the Commission currently has authority to take action against some IoT-related practices, it cannot mandate certain basic privacy protections… The Commission has continued to recommend that Congress enact strong, flexible, and technology-neutral legislation to strengthen the Commission’s existing data security enforcement tools and require companies to notify consumers when there is a security breach.

In other words, this agency wants more authority. And we are talking about sweeping authority here that would transcend its already sweeping authority to police “unfair and deceptive practices” under Section 5 of the FTC Act. Let’s be clear: It would be hard to craft a law that grants an agency more comprehensive and open-ended consumer protection authority than Section 5. The meaning of those terms — “unfairness” and “deception” — has always been a contentious matter, and at times the agency has abused its discretion by exploiting that ambiguity.

Nonetheless, Sec. 5 remains a powerful enforcement tool for the agency and one that has been wielded aggressively in recent years to police digital economy giants and small operators alike. Generally speaking, I’m alright with most Sec. 5 enforcement, especially since that sort of retrospective policing of unfair and deceptive practices is far less likely to disrupt permissionless innovation in the digital economy. That’s because it does not subject digital innovators to the sort of “Mother, May I” regulatory system that European entrepreneurs face. But an expansion of the FTC’s authority via more “comprehensive, baseline” privacy and security regulatory policies threatens to convert America’s more sensible bottom-up and responsive regulatory system into the sort of innovation-killing regime we see on the other side of the Atlantic.

Here’s the other thing we can’t forget when it comes to the question of what additional authority to give the FTC over privacy and security matters: The FTC is not the end of the enforcement story in America. Other enforcement mechanisms exist, including privacy torts, class action litigation, property and contract law, state enforcement agencies, and other targeted privacy statutes. I’ve summarized all these additional enforcement mechanisms in my recent law review article referenced above. (See section VI of the paper.)

FIPPS, Part 1: Notice & Choice vs. Use-Based Restrictions

Next, let’s drill down a bit and examine some of the specific privacy and security best practices that the agency discusses in its new IoT report.

The FTC report highlights how the IoT creates serious tensions for many traditional Fair Information Practice Principles (FIPPs). The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. But the report is mostly focused on notice and choice as well as data minimization.

When it comes to notice and choice, the agency wants to keep hope alive that it will still be applicable in an IoT world. I’m sympathetic to this effort because it is quite sensible for all digital innovators to do their best to provide consumers with adequate notice about data collection practices and then give them sensible choices about it. Yet, like the agency, I agree that “offering notice and choice is challenging in the IoT because of the ubiquity of data collection and the practical obstacles to providing information without a user interface.”

The agency has a nuanced discussion of how context matters in providing notice and choice for IoT, but one can’t help but think that even they must realize that the game is over, to some extent. The increasing miniaturization of IoT devices and the ease with which they suck up data means that traditional approaches to notice and choice just aren’t going to work all that well going forward. It is almost impossible to envision how a rigid application of traditional notice and choice procedures would work in practice for the IoT.

Relatedly, as I wrote here last week, the Future of Privacy Forum (FPF) recently released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” that notes how FIPPs “are a valuable set of high-level guidelines for promoting privacy, [but] given the nature of the technologies involved, traditional implementations of the FIPPs may not always be practical as the Internet of Things matures.” That’s particularly true of the notice and choice FIPPs.

But the FTC isn’t quite ready to throw in the towel and make the complete move toward “use-based restrictions,” as many academics have. (Note: I have a lengthy discussion of this migration toward use-based restrictions in my law review article in section IV.D.) Use-based restrictions would focus on specific uses of data that are particularly sensitive and for which there is widespread agreement they should be limited or disallowed altogether. But use-based restrictions are, ironically, controversial from both the perspective of industry and privacy advocates (albeit for different reasons, obviously).

The FTC doesn’t really know where to go next with use-based restrictions. On one hand, the agency says it “has incorporated certain elements of the use-based model into its approach” to enforcement in the past. On the other hand, the agency says it has concerns “about adopting a pure use-based model for the Internet of Things,” since it may not go far enough in addressing the growth of more widespread data collection, especially of more sensitive information.

In sum, the agency appears to be keeping the door open on this front and hoping that a best-of-all-worlds solution miraculously emerges that extends both notice and choice and use-based limitations as the IoT expands. But the agency’s new report doesn’t give us any sort of blueprint for how that might work, and that’s likely for good reason: it probably won’t work all that well in practice, and there will be serious costs in terms of lost innovation if regulators try to force unworkable solutions on this rapidly evolving marketplace.

FIPPS, Part 2: Data Minimization

The biggest policy fight that is likely to come out of this report involves the agency’s push for data minimization. The report recommends that, to minimize the risks associated with excessive data collection:

companies should examine their data practices and business needs and develop policies and practices that impose reasonable limits on the collection and retention of consumer data. However, recognizing the need to balance future, beneficial uses of data with privacy protection, staff’s recommendation on data minimization is a flexible one that gives companies many options. They can decide not to collect data at all; collect only the fields of data necessary to the product or service being offered; collect data that is less sensitive; or deidentify the data they collect. If a company determines that none of these options will fulfill its business goals, it can seek consumers’ consent for collecting additional, unexpected categories of data…

This is an unsurprising recommendation in light of the fact that, in previous major speeches on the issue, FTC Chairwoman Edith Ramirez argued that, “information that is not collected in the first place can’t be misused,” and that:

The indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices. And remember, not all data is created equally. Just as there is low quality iron ore and coal, there is low quality, unreliable data. And old data is of little value.

In my forthcoming law review article, I discuss the problem with such reasoning at length and note:

if Chairwoman Ramirez’s approach to a preemptive data use “commandment” were enshrined into a law that said, “Thou shall not collect and hold onto personal information unnecessary to an identified purpose,” such a precautionary limitation would certainly satisfy her desire to avoid hypothetical worst-case outcomes because, as she noted, “information that is not collected in the first place can’t be misused,” but it is equally true that information that is never collected may never lead to serendipitous data discoveries or new products and services that could offer consumers concrete benefits. “The socially beneficial uses of data made possible by data analytics are often not immediately evident to data subjects at the time of data collection,” notes Ken Wasch, president of the Software & Information Industry Association. If academics and lawmakers succeed in imposing such precautionary rules on the development of IoT and wearable technologies, many important innovations may never see the light of day.

FTC Commissioner Josh Wright issued a dissenting statement to the report that lambasted the staff for not conducting more robust cost-benefit analysis of the new proposed restrictions, and specifically cited how problematic the agency’s approach to data minimization was. “[S]taff merely acknowledges it would potentially curtail innovative uses of data. . . [w]ithout providing any sense of the magnitude of the costs to consumers of foregoing this innovation or of the benefits to consumers of data minimization,” he says. Similarly, in her separate statement, FTC Commissioner Maureen K. Ohlhausen worried about the report’s overly precautionary approach on data minimization when noting that, “without examining costs or benefits, [the staff report] encourages companies to delete valuable data — primarily to avoid hypothetical future harms. Even though the report recognizes the need for flexibility for companies weighing whether and what data to retain, the recommendation remains overly prescriptive,” she concludes.

Regardless, the battle lines have been drawn by the FTC staff report as the agency has made it clear that it will be stepping up its efforts to get IoT innovators to significantly slow or scale back their data collection efforts. It will be very interesting to see how the agency enforces that vision going forward and how it impacts innovation in this space. All I know is that the agency has not conducted a serious evaluation here of the trade-offs associated with such restrictions. I penned another law review article last year offering “A Framework for Benefit-Cost Analysis in Digital Privacy Debates” that they could use to begin that process if they wanted to get serious about it.

The Problem with the “Regulation Builds Trust” Argument

One of the interesting things about this and previous FTC reports on privacy and security matters is how often the agency premises the case for expanded regulation on “building trust.” The argument goes something like this (as found on page 51 of the new IoT report): “Staff believes such legislation will help build trust in new technologies that rely on consumer data, such as the IoT. Consumers are more likely to buy connected devices if they feel that their information is adequately protected.”

This is one of those commonly heard claims that sounds so straightforward and intuitive that few dare question it. But there are problems with the logic of the “we-need-regulation-to-build-trust-and-boost-adoption” arguments we often hear in debates over digital privacy.

First, the agency bases its argument mostly on polling data. “Surveys also show that consumers are more likely to trust companies that provide them with transparency and choices,” the report says. Well, of course surveys say that! It’s only logical that consumers will say this, just as they will always say they value privacy and security more generally when asked. You might as well ask people if they love their mothers!

But what consumers claim to care about and what they actually do in the real-world are often two very different things. In the real-world, people balance privacy and security alongside many other values, including choice, convenience, cost, and more. This leads to the so-called “privacy paradox,” or the problem of many people saying one thing and doing quite another when it comes to privacy matters. Put simply, people take some risks — including some privacy and security risks — in order to reap other rewards or benefits. (See this essay for more on the problem with most privacy polls.)

Second, online activity and the Internet of Things are both growing like gangbusters despite the privacy and security concerns that the FTC raises. Virtually every metric I’ve looked at that tracks IoT activity shows astonishing growth and product adoption, and projections by all the major consultancies that have studied this consistently predict continued rapid growth of IoT activity. Now, how can this be the case if, as the FTC claims, we’ll only see the IoT really take off after we get more regulation aimed at bolstering consumer trust? Of course, the agency might argue that the IoT will grow at an even faster clip than it is right now, but there is no way to prove that one way or the other. In any event, the agency cannot possibly claim that the IoT isn’t already growing at a very healthy clip — indeed, a lot of the hand-wringing the staff engages in throughout the report is premised precisely on the fact that the IoT is exploding faster than our ability to keep up with it! In reality, it seems far more likely that cost and complexity are the bigger impediments to faster IoT adoption, just as cost and complexity have always been the factors weighing most heavily on the adoption of other digital technologies.

Third, let’s say that the FTC is correct – and it is – when it says that a certain amount of trust is needed in terms of IoT privacy and security before consumers are willing to use more of these devices and services in their everyday lives. Does the agency imagine that IoT innovators don’t know that? Are markets and consumers completely irrational? The FTC says on page 44 of the report that, “If a company decides that a particular data use is beneficial and consumers disagree with that decision, this may erode consumer trust.” Well, if such a mismatch does exist, then the assumption should be that consumers can and will push back, or seek out new and better options. And other companies should be able to sense the market opportunity here to offer a more privacy-centric offering for those consumers who demand it in order to win their trust and business.

Finally, and perhaps most obviously, the problem with the argument that increased regulation will help IoT adoption is that it ignores how the regulations put in place to achieve greater “trust” might become so onerous or costly in practice that there won’t be as many innovations for us to adopt to begin with! Again, regulation — even very well-intentioned regulation — has costs and trade-offs.

In any event, if the agency is going to premise the case for expanded privacy regulation on this notion, it is going to have to do far more to make that case than simply assert it.

Once Again, No Appreciation of the Potential for Societal Adaptation

Let’s briefly shift to a subject that isn’t discussed in the FTC’s new IoT report at all.

Regular readers may get tired of me making this point, but I feel it is worth stressing again: Major reports and statements by public policymakers about rapidly evolving emerging technologies are always initially prone to stress panic over patience. Rarely are public officials willing to step back, take a deep breath, and consider how a resilient citizenry might adapt to new technologies as they gradually assimilate new tools into their lives.

That is really sad, when you think about it, since humans have again and again proven capable of responding to technological change in creative ways by adopting new personal and social norms. I won’t belabor the point because I’ve already written volumes on this issue elsewhere. I tried to condense all my work into a single essay entitled, “Muddling Through: How We Learn to Cope with Technological Change.” Here’s the key takeaway:

humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Again, you almost never hear regulators or lawmakers discuss this process of individual and social adaptation even though they must know there is something to it. One explanation is that every generation has its own techno-boogeymen and loses faith in humanity’s ability to adapt to them.

To believe that we humans are resilient, adaptable creatures should not be read as being indifferent to the significant privacy and security challenges associated with any of the new technologies in our lives today, including IoT technologies. Overly exuberant techno-optimists are often too quick to adopt a “Just-Get-Over-It!” attitude in response to the privacy and security concerns raised by others. But it is equally unforgivable for those who are worried about those same concerns to utterly ignore the reality of human adaptation to new technological realities.

Why are Educational Approaches Merely an Afterthought?

One final thing that troubled me about the FTC report was the way consumer and business education is mostly an afterthought. This is one of the most important roles that the FTC can and should play in terms of explaining potential privacy and security vulnerabilities to the general public and product developers alike.

Alas, the agency devotes so much ink to the more legalistic questions about how to address these issues that all we end up with in the report is this one paragraph on consumer and business education:

Consumers should understand how to get more information about the privacy of their IoT devices, how to secure their home networks that connect to IoT devices, and how to use any available privacy settings. Businesses, and in particular small businesses, would benefit from additional information about how to reasonably secure IoT devices. The Commission staff will develop new consumer and business education materials in this area.

I applaud that language, and I very much hope that the agency is serious about plowing more effort and resources into developing new consumer and business education materials in this area. But I’m a bit shocked that the FTC report didn’t even bother mentioning the excellent material already available on the “On Guard Online” website it helped create with a dozen other federal agencies. Worse yet, the agency failed to highlight the many other privacy education and “digital citizenship” efforts that are underway today to help on this front. I discuss those efforts in more detail in the closing section of my recent law review article.

I hope that the agency spends a little more time working on the development of new consumer and business education materials in this area instead of trying to figure out how to craft a quasi-regulatory regime for the Internet of Things. As I noted last year in this Maine Law Review article, that would be a far more productive use of the agency’s expertise and resources. I argued there that “policymakers can draw important lessons from the debate over how best to protect children from objectionable online content” and apply them to debates about digital privacy. Specifically, after a decade of searching for legalistic solutions to online safety concerns — and convening a half-dozen blue ribbon task forces to study the issue — we finally saw a rough consensus emerge that no single “silver-bullet” technological solution or legal quick fix would work and that, ultimately, education and empowerment represented the better use of our time and resources. What was true for child safety is equally true for privacy and security for the Internet of Things.

It’s a shame the FTC staff squandered the opportunity it had with this new report to highlight all the good that could be done by getting more serious about focusing first on those alternative, bottom-up, less costly, and less controversial solutions to these challenging problems. One day we’ll all wake up and realize that we spent a lost decade debating legalistic solutions that were either technically unworkable or politically impossible. Just imagine if all the smart people who are spending their time and energy on those approaches right now were instead busy devising and pushing educational and empowerment-based solutions!

One day we’ll get there. Sadly, if the FTC report is any indication, that day is still a ways off.

Television is competitive. Congress should end mass media industrial policy.
Tue, 27 Jan 2015

Congress is considering reforming television laws and solicited comment from the public last month. On Friday, I submitted a letter encouraging the reform effort. I attached the paper Adam and I wrote last year about the current state of video regulations and the need for eliminating the complex rules for television providers.

As I say in the letter, excerpted below, pay TV (cable, satellite, and telco-provided) is quite competitive, as the chart of pay TV market share below illustrates. In addition to pay TV there are broadcast, Netflix, Sling, and other providers. Consumers have many choices, and the old industrial policy for mass media encourages rent-seeking and prevents markets from evolving.

[Chart: Pay TV Market Share]

Dear Chairman Upton and Chairman Walden:

Thank you for the opportunity to respond to the Committee’s December 2014 questions on video regulation.

…The labyrinthine communications and copyright laws governing video distribution are now distorting the market and therefore should be made rational. Congress should avoid favoring some distributors at the expense of free competition. Instead, policy should encourage new entrants and consumer choice.

The focus of the committee’s white paper on how to “foster” various television distributors, while understandable, was nonetheless misguided. Such an inquiry will likely lead to harmful rules that favor some companies and programmers over others, based on political whims. Congress and the FCC should get out of “fostering” the video distribution markets completely. A light-touch regulatory approach will prevent the damaging effects of lobbying for privilege and will ensure the primacy of consumer choice.

Some of the white paper’s questions may actually lead policy astray. Question 4, for instance, asks how we should “balance consumer welfare and the rights of content creators” in video markets. Congress should not pursue this line of inquiry too far. Just consider an analogous question: how do we balance consumer welfare and the interests of content creators in literature and written content? The answer is plain: we don’t. It’s bizarre to even contemplate.

Congress does not currently regulate the distribution markets of literature and written news and entertainment. Congress simply gives content producers copyright protection, which is generally applicable. The content gets aggregated and distributed on various platforms through private ordering via contract. Congress does not, as in video, attempt to keep competitive parity between competing distributors of written material: the Internet, paperback publishers, magazine publishers, books on tape, newsstands, and the like. Likewise, Congress should forego any attempt at “balancing” in video content markets. Instead, eliminate top-down communications laws in favor of generally applicable copyright laws, antitrust laws, and consumer protection laws.

As our paper shows, the video distribution marketplace has changed drastically. From the 1950s to the 1990s, cable was essentially consumers’ only option for pay TV. Those days are long gone, and consumers now have several television distributors and substitutes to choose from. From close to 100 percent market share of the pay TV market in the early 1990s, cable now has about 50 percent of the market. Consumers can choose popular alternatives like satellite- and telco-provided television as well as smaller players like wireless carriers, online video distributors (such as Netflix and Sling), wireless Internet service providers (WISPs), and multichannel video and data distribution service (MVDDS or “wireless cable”). As many consumers find Internet over-the-top television adequate, and pay TV an unnecessary expense, “free” broadcast television is also finding new life as a distributor.

The New York Times reported this month that “[t]elevision executives said they could not remember a time when the competition for breakthrough concepts and creative talent was fiercer” (“Aiming to Break Out in a Crowded TV Landscape,” January 11, 2015). As media critics will attest, we are living in the golden age of television. Content is abundant and Congress should quietly exit the “fostering competition” game. Whether this competition in television markets came about because of FCC policy or in spite of it (likely both), the future of television looks bright, and the old classifications no longer apply. In fact, the old “silo” classifications stand in the way of new business models and consumer choice.

Therefore, Congress should (1) merge the FCC’s responsibilities with the Federal Trade Commission or (2) abolish the FCC’s authority over video markets entirely and rely on antitrust agencies and consumer protection laws in television markets. New Zealand, the Netherlands, Denmark, and other countries have merged competition and telecommunications regulators. Agency merger streamlines competition analyses and prevents duplicative oversight.

Finally, instead of fostering favored distribution channels, Congress’ efforts are better spent on reforms that make it easier for new entrants to build distribution infrastructure. Such reforms increase jobs, increase competition, expand consumer choice, and lower consumer prices.

Thank you for initiating the discussion about updating the Communications Act. Reform can give America’s innovative telecommunications and mass-media sectors a predictable and technology neutral legal framework. When Congress replaces industrial planning in video with market forces, consumers will be the primary beneficiaries.

Sincerely,

Brent Skorup
Research Fellow, Technology Policy Program
Mercatus Center at George Mason University

The government sucks at cybersecurity
Tue, 20 Jan 2015

Originally posted at Medium.

The federal government is not about to let last year’s rash of high-profile security failures at private organizations like Home Depot, JP Morgan, and Sony Pictures Entertainment go to waste; it is seizing on them to expand its influence over digital activities.

Last week, President Obama proposed a new round of cybersecurity policies that would, among other things, compel private organizations to share more sensitive information about information security incidents with the Department of Homeland Security. This endeavor to revive the spirit of CISPA is only the most recent in a long line of government attempts to nationalize and influence private cybersecurity practices.

But the federal government is one of the last organizations that we should turn to for advice on how to improve cybersecurity policy.

Don’t let policymakers’ talk of getting tough on cybercrime fool you. Their own network security is embarrassing to the point of parody and has been getting worse for years, despite the billions of dollars spent on the problem.

[Chart: federal cybersecurity spending and information security breaches]

The chart above comes from a new analysis on federal information security incidents and cybersecurity spending by me and my colleague Eli Dourado at the Mercatus Center.

The chart uses data from the Congressional Research Service and the Government Accountability Office. The green bars, measured on the left-hand axis, display total federal cybersecurity spending required by the Federal Information Security Management Act of 2002; the blue line, measured on the right-hand axis, displays the total number of reported information security incidents on federal systems from 2006 to 2013. The chart shows that the number of federal cybersecurity failures has increased every year since 2006, even as investments in cybersecurity processes and systems have increased considerably.
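For readers curious how such a dual-axis chart is assembled, here is a rough matplotlib sketch. It uses only the data points quoted in this post; the published chart plots every fiscal year from 2006 to 2013:

```python
# Rough sketch of the dual-axis chart: spending as green bars on the left
# axis, incident counts as a blue line on the right axis. Only the data
# points quoted in this post are included; other years are omitted.
import matplotlib.pyplot as plt

spend_years = [2009, 2010, 2012, 2013]
spend_billions = [7.4, 12.8, 14.8, 10.3]      # FISMA spending (OMB)
incident_years = [2006, 2013]
incident_counts = [5_503, 61_214]             # reported incidents (GAO)

fig, ax_spend = plt.subplots()
ax_spend.bar(spend_years, spend_billions, color="green")
ax_spend.set_xlabel("Fiscal year")
ax_spend.set_ylabel("FISMA cybersecurity spending ($ billions)")

ax_incidents = ax_spend.twinx()  # second y-axis on the right-hand side
ax_incidents.plot(incident_years, incident_counts, color="blue", marker="o")
ax_incidents.set_ylabel("Reported federal information security incidents")

plt.show()
```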

In 2002, the federal government created an explicit goal for itself to modernize and strengthen its cybersecurity infrastructure by the end of that decade with the passage of the Federal Information Security Management Act (FISMA). FISMA required agency leaders to develop and implement information security protections with the guidance of offices like the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), and the Department of Homeland Security (DHS)—some of the same organizations tasked with coordinating information-sharing about cybersecurity threats with the private sector in Obama’s proposal, by the way—and authorized robust federal investments in IT infrastructure to meet these goals.

The chart is striking, but a quick data note on the spending numbers is in order. Both the dramatic increase in FISMA spending from $7.4 billion in FY 2009 to $12.8 billion in FY 2010 and the dramatic decrease in FISMA spending from $14.8 billion in FY 2012 to $10.3 billion in FY 2013 are partially attributable to OMB’s decision to change its FISMA spending calculation methodology in those years.

Even with this caveat on inter-year spending comparisons, the chart shows that the federal government has invested billions of dollars to improve its internal cybersecurity defenses in recent years. Altogether, the OMB reports that the federal government spent $78.8 billion on FISMA cybersecurity investments from FY 2006 to FY 2013.

(And this is just cybersecurity spending authorized through FISMA. When added to the various other authorizations on cybersecurity spending tucked in other federal programs, the breadth of federal spending on IT preparedness becomes staggering indeed.)

However, increased federal spending on cybersecurity is not reflected in the rate of cyberbreaches of federal systems reported by the GAO. The number of reported federal cybersecurity incidents increased by an astounding 1012% over the selected years, from 5,503 in 2006 to 61,214 in 2013.

Yes, 1012%. That’s not a typo.
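The arithmetic is easy to check against the GAO figures quoted above; a quick sketch:

```python
# Percentage increase in reported federal security incidents, 2006 to 2013.
incidents_2006, incidents_2013 = 5_503, 61_214
increase = (incidents_2013 - incidents_2006) / incidents_2006 * 100
print(f"{increase:.0f}%")  # prints: 1012%
```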

[Chart: reported federal information security incidents involving personally identifiable information]

What’s worse, a growing number of these federal cybersecurity failures involve the potential exposure of personally identifiable information—private data about individuals’ contact information, addresses, and even Social Security numbers and financial accounts.

The second chart displays the proportion of all reported federal information security incidents that involved the exposure of personally identifiable information from 2009 to 2013. By 2013, over 40 percent of all reported cybersecurity failures involved the potential exposure of private data to outside groups.

It is hard to argue that these failures stem from a lack of adequate security investment. This is as much a problem of scale as it is of an inability to follow one’s own directions. In fact, the government’s own Government Accountability Office has been sounding the alarm about poor information security practices since 1997. After FISMA was implemented to address the problem, government employees promptly proceeded to ignore or undermine the provisions that would improve security — rendering the “solution” merely another checkbox on the bureaucrat’s list of meaningless tasks.

The GAO reported in April of 2014 that federal agencies systematically fail to meet federal security standards due to poor implementation of key FISMA practices outlined by the OMB, NIST, and DHS. After more than a decade of billion dollar investments and government-wide information sharing, in 2013 “inspectors general at 21 of the 24 agencies cited information security as a major management challenge for their agency, and 18 agencies reported that information security control deficiencies were either a material weakness or significant deficiency in internal controls over financial reporting.”

This weekend’s POLITICO report on lax federal security practices makes it easy to see how ISIS could hack into the CENTCOM Twitter account:

Most of the staffers interviewed had emailed security passwords to a colleague or to themselves for convenience. Plenty of offices stored a list of passwords for communal accounts like social media in a shared drive or Google doc. Most said they individually didn’t think about cybersecurity on a regular basis, despite each one working in an office that dealt with cyber or technology issues. Most kept their personal email open throughout the day. Some were able to download software from the Internet onto their computers. Few could remember any kind of IT security training, and if they did, it wasn’t taken seriously.

“It’s amazing we weren’t terribly hacked, now that I’m thinking back on it,” said one staffer who departed the Senate late this fall. “It’s amazing that we have the same password for everything [like social media.]”

Amazing, indeed.

What’s also amazing is the gall that the federal government has in attempting to butt its way into assuming more power over cybersecurity policy when it can’t even get its own house in order.

While cybersecurity vulnerabilities and data breaches remain a considerable problem in the private sector as well as the public sector, policies that failed to protect the federal government’s own information security are unlikely to magically work when applied to private industry. The federal government’s own poor track record of increasing data breaches and exposures of personally identifiable information renders its systems a dubious safehouse for the huge amounts of sensitive data affected by the proposed legislation.

President Obama is expected to make cybersecurity policy a key platform issue in tonight’s State of the Union address. Given his own shop’s pathetic track record in protecting its network security, one has to question both the efficacy of his proposals and the reasoning behind them. The federal government should focus on properly securing its own IT systems before trying to expand its control over private systems.

Striking a Sensible Balance on the Internet of Things and Privacy
Fri, 16 Jan 2015

This week, the Future of Privacy Forum (FPF) released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” which I believe can help us find policy consensus regarding the privacy and security concerns associated with the Internet of Things (IoT) and wearable technologies. I’ve been monitoring IoT policy developments closely and I recently published a big working paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will appear shortly in the Richmond Journal of Law & Technology. I have also penned several other essays on IoT issues. So, I will be relating the FPF report to some of my own work.

The new FPF report, which was penned by Christopher Wolf, Jules Polonetsky, and Kelsey Finch, aims to accomplish the same goal I had in my own recent paper: sketching out constructive and practical solutions to the privacy and security issues associated with the IoT and wearable tech so as not to discourage the amazing, life-enriching innovations that could flow from this space. Flexibility is the key, they argue. “Premature regulation at an early stage in wearable technological development may freeze or warp the technology before it achieves its potential, and may not be able to account for technologies still to come,” the authors note. “Given that some uses are inherently more sensitive than others, and that there may be many new uses still to come, flexibility will be critical going forward.” (p. 3)

That flexible approach is at the heart of how the FPF authors want to see Fair Information Practice Principles (FIPPs) applied in this space. The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. The FPF authors correctly note that,

The FIPPs do not establish specific rules prescribing how organizations should provide privacy protections in all contexts, but rather provide high-level guidelines. Over time, as technologies and the global privacy context have changed, the FIPPs have been presented in different ways with different emphases. Accordingly, we urge policymakers to enable the adaptation of these fundamental principles in ways that reflect technological and market developments. (p. 4)

They go on to explain how each of the FIPPs can provide a certain degree of general guidance for the IoT and wearable tech, but also caution that: “A rigid application of the FIPPs could inhibit these technologies from even functioning, and while privacy protections remain essential, a degree of flexibility will be key to ensuring the Internet of Things can develop in ways that best help consumer needs and desires.” (p. 4) And throughout the report, the FPF authors stress the need for the FIPPs to be “practically applied” and nicely explain how the appropriate application of any particular FIPP “will depend on the circumstances.” For those reasons, they conclude by saying, “we urge policymakers to adopt a forward-thinking, flexible application of the FIPPs.” (p. 11)

The approach that Wolf, Polonetsky, and Finch set forth in this new FPF report is very much consistent with the policy framework I sketched out in my forthcoming law review article. “The need for flexibility and adaptability will be paramount if innovation is to continue in this space,” I argued. In essence, best practices need to remain just that: best practices, not fixed, static, top-down regulatory edicts. As I noted:

Regardless of whether they will be enforced internally by firms or by ex post FTC enforcement actions, best practices must not become a heavy-handed, quasi-regulatory straitjacket. A focus on security and privacy by design does not mean those are the only values and design principles that developers should focus on when innovating. Cost, convenience, choice, and usability are all important values too. In fact, many consumers will prioritize those values over privacy and security — even as activists, academics, and policymakers simultaneously suggest that more should be done to address privacy and security concerns.

Finally, best practices for privacy and security issues will need to evolve as social acceptance of various technologies and business practices evolve. For example, had “privacy by design” been interpreted strictly when wireless geolocation capabilities were first being developed, these technologies might have been shunned because of the privacy concerns they raised. With time, however, geolocation technologies have become a better understood and more widely accepted capability that consumers have come to expect will be embedded in many of their digital devices.  Those geolocation capabilities enable services that consumers now take for granted, such as instantaneous mapping services and real-time traffic updates.

This is why flexibility is crucial when interpreting the privacy and security best practices.

The only thing I think was missing from the FPF report was a broader discussion of other constructive privacy and security solutions that involve education, etiquette, and empowerment. I would also like to have seen some discussion of how other existing legal mechanisms — privacy torts, contractual enforcement mechanisms, property rights, state “peeping Tom” laws, and existing privacy statutes — might cover some of the hard cases that could develop on this front. I discuss those and other “bottom-up” solutions in Section IV of my law review article and note that they can contribute to the sort of “layered” approach we need to address privacy and security concerns for the IoT and wearable tech.

In any event, I encourage everyone to check out the new Future of Privacy Forum report as well as the many excellent best practice guidelines they have put together to help innovators adopt sensible privacy and security best practices. FPF has done some great work on this front.

Additional Reading

Again, We Humans Are Pretty Good at Adapting to Technological Change
Fri, 16 Jan 2015

Claire Cain Miller of The New York Times posted an interesting story yesterday noting how, “Technology Has Made Life Different, but Not Necessarily More Stressful.” Her essay builds on a new study by researchers at the Pew Research Center and Rutgers University on “Social Media and the Cost of Caring.” Miller’s essay and this new Pew/Rutgers study indirectly make a point that I am always discussing in my own work, but that is often ignored or downplayed by many technological critics, namely: We humans have repeatedly proven quite good at adapting to technological change, even when it entails some heartburn along the way.

The major takeaway of the Pew/Rutgers study was that, “social media users are not any more likely to feel stress than others, but there is a subgroup of social media users who are more aware of stressful events in their friends’ lives and this subgroup of social media users does feel more stress.” Commenting on the study, Miller of the Times notes:

Fear of technology is nothing new. Telephones, watches and televisions were similarly believed to interrupt people’s lives and pressure them to be more productive. In some ways they did, but the benefits offset the stressors. New technology is making our lives different, but not necessarily more stressful than they would have been otherwise. “It’s yet another example of how we overestimate the effect these technologies are having in our lives,” said Keith Hampton, a sociologist at Rutgers and an author of the study.  . . .  Just as the telephone made it easier to maintain in-person relationships but neither replaced nor ruined them, this recent research suggests that digital technology can become a tool to augment the relationships humans already have.

I found this of great interest because I have written about how humans assimilate new technologies into their lives and become more resilient in the process as they learn various coping techniques. I elaborated on these issues in a lengthy essay last summer entitled,  “Muddling Through: How We Learn to Cope with Technological Change.” I borrowed the term “muddling through” from Joel Garreau’s terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human.  Garreau argued that history can be viewed “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”

Garreau associated this with what he called the “Prevail” scenario and he contrasted it with the “Heaven” scenario, which believes that technology drives history relentlessly, and in almost every way for the better, and the “Hell” scenario, which always worries that “technology is used for extreme evil, threatening humanity with extinction.” Under the “Prevail” scenario, Garreau argued, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he concluded. (p. 154) Or, as John Seely Brown and Paul Duguid noted in their excellent 2001, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:

technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.

In my essay last summer, I sketched out the reasons why I think this “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process. Again, it comes down to the fact that people and institutions learn to cope with technological change and become more resilient over time. It’s a learning process, and we humans are good at rolling with the punches and finding new baselines along the way. While “muddling through” can sometimes be quite difficult and messy, we adjust to most of the new technological realities we face and, over time, find constructive solutions to the really hard problems.

So, while it’s always good to reflect on the challenges of life in an age of never-ending, rapid-fire technological change, there’s almost never cause for panic. Read my old essay for more discussion on why I remain so optimistic about the human condition.

Regulatory Capture: FAA and Commercial Drones Edition
Fri, 16 Jan 2015

Regular readers know that I can get a little feisty when it comes to the topic of “regulatory capture,” which occurs when special interests co-opt policymakers or political bodies (regulatory agencies, in particular) to further their own ends. As I noted in my big compendium, “Regulatory Capture: What the Experts Have Found”:

While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity.  Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism.

Indeed, the more I highlight the problem of regulatory capture and offer concrete examples of it in practice, the more push-back I get from true believers in the idea of “independent” agencies. Even if I can get them to admit that history offers countless examples of capture in action, and that a huge number of scholars of all persuasions have documented this problem, they will continue to insist that WE CAN DO BETTER! and that it is just a matter of having THE RIGHT PEOPLE! who will TRY HARDER!

Well, maybe. But I am a realist and a believer in historical evidence. And the evidence shows, again and again, that when Congress (a) delegates broad, ambiguous authority to regulatory agencies, (b) exercises very limited oversight over that agency, and then, worse yet, (c) allows that agency’s budget to grow without any meaningful constraint, then the situation is ripe for abuse. Specifically, where unchecked power exists, interests will look to exploit it for their own ends.

In any event, all I can do is continue to document the problem of regulatory capture in action and try to bring it to the attention of pundits and policymakers in the hope that we can start the push for real agency oversight and reform. Today’s case in point comes from a field I have been covering here a lot over the past year: commercial drone innovation.

Yesterday, via his Twitter account, Wall Street Journal reporter Christopher Mims brought this doozy of an example of regulatory capture to my attention, which involves Federal Aviation Administration officials going to bat for the pilots who frequently lobby the agency and want commercial drone innovations constrained. Here’s how Jack Nicas begins the WSJ piece that Mims brought to my attention:

In an unfolding battle over U.S. skies, it’s man versus drone. Aerial surveyors, photographers and moviemaking pilots are increasingly losing business to robots that often can do their jobs faster, cheaper and better. That competition, paired with concerns about midair collisions with drones, has made commercial pilots some of the fiercest opponents to unmanned aircraft. And now these aviators are fighting back, lobbying regulators for strict rules for the devices and reporting unauthorized drone users to authorities. Jim Williams, head of the Federal Aviation Administration’s unmanned-aircraft office, said many FAA investigations into commercial-drone flights begin with tips from manned-aircraft pilots who compete with those drones. “They’ll let us know that, ’Hey, I’m losing all my business to these guys. They’re not approved. Go investigate,’” Mr. Williams said at a drone conference last year. “We will investigate those.”

Well, that pretty much says it all. If you’re losing business because an innovative new technology or pesky new entrant has the audacity to come onto your turf and compete, well then, just come on down to your friendly neighborhood regulator and get yourself a double serving of tasty industry protectionism!

And so the myth of “agency independence” continues, and perhaps it will never die. It reminds me of a line from those rock-and-roll sages in Guns N’ Roses: “I’ve worked too hard for my illusions just to throw them all away!”

Dispatches from CES 2015 on Privacy Implications of New Technologies http://techliberation.com/2015/01/15/dispatches-from-ces-2015-on-privacy-implications-of-new-technologies/ http://techliberation.com/2015/01/15/dispatches-from-ces-2015-on-privacy-implications-of-new-technologies/#comments Thu, 15 Jan 2015 19:22:30 +0000 http://techliberation.com/?p=75266

Over at the International Association of Privacy Professionals (IAPP) Privacy Perspectives blog, I have two “Dispatches from CES 2015” up (#1 & #2). While I was out in Vegas for the big show, I had a chance to speak on a panel entitled “Privacy and the IoT: Navigating Policy Issues.” (Video can be found here; it’s the second one on the video playlist.) Federal Trade Commission (FTC) Chairwoman Edith Ramirez kicked off that session and stressed some of the concerns she and others share about the Internet of Things and wearable technologies in terms of the privacy and security issues they raise.

Before and after our panel discussion, I had a chance to walk the show floor and take a look at the amazing array of new gadgets and services that will soon be hitting the market. A huge percentage of the show floor space was dedicated to IoT technologies, and wearable tech in particular. But the show also featured many other amazing technologies that promise to bring consumers a wealth of new benefits in coming years. Of course, many of those technologies will also raise privacy and security concerns, as I noted in my two essays for IAPP. The first of my dispatches focuses primarily on the Internet of Things and wearable technologies that I saw at CES. In my second dispatch, I discuss the privacy and security implications of the increasing miniaturization of cameras, drone technologies, and various robotic technologies (especially personal care robots).

I open the first column by noting that “as I was walking the floor at this year’s massive CES 2015 tech extravaganza, I couldn’t help but think of the heartburn that privacy professionals and advocates will face in coming years.” And I close the second dispatch by concluding that, “The world of technology is changing rapidly and so, too, must the role of the privacy professional. The technologies on display at this year’s CES 2015 make it clear that a whole new class of concerns are emerging that will require IAPP members to broaden their issue set and find constructive solutions to the many challenges ahead.” Jump over to the Privacy Perspectives blog to read more.

Trouble Ahead for Municipal Broadband http://techliberation.com/2015/01/14/trouble-ahead-for-municipal-broadband/ http://techliberation.com/2015/01/14/trouble-ahead-for-municipal-broadband/#comments Wed, 14 Jan 2015 21:02:34 +0000 http://techliberation.com/?p=75254

President Obama recently announced his wish for the FCC to preempt state laws that make building public broadband networks harder. Per the White House, nineteen states “have held back broadband access . . . and economic opportunity” by having onerous restrictions on municipal broadband projects.

Much of what the White House claims is PR nonsense. Most of these so-called state restrictions on public broadband are reasonable considering the substantial financial risk public networks pose to taxpayers. Minnesota and Colorado, for instance, require approval from local voters before spending money on a public network. Nevada’s “restriction” is essentially that public broadband is only permitted in the neediest, most rural parts of the state. Some states don’t allow utilities to provide broadband because utilities have a nasty habit of raising, say, everyone’s electricity bills when a money-losing utility broadband network fails to live up to revenue expectations. And so on.

It is an abuse of the English language for political activists to say municipal broadband is just a competitor to existing providers. If the federal government dropped over $100 million in a small city to build publicly-owned grocery stores with subsidized food, local grocery stores would, of course, strenuously object that this is patently unfair and harms private grocers. This is what the US government did in Chattanooga, using $100 million to build a public network. The US government has spent billions on broadband, and much of it goes to public broadband networks. The activists’ response to the carriers, who obviously complain about this “competition,” is essentially, “maybe now you’ll upgrade and compete harder.” It’s absurd on its face.

Public networks are unwise and costly. Every dollar diverted to some money-losing public network is one less to use on worthy societal needs. There are serious problems with publicly-funded retail broadband networks. A few come to mind:

1. The economic benefits of municipal broadband are dubious. A recent Mercatus economics paper by researcher Brian Deignan showed disappointing results for municipal broadband. The paper uses 23 years of BLS data from 80 cities that have deployed broadband and analyzes municipal broadband’s effect on 1) quantity of businesses; 2) employee wages; and 3) employment. Ultimately, the data suggest municipal broadband has almost zero effect on the private sector.

On the plus side, municipal broadband is associated with a 3 percent increase in the number of business establishments in a city. However, there is a small, negative effect on employee wages. There is no effect on private employment but the existence of a public broadband network is associated with a 6 percent increase in local government employment. The substantial taxpayer risk for such modest economic benefits leads many states to reasonably conclude these projects aren’t worth the trouble.

2. There are serious federalism problems with the FCC preempting state laws. Matthew Berry, FCC Commissioner Pai’s chief of staff, explains the legal risks. Cities are creatures of state law and states have substantial powers to regulate what cities do. In some circumstances, Congress can preempt state laws, but as the Supreme Court has held, for an agency to preempt state laws, Congress must provide a clear statement that the FCC is authorized to preempt. Absent a clear statement from Congress, it’s unlikely the FCC could constitutionally preempt state laws regulating municipal broadband.

3. Broadband networks are hard work. Tearing up streets, attaching to poles, and wiring homes, condos, and apartments is expensive and time-consuming. It costs thousands of dollars per home passed, and take-up rates are uncertain (a rough back-of-envelope sketch of this math follows the list below). Truck rolls for routine maintenance and customer service cost hundreds of dollars a pop. Additionally, broadband network design is growing increasingly complex as several services converge on IP networks. Interconnection requires complex commercial agreements. Further, carriers are starting to offer additional services using software-defined networks and network function virtualization. I’m skeptical that city managers can stay cutting-edge years into the future. The costs of failed networks will fall to taxpayers.

4. City governments are just not very good at supplying triple-play services, as the Phoenix Center and others have pointed out. People want phone, Internet, and television in one bill (and don’t forget video-on-demand service). Cities will often find that there is little interest in a broadband connection that doesn’t also provide traditional television. Google Fiber (not a public network, obviously) initially intended to offer only broadband service. However, it quickly found out that potential subscribers wanted their broadband and video bundled together into one contract. The Google Fiber team had to scramble to put together the TV packages consumers are accustomed to. If the very competent planners at Google Fiber weren’t aware of this consumer habit, the city planners in Moose Lake and Peoria budgeting for municipal broadband may miss it, too. Further, city administrators are not particularly good at negotiating competitive video bundles (municipal cable revealed this) because of their small size and lack of expertise.

5. A municipal network can chase away commercial network expansion and investment. This, of course, is the main complaint of the cable and telco players. If there is a marginal town an ISP is considering serving or upgrading, the presence of a “public competitor” makes the decision easy. Competing against a network with ready access to taxpayer money is senseless.

6. When cities build networks where ISPs are already serving the public, ISPs do not take it lying down, either. ISPs use their considerable size and industry expertise to their advantage, like adding must-have channels to basic cable packages. The economics are particularly difficult for a city entering the market. Broadband networks have high up-front costs but fairly low marginal costs, which makes price reductions by incumbents very attractive as a way to limit customer defections to the entrant. Dropping prices or raising speeds in neighborhoods where the city builds, thereby frustrating the city’s customer acquisition, is a common practice. Apparently some cities didn’t learn their lesson in the late 1990s, when municipal cable was a (short-lived) popular idea. Cities often hemorrhaged tax dollars when faced with hardball tactics, and their penetration rates never reached the optimistic projections.
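To see how quickly the numbers in item 3 compound, here is a minimal back-of-envelope payback sketch in Python. Every figure is a hypothetical placeholder chosen for illustration, not a cost from any actual city’s project:

```python
# Back-of-envelope payback math for a hypothetical municipal fiber build.
# Every number below is an assumed placeholder, not a figure from a real project.

homes_passed = 50_000
cost_per_home_passed = 1_500       # build cost per home passed, in dollars (assumed)
take_rate = 0.30                   # share of passed homes that subscribe (assumed)
monthly_revenue_per_sub = 70.0     # dollars per subscriber per month (assumed)
monthly_opex_per_sub = 40.0        # support, truck rolls, programming, etc. (assumed)

build_cost = homes_passed * cost_per_home_passed
subscribers = homes_passed * take_rate
annual_margin = subscribers * (monthly_revenue_per_sub - monthly_opex_per_sub) * 12
years_to_payback = build_cost / annual_margin

print(f"Build cost:       ${build_cost:,.0f}")
print(f"Subscribers:      {subscribers:,.0f}")
print(f"Annual margin:    ${annual_margin:,.0f}")
print(f"Years to payback: {years_to_payback:.1f}")
```

Under these toy assumptions, recovering the build cost alone takes roughly 14 years of operating margin, before financing costs or mid-life equipment upgrades. A modest shortfall in the take rate pushes that horizon out even further, which is why the taxpayer risk looms so large.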

There are other complications that turn public broadband into an expensive boondoggle. People often say in surveys that they would pay more for ultra-fast broadband, but when actually offered it, many refuse to pay higher prices for higher speeds, particularly when the TV channels offered in the bundle are paltry compared to those of the “slower” existing providers. When cities do lose money, and they often do, a utility-run broadband network will often cross-subsidize the failing broadband service; electric utility customers’ dollars are then diverted to maintaining broadband. Further, private carriers can drag out lawsuits to prevent city networks. And your run-of-the-mill city-contractor corruption and embezzlement are also possibilities.

I can imagine circumstances where municipal broadband makes sense. However, the President and the FCC are doing the public a disservice by promoting widespread publicly-funded broadband in violation of state laws. This political priority, combined with the probable Title II order next month, signals an inauspicious start to 2015.

Making Sure the “Trolley Problem” Doesn’t Derail Life-Saving Innovation http://techliberation.com/2015/01/13/making-sure-the-trolley-problem-doesnt-derail-life-saving-innovation/ http://techliberation.com/2015/01/13/making-sure-the-trolley-problem-doesnt-derail-life-saving-innovation/#comments Tue, 13 Jan 2015 18:07:16 +0000 http://techliberation.com/?p=75238

I want to highlight an important new blog post (“Slow Down That Runaway Ethical Trolley“) on the ethical trade-offs at work with autonomous vehicle systems by Bryant Walker Smith, a leading expert on these issues. Writing over at Stanford University’s Center for Internet and Society blog, Smith notes that, while serious ethical dilemmas will always be present with such technologies, “we should not allow the perfect to be the enemy of the good.” He notes that many ethical philosophers, legal theorists, and media pundits have recently been actively debating variations of the classic “Trolley Problem,” and its ramifications for the development of autonomous or semi-autonomous systems. (Here’s some quick background on the Trolley Problem, a thought experiment involving the choices made during various no-win accident scenarios.) Commenting on the increased prevalence of the Trolley Problem in these debates, Smith observes that:

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

That’s a great question and one that Ryan Hagemann and I put some thought into as part of our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential cost of delaying their widespread adoption. When it comes to “Trolley Problem”-like ethical questions, Hagemann and I argue that “these ethical considerations need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue later in the paper:

Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)

None of this should be read to suggest that the ethical issues being raised by some philosophers or other pundits are unimportant. To the contrary, they are raising legitimate concerns about how ethics are “baked-in” to the algorithms that control autonomous or semi-autonomous systems. It is vital we continue to debate the wisdom of the choices made by the companies and programmers behind those technologies and consider better ways to inform and improve their judgments about how to ‘optimize the sub-optimal,’ so to speak. After all, when you are making decisions about how to minimize the potential for harm — including the loss of life — there are many thorny issues that must be considered and all of them will have downsides. Smith considers a few when he notes:

Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.

Again, these are all valid questions deserving serious exploration, but we’re not having this discussion in a vacuum. Ivory Tower debates cannot be divorced from real-world realities. Although road safety has been improving for many years, people are still dying at a staggering rate due to vehicle-related accidents. Specifically, in 2012, there were 33,561 total traffic fatalities (92 per day) and 2,362,000 people injured (6,454 per day) in over 5,615,000 reported crashes. And, to reiterate, the bulk of those accidents were due to human error.

That is a staggering toll and anything we can do to reduce it significantly is something we need to be pursuing with great vigor, even while we continue to sort through some of those challenging ethical issues associated with automated systems and algorithms. Smith argues, correctly in my opinion, that “a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. … [T]his simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.”
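Smith’s weighted-rules approach lends itself to a simple formalization. Below is a toy sketch in Python of what weighting general rules of behavior might look like; the rule names, weights, and perception inputs are all hypothetical illustrations, not anyone’s actual control logic:

```python
# A toy sketch of "weighting general rules of behavior" for an automated vehicle.
# Rule names, weights, and perception inputs are hypothetical illustrations only.

CANDIDATE_ACTIONS = ["brake_hard", "swerve_left", "swerve_right", "stay_in_lane"]

RULE_WEIGHTS = {
    "decelerate": 1.0,
    "avoid_humans": 10.0,    # weighted far above every other rule
    "avoid_obstacles": 5.0,
    "stay_in_lane": 2.0,
}

def score(action, perception):
    """Weighted sum of how well `action` satisfies each rule (0 violates, 1 satisfies).

    Unlisted (action, rule) pairs default to a neutral 0.5, reflecting the
    continuing uncertainty Smith describes: the vehicle never has perfect
    information about other drivers, passengers, or outcomes.
    """
    return sum(weight * perception.get((action, rule), 0.5)
               for rule, weight in RULE_WEIGHTS.items())

def choose_action(perception):
    # Pick the maneuver that best satisfies the weighted rules, given noisy estimates.
    return max(CANDIDATE_ACTIONS, key=lambda action: score(action, perception))

# Toy emergency: obstacle dead ahead, a pedestrian off to the left.
perception = {
    ("stay_in_lane", "avoid_obstacles"): 0.0,  # staying put hits the obstacle
    ("swerve_left", "avoid_humans"): 0.0,      # swerving left endangers the pedestrian
    ("brake_hard", "decelerate"): 1.0,
    ("brake_hard", "avoid_obstacles"): 0.8,    # braking probably clears the obstacle
}
print(choose_action(perception))  # -> "brake_hard" under these toy inputs
```

The design never tries to solve a “who to kill” dilemma; it simply picks the candidate maneuver that best satisfies the weighted rules given uncertain sensor estimates, accepting some failures in exchange for simple, predictable behavior.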

Quite right. Indeed, the next time someone poses an ethical thought experiment along the lines of the Trolley Problem, do what I do and reverse the equation. Ask them about the ethics of slowing down the introduction of a technology that could significantly lower the nearly 100 deaths and more than 6,000 injuries that vehicle-related accidents cause each day in the United States. Because that’s no hypothetical thought experiment; that’s the world we live in right now.

______________

(P.S. The late, great political scientist Aaron Wildavsky crafted a framework for considering these complex issues in his brilliant 1988 book, Searching for Safety. No book has had a more significant influence on my thinking about these and other “risk trade-off” issues since I first read it 25 years ago. I cannot recommend it highly enough. I discussed Wildavsky’s framework and vision in my recent little book on “Permissionless Innovation.” Readers might also be interested in my August 2013 essay, “On the Line between Technology Ethics vs. Technology Policy,” which featured an exchange with ethical philosopher Patrick Lin, co-editor of an excellent collection of essays on Robot Ethics: The Ethical and Social Implications of Robotics. You should add that book to your shelf if you are interested in these issues.)


How the FCC Killed a Nationwide Wireless Broadband Network http://techliberation.com/2015/01/09/how-the-fcc-killed-a-nationwide-wireless-broadband-network/ http://techliberation.com/2015/01/09/how-the-fcc-killed-a-nationwide-wireless-broadband-network/#comments Fri, 09 Jan 2015 19:52:27 +0000 http://techliberation.com/?p=75222

Many readers will recall the telecom soap opera featuring the GPS industry and LightSquared, and LightSquared’s subsequent bankruptcy. Economist Thomas W. Hazlett (who is now at Clemson, after a long tenure at the GMU School of Law) and I wrote an article published in the Duke Law & Technology Review titled Tragedy of the Regulatory Commons: Lightsquared and the Missing Spectrum Rights. The piece documents LightSquared’s ambitions and dramatic collapse. Contrary to popular reporting on this story, this was not a failure of technology. We make the case that, instead, the FCC’s method of rights assignment led to the demise of LightSquared and deprived American consumers of a new nationwide wireless network. Our analysis has important implications as the FCC and Congress seek to make wide swaths of spectrum available for unlicensed devices. Namely, our paper suggests that the top-down administrative planning model is increasingly harming consumers and delaying new technologies.

Read commentary from the GPS community about LightSquared and you’ll get the impression LightSquared is run by rapacious financiers (namely CEO Phil Falcone) who were willing to flout FCC rules and endanger thousands of American lives with their proposed LTE network. LightSquared filings, on the other hand, paint the GPS community as defense-backed dinosaurs who abused the political process to protect their deficient devices from an innovative entrant. As is often the case, it’s more complicated than these morality plays. We don’t find villains in this tale–simply destructive rent-seeking triggered by poor FCC spectrum policy.

We avoid assigning fault to either LightSquared or GPS, but we stipulate that there were serious interference problems between LightSquared’s network and GPS devices. Interference is not an intractable problem, however; it is resolved every day in other circumstances. The problem here was intractable because GPS users are dispersed and unlicensed (including government users), and could not coordinate and bargain with LightSquared when problems arose. There is no feasible way for GPS companies to track down users and compel them to adopt more efficient devices, for instance, even if LightSquared compensated them for the hassle. Knowing that GPS mitigation was infeasible, LightSquared’s only recourse after GPS users objected to the new LTE network was the political and regulatory process, a fight LightSquared lost badly. The biggest losers, however, were consumers, who were deprived of another wireless broadband network because FCC spectrum assignment prevented win-win bargaining between licensees.

Our paper provides critical background to this dispute. Around 2004, because satellite phone spectrum was underused, the FCC permitted satellite phone licensees flexibility to repurpose some of their spectrum for use in traditional cellular phone networks. (Many people are appalled to learn that spectrum policy still largely resembles Soviet-style command-and-control. The FCC tells the wireless industry, essentially: “You can operate satellite phones only in band X. You can operate satellite TV in band Y. You can operate broadcast TV in band Z.” and assigns spectrum to industry players accordingly.) Seeing this underused satellite phone spectrum, LightSquared acquired some of this flexible satellite spectrum so that LightSquared could deploy a nationwide cellular phone network in competition with Verizon Wireless and AT&T Mobility. LightSquared had spent $4 billion in developing its network and reportedly had plans to spend $10 billion more when things ground to a halt.

In early 2012, the Department of Commerce objected to LightSquared’s network on the grounds that the network would interfere with GPS units (including, reportedly, DOD and FAA instruments). Immediately, the FCC suspended LightSquared’s authorization to deploy a cellular network and backtracked on the 2004 rules permitting cellular phones in that band. Three months later, LightSquared declared bankruptcy. This was a non-market failure, not a market failure. This regulatory failure obtains because virtually any interference to existing wireless operations is prohibited even if the social benefits of a new wireless network are vast.

This analysis is not simply scholarly theory about the nature of regulation and property rights. We provide real-world evidence supporting our contention that, had the FCC assigned flexible, de facto property rights to GPS licensees, as it does in some other bands, rather than to fragmented unlicensed users, LightSquared might be in operation today serving millions with wireless broadband. Our evidence comes, in fact, from LightSquared’s deals with non-GPS parties. Namely, LightSquared had interference problems with another satellite licensee on adjacent spectrum–Inmarsat.

Inmarsat provides public safety, aviation, and national security applications and hundreds of thousands of devices to government and commercial users. The LightSquared-Inmarsat interference problems were unavoidable, but because Inmarsat had de facto property rights to its spectrum, it could internalize financial gains and coordinate with LightSquared. The result was classic Coasian bargaining. The two companies swapped spectrum and activated an agreement in 2010 in which LightSquared would pay Inmarsat over $300 million. Flush with cash and spectrum, Inmarsat could rationalize its spectrum holdings and replace devices that wouldn’t play nicely with LightSquared’s LTE operations.
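The logic of that deal is easy to make concrete. The numbers below are purely hypothetical (the only figure reported is LightSquared’s $300-million-plus payment); they illustrate the Coasian bargaining range, that is, the set of payments that leaves both parties better off:

```python
# Hypothetical numbers illustrating a Coasian bargaining range; the only
# reported figure in the actual deal is LightSquared's $300-million-plus payment.

inmarsat_mitigation_cost = 250e6    # assumed cost to swap spectrum and replace devices
lightsquared_network_value = 600e6  # assumed value to LightSquared of unimpeded LTE

# Any payment P with mitigation_cost < P < network_value makes both parties
# better off, so when rights are clear and holders are few, a deal gets done.
low, high = inmarsat_mitigation_cost, lightsquared_network_value
print(f"Mutually beneficial payments: ${low:,.0f} to ${high:,.0f}")
print(f"Reported $300M+ payment in range: {low < 300e6 <= high}")
```

With millions of dispersed, unlicensed GPS users, by contrast, there is no counterparty with whom to strike such a bargain, so the transaction simply cannot happen, whatever the potential gains from trade.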

These trades avoided the non-market failure the FCC produced by giving GPS users fragmented, non-exclusive property rights. When de facto property rights are assigned to licensees, contentious spectrum border disputes typically give way to private ordering. The result is regular spectrum swaps and sales between competitors. Wireless licensees like Verizon, AT&T, Sprint, and T-Mobile deal with local interference and unauthorized operations daily because they have enforceable, exclusive rights to their spectrum. The FCC, unfortunately, never assigned these kinds of spectrum rights to the GPS industry.

The evaporation of billions of dollars of LightSquared funds was a non-market failure, not a market failure and not a technology failure. The economic loss to consumers was even greater than LightSquared’s. Different FCC rules could have permitted welfare-enhancing coordination between LightSquared and GPS. The FCC’s error was the nature of the rights the agency assigned for GPS use. By authorizing the use of millions of unlicensed devices adjacent to LightSquared’s spectrum, the FCC virtually ensured that future attempts to reallocate spectrum in these bands would prove contentious. Going forward, the FCC should think far less about which technologies it wants to promote and more about the nature of the spectrum rights it assigns. For tech entrepreneurs and policy entrepreneurs to create innovative new wireless products, they need well-functioning spectrum markets. The GPS experience shows vividly what to avoid.

My Writing on Internet of Things (Thus Far) http://techliberation.com/2015/01/05/my-writing-on-internet-of-things-thus-far/ http://techliberation.com/2015/01/05/my-writing-on-internet-of-things-thus-far/#comments Mon, 05 Jan 2015 16:55:41 +0000 http://techliberation.com/?p=75210

I’ve spent much of the past year studying the potential public policy ramifications associated with the rise of the Internet of Things (IoT). As I was preparing some notes for my Jan. 6th panel discussion on “Privacy and the IoT: Navigating Policy Issues” at this year’s CES 2015 show, I went back and collected all my writing on IoT issues so that I would have everything in one place. Thus, down below I have listed most of what I’ve done over the past year or so. Most of this writing is focused on the privacy and security implications of the Internet of Things, and wearable technologies in particular.

I plan to stay on top of these issues in 2015 and beyond because, as I noted when I spoke on a previous CES panel on these issues, the Internet of Things finds itself at the center of what we might think of as a perfect storm of public policy concerns: privacy, safety, security, intellectual property, economic and labor disruptions, automation concerns, wireless spectrum issues, technical standards, and more. When a new technology raises one or two of these policy concerns, innovators in those sectors can expect some interest and inquiries from lawmakers or regulators. But when a new technology potentially touches all of these issues, innovators in that space can expect an avalanche of attention and a potential world of regulatory trouble. Moreover, it sets the stage for a grand “clash of visions” about the future of IoT technologies that will continue to intensify in coming months and years.

That’s why I’ll be monitoring developments closely in this field going forward. For now, here’s what I’ve done on this issue as I prepare to head out to Las Vegas for another CES extravaganza that promises to showcase so many exciting IoT technologies.

Hack Hell http://techliberation.com/2014/12/31/hack-hell/ http://techliberation.com/2014/12/31/hack-hell/#comments Wed, 31 Dec 2014 19:24:58 +0000 http://techliberation.com/?p=75160

2014 was quite the year for high-profile hackings and puffed-up politicians trying to out-ham each other on who is tougher on cybercrime. I thought I’d assemble some of the year’s worst hits to ring in 2015.

In no particular order:

Home Depot: The 2013 Target breach that leaked around 40 million customer financial records was unceremoniously topped by Home Depot’s breach of over 56 million payment cards and 53 million email addresses in July. Both companies fell prey to similar infiltration tactics: the hackers obtained passwords from a vendor of each retail giant and exploited a vulnerability in the Windows OS to install malware in the firms’ self-checkout lanes that collected customers’ credit card data. Millions of customers became vulnerable to phishing scams and credit card fraud—with the added headache of changing payment card accounts and updating linked services. (Your intrepid blogger was mysteriously locked out of Uber for a harrowing 2 months before realizing that my linked bank account had changed thanks to the Home Depot hack and I had no way to log back in without a tedious customer service call. Yes, I’m still miffed.)

The Fappening: 2014 was a pretty good year for creeps, too. Without warning, the prime celebrity booties of popular starlets like Scarlett Johansson, Kim Kardashian, Kate Upton, and Ariana Grande mysteriously flooded the Internet in the September event crudely immortalized as “The Fappening.” Apple quickly jumped to investigate its iCloud system that hosted the victims’ stolen photographs, announcing shortly thereafter that the “celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions” rather than any flaw in its system. The sheer volume of material produced and the caliber of the icons violated suggest this was not the work of a lone wolf, but a chain reaction of leaks collected over time triggered by one larger dump. For what it’s worth, some dude on 4chan claimed the Fappening was the product of an “underground celeb n00d-trading ring that’s existed for years.” While the event prompted a flurry of discussion about online misogyny, content host ethics, and legalistic tugs-of-war over DMCA takedown requests, it unfortunately did not generate a productive conversation about good privacy and security practices like I had initially hoped.

The Snappening: The celebrity-targeted Fappening was followed by the layperson’s “Snappening” in October, when almost 100,000 photos and 10,000 personal videos sent through the popular Snapchat messaging service, some of them including depictions of underage nudity, were leaked online. The hackers did not target Snapchat itself, but instead exploited a third-party client called SnapSave that allowed users to save images and videos that would normally disappear after a certain amount of time on the Snapchat app. (Although Snapchat doesn’t exactly have the best security record anyway: in 2013, contact information for 4.6 million of its users was leaked online before the service landed in hot water with the FTC earlier this year for “deceiving” users about their privacy practices.) The hackers received access to a 13GB library of old Snapchat messages and dumped the images on a searchable online directory. As with the Fappening, discussion surrounding the Snappening tended to prioritize scolding service providers over promoting good personal privacy and security practices to consumers.

Las Vegas Sands Corp.: Not all of these year’s most infamous hacks sought sordid photos or privateering profit. 2014 also saw the rise of the revenge hack. In February, Iranian hackers infiltrated politically-active billionaire Sheldon Adelson’s Sands Casino not for profit or data, but for pure punishment. Adelson, a staunchly pro-Israel figure and partial owner of many Israeli media companies, drew intense Iranian ire after fantasizing about detonating an American nuclear warhead in the Iranian desert as a threat during his speech at Yeshiva University. Hackers released crippling malware into the Sands IT infrastructure early in the year, which proceeded to shut down email services, wipe hard drives clean, and destroy thousands of company computers, laptops, and expensive servers. The Sands website was also hacked to display “a photograph of Adelson chumming around with [Israeli Prime Minister] Netanyahu,” along with the message “Encouraging the use of Weapons of Mass Destruction, UNDER ANY CONDITION, is a Crime,” and a data dump of Sands employees’ names, titles, email addresses, and Social Security numbers. Interestingly, Sands was able to contain the damage internally so that guests and gamblers had no idea of the chaos that was ravaging casino IT infrastructure. Public knowledge of the hack did not serendipitously surface until early December, around the time of the Sony hack. It is possible that other large corporations have suffered similar cyberattacks this year in silence.

JP Morgan: You might think that one of the world’s largest banks would have security systems that are near impossible to crack. This was not the case at JP Morgan. From June to August, hackers infiltrated JP Morgan’s sophisticated security system and siphoned off massive amounts of sensitive financial data. The New York Times reports that “the hackers appeared to have obtained a list of the applications and programs that run on JPMorgan’s computers — a road map of sorts — which they could crosscheck with known vulnerabilities in each program and web application, in search of an entry point back into the bank’s systems, according to several people with knowledge of the results of the bank’s forensics investigation, all of whom spoke on the condition of anonymity.” Some security experts suspect that a nation-state was ultimately behind the infiltration due to the sophistication of the attack and the fact that the hackers neglected to immediately sell or exploit the data or attempt to steal funds from consumer accounts. The JP Morgan hack set off alarm bells among influential financial and governmental circles since banking systems were largely considered to be safe and impervious to these kinds of attacks.

Sony: What a tangled web this was! On November 24, Sony employees were greeted by the mocking grin of a spooky screen skeleton and informed they had been “Hacked by the #GOP” and that there was more to come. It was soon revealed that Sony’s email and computer systems had been infiltrated and shut down while some 100 terabytes of data had been stolen. The hackers proceeded to leak embarrassing company information, including emails in which executives made racial jokes, compensation data revealing a considerable gender wage disparity, and unreleased studio films like Annie and Mr. Turner. We also learned about “Project Goliath,” a conspiracy among the MPAA and six major studios (Universal, Sony, Fox, Paramount, Warner Bros., and Disney) to revive the spirit of SOPA and attack piracy on the web “by working with state attorneys general and major ISPs like Comcast to expand court power over the way data is served.” (Goliath was their not-exactly-subtle codeword for Google.) Somewhere along the way, a few folks got the wild notion that North Korea was behind this attack because of the nation’s outrage at the latest Rogen romp, The Interview. Most cybersecurity experts doubt that the hermit nation was behind the attack, although the official KCNA statement enthusiastically “supports the righteous deed.” The absurdity of the official narrative did not prevent most of our world-class journalistic and political establishment from running with the story and beating the drums of cyberwar. Even the White House and FBI goofed. The FBI and State Department still maintain North Korean culpability, even as research compiled by independent security analysts points more and more to a collection of disgruntled former Sony employees and independent lulz-seekers. Troublingly, the Obama administration publicly entertained cyberwar countermeasures against the troubled communist nation on such slim evidence. A few days later, the Internet in North Korea was mysteriously shut down. I wonder what might have caused that? Truly a mess all around.

LizardSquad: Speaking of Sony hacks, the spirit of LulzSec is alive in LizardSquad. On Christmas day, the black hat collective knocked out Sony’s Playstation network and Microsoft’s Xbox servers with a massive distributed denial of service (DDoS) attack, to the great vengeance and furious anger of gamers avoiding family gatherings across the country. These guys are not your average script-kiddies. NexusGuard chief scientist Terrence Gareau warns that the unholy lizards boast an arsenal that far exceeds normal DDoS attacks. This seems right, given the apparent difficulty that giants Sony and Microsoft had in responding to the attacks. For their part, LizardSquad claims the strength of their attack exceeded the previous record against Cloudflare this February. Megaupload Internet lord Kim Dotcom swooped in to save gamers’ Christmas festivities with a little bit of information age, uh, “justice.” The attacks were allegedly called off after Dotcom offered the hacking collective 3,000 Mega vouchers (normally worth $99 each) for his content hosting empire if they agreed to cease. The FBI is investigating the lizards for the attacks. LizardSquad then turned their attention to the Tor network, creating thousands of new relays and comprising a worrying portion of the network’s roughly 8,000 relays in an effort to unmask users. Perhaps they mean to publicize the network’s vulnerabilities? The group’s official Twitter bio reads, “I cry when Tor deserves to die.” Could this be related to the recent Pando-Tor drama that reinvigorated skepticism of Tor? As with any online brouhaha involving clashing numbers of privacy-obsessed computer whizzes with strong opinions, this incident has many hard-to-read layers (sorry!). While the Tor campaign is still developing, LizardSquad has been keeping busy with its newly launched Lizard Stresser, a DDoS-for-hire tool that anyone can use for a small fee. These lizards appear very intent on making life as difficult as possible for the powerful parties they’ve identified as enemies and will provide some nice justifications for why governments need more power to crack down on cybercrime.

What a year! I wonder what the next one will bring.

One sure bet for 2015 is increasing calls for enhanced regulatory powers. Earlier this year, Eli and I wrote a Mercatus Research paper explaining why top-down solutions to cybersecurity problems can backfire and make us less secure. We specifically analyzed President Obama’s developing Cybersecurity Framework, but the issues we discuss apply to other rigid regulatory solutions as well. On December 11, in the midst of North Korea’s red herring debut in the Sony debacle, the Senate passed the Cybersecurity Act of 2014, which contains many of the same principles outlined in the Framework. The Act, which still needs House approval, strengthens the Department of Homeland Security’s role in controlling cybersecurity policy by directing DHS to create industry cybersecurity standards and begin routine information-sharing with private entities.

Ranking Member of the Senate Homeland Security Committee, Tom Coburn, had this to say: “Every day, adversaries are working to penetrate our networks and steal the American people’s information at a great cost to our nation. One of the best ways that we can defend against cyber attacks is to encourage the government and private sector to work together and share information about the threats we face.”

While the problems of poor cybersecurity and increasing digital attacks are undeniable, the solutions proposed by politicians like Coburn are dubious. The federal government should probably try to get its own house in order before it undertakes to save the cyberproperties of the nation. The Government Accountability Office reports that the federal government suffered almost 61,000 cyber attacks and data breaches last year. The DHS itself was hacked in 2012, while a 2013 GAO report criticized DHS for poor security practices, finding that “systems are being operated without authority to operate; plans of action and milestones are not being created for all known information security weaknesses or mitigated in a timely manner; and baseline security configuration settings are not being implemented for all systems.” GAO also reports that when federal agencies develop cybersecurity practices like those encouraged in the Cybersecurity Framework or the Cybersecurity Act of 2014, they are inconsistently and insufficiently implemented.

Given the federal government’s poor track record managing its own system security, we shouldn’t expect miracles when they take a leadership role for the nation.

Another trend to watch will be the development of a more robust cybersecurity insurance market. The Wall Street Journal reports that 2014’s rash of hacking attacks stimulated sales of formerly-obscure cyberinsurance packages.

The industry had suffered in the past from its novelty and a lack of historical data with which to accurately price policies. This year, demand has been sufficiently stimulated, and actuaries have become familiar enough with the relevant risks, that the practice has finally become mainstream. Policies can cover “the costs of [data breach] investigations, customer notifications and credit-monitoring services, as well as legal expenses and damages from consumer lawsuits” and “reimbursement for loss of income and extra expenses resulting from suspension of computer systems, and provide payments to cover recreation of databases, software and other assets that were corrupted or destroyed by a computer attack.” As the market matures, cybersecurity insurers may start more actively assessing firms’ digital vulnerabilities and recommending improvements to their systems in exchange for a lower premium payment, as is common in other insurance markets.
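As a rough illustration of how that pricing works, here is a minimal actuarial sketch in Python. All of the inputs are toy assumptions for illustration, not actual market rates:

```python
# A minimal actuarial sketch of cyberinsurance pricing. All inputs are toy
# assumptions for illustration, not actual market rates.

annual_breach_probability = 0.05   # assumed chance of a covered breach in a year
expected_breach_cost = 2_000_000   # assumed mean cost: forensics, notification, lawsuits
loading_factor = 1.4               # insurer overhead and margin on expected loss (assumed)
security_discount = 0.15           # assumed premium cut for audited security practices

expected_annual_loss = annual_breach_probability * expected_breach_cost
base_premium = expected_annual_loss * loading_factor
discounted_premium = base_premium * (1 - security_discount)

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Base premium:         ${base_premium:,.0f}")
print(f"Discounted premium:   ${discounted_premium:,.0f}")
```

The discount on the last line captures the dynamic described above: once insurers can observe a firm’s security practices, the premium becomes a price signal that rewards better security.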

Still, nothing ever beats good old-fashioned personal responsibility. One of the easiest ways to ensure privacy and security for yourself online is to take the time to learn how to best protect yourself or your business by developing good habits, using the right services, and remaining conscientious about your digital activities. That’s my New Year’s resolution. I think it should be yours, too! :)

Happy New Year’s, all!

The 10 Most-Read Posts of 2014 http://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/ http://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/#comments Tue, 30 Dec 2014 16:36:34 +0000 http://techliberation.com/?p=75156

As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy.

10. New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts.

In July, Jerry Brito wrote about New York’s proposed framework for regulating digital currencies like Bitcoin.

My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.

9. Google Fiber: The Uber of Broadband

In February, I noted some of the parallels between Google Fiber and ride-sharing, in that new entrants are upending the competitive and regulatory status quo to the benefit of consumers.

The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly-situated competitors and resist the temptation to remedy past errors with more distortions.

8. The Debate over the Sharing Economy: Talking Points & Recommended Reading

In September, Adam Thierer appeared on Fox Business Network’s Stossel show to talk about the sharing economy. In a TLF post, he expands upon his televised commentary and highlights five main points.

7. CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?

After attending the 2014 Consumer Electronics Show in January, Adam wrote a prescient post about the promise of the Internet of Things and the regulatory risks ahead.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers…. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.

6. Defining “Technology”

Earlier this year, Adam compiled examples of how technologists and experts define “technology,” with entries ranging from the Oxford Dictionary to Peter Thiel. It’s a slippery exercise, but

if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”

5. The Problem with “Pessimism Porn”

Adam highlights the tendency of tech press, academics, and activists to mislead the public about technology policy by sensationalizing technology risks.

The problem with all this, of course, is that it perpetuates societal fears and distrust. It also sometimes leads to misguided policies based on hypothetical worst-case thinking…. [I]f we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon them—it means that best-case scenarios will never come about.

4. Mark T. Williams predicted Bitcoin’s price would be under $10 by now; it’s over $600

Professor Mark T. Williams predicted in December 2013 that by mid-2014, Bitcoin’s price would fall to below $10. In mid-2014, Jerry commends Prof. Williams for providing, unlike most Bitcoin watchers, a bold and falsifiable prediction about Bitcoin’s value. However, as Jerry points out, that prediction was erroneous: Bitcoin’s 2014 collapse never happened and the digital currency’s value exceeded $600.

3. What Vox Doesn’t Get About the “Battle for the Future of the Internet”

In May, Tim Lee wrote a Vox piece about net neutrality and the Netflix-Comcast interconnection fight. Eli Dourado posted a widely-read and useful corrective to some of the handwringing in the Vox piece about interconnection, ISP market power, and the future of the Internet.

I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless…. There is nothing unseemly about Netflix making … payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).

2. Muddling Through: How We Learn to Cope with Technological Change

The second most-read TLF post of 2014 is also the longest and most philosophical in this top-10 list. Adam wrote a popular and in-depth post about the social effects of technological change and notes that technology advances are largely for consumers’ benefit, yet “[m]odern thinking and scholarship on the impact of technological change on societies has been largely dominated by skeptics and critics.” The nature of human resilience, Adam explains, should encourage a cautiously optimistic view of technological change.

1. Help me answer Senate committee’s questions about Bitcoin

Two days into 2014, Jerry wrote the most-read TLF piece of the past year. Jerry had testified before the Senate Homeland Security and Governmental Affairs Committee in 2013 as an expert on Bitcoin. The Committee requested more information about Bitcoin post-hearing and Jerry solicited comment from our readers.

Thank you to our loyal readers for continuing to visit The Technology Liberation Front. It was a busy year for tech and telecom policy, and 2015 promises to be similarly exciting. Have a happy and safe New Year!

Government Surveillance: Is It Time for Another Church Committee? http://techliberation.com/2014/12/17/government-surveillance-is-it-time-for-another-church-committee/ http://techliberation.com/2014/12/17/government-surveillance-is-it-time-for-another-church-committee/#comments Wed, 17 Dec 2014 21:32:29 +0000 http://techliberation.com/?p=75085

This morning, a group of organizations led by Citizens for Responsibility and Ethics in Washington (CREW), R Street, and the Sunlight Foundation released a public letter to House Speaker John Boehner and Minority Leader Nancy Pelosi calling for enhanced congressional oversight of U.S. national security surveillance policies.

The letter—signed by over fifty organizations, including the Electronic Frontier Foundation, the Competitive Enterprise Institute, and the Brennan Center for Justice at the New York University School of Law, and a handful of individuals, among them Pentagon Papers whistleblower Daniel Ellsberg—expresses deep concerns about the expansive scope and limited accountability of the intelligence activities and agencies famously exposed by whistleblower Edward Snowden in 2013. The letter states:

Congress is responsible for authorizing, overseeing, and funding these programs. In recent years, however, the House of Representatives has not always effectively performed its duties.

The time for modernization is now. When the House convenes for the 114th Congress in January and adopts rules, the House should update them to enhance opportunities for oversight by House Permanent Select Committee on Intelligence (“HPSCI”) members, members of other committees of jurisdiction, and all other representatives. The House should also consider establishing a select committee to review intelligence activities since 9/11. We urge the following reforms be included in the rules package.

The proposed modernization reforms include:

1) modernizing HPSCI membership to more accurately reflect House interests by allowing chairs and ranking members of other committees with intelligence jurisdiction to select a designee on HPSCI;

2) allowing each HPSCI Member to designate a staff member of his or her choosing to represent their interests on the committee, as is the practice in the Senate;

3) making all unclassified intelligence reports quickly available to the public;

4) improving the speed and transparency of HPSCI responsiveness to member requests for information; and

5) improving general HPSCI transparency by better informing members of relevant activities like upcoming closed hearings, legislative markups, and other committee business.

The groups also urge reforms to empower all members of Congress to be informed of and involved with executive intelligence agencies’ activities. They are:

1) making all communications from the executive branch available to all Members unless the sender explicitly indicates otherwise;

2) reaffirming Members’ ability to access, review, and publicly discuss materials that are classified by the executive branch but already available to the public, as is the case with the Snowden leaks. Members should feel comfortable discussing this kind of information without fear of reprimand;

3) providing Members with at least one staff member with access to classified information through a Top Secret/Special Compartmented Information (TS/SCI) clearance;

4) allowing Members to speak with whistleblowers without fear of reprisal; and

5) improving training for Members and staff on how to handle classified information and conduct effective congressional oversight of classified matters.

Over at the CREW blog, Daniel Schuman provides more context on the problems these groups seek to address:

Members of Congress rely on staff to do a lot of work, but most staff working on intelligence issues are not permitted to hold the necessary security clearances to do their jobs. Sometimes, the Intelligence Committee in the House intercepts mail from the executive branch addressed to all members of Congress. That same committee sits on unclassified reports, refusing to make them available to the public. Briefings provided by the intelligence community are announced for inconvenient times, do not provide enough detailed information, and members of Congress often are not allowed to take notes on what was said.

The executive branch has 666,000 employees with top secret/SCI clearance and 541,000 contractors with top secret/SCI clearance, and yet oftentimes members of Congress are not permitted to talk with one another about their briefings. Members of Congress are not allowed to publicly speak about—and staff may not read—classified information that has been published in the newspaper or on the internet. This makes no sense for the deliberative body that was designed as a check on executive power.

While these proposed reforms aim to improve congressional oversight through common-sense changes and clarifications in House procedure and committee structure, they address only the oversight failures we have gleaned so far from our limited knowledge of the byzantine maze of surveillance agency activities. The picture painted by the little knowledge we do have is not pretty. An associated white paper presenting the reforms in more detail notes:

The last decade-and-a-half has witnessed major intelligence community failures. From the inability to connect the dots on 9/11 to false claims about weapons of mass destruction in Iraq, from the unlawful commission of torture to the inability to predict the Arab spring, from lying to Congress about the NSA to CIA surveillance of Senate staff, the intelligence community has a credibility gap. Moreover, with recent revelations about secret government activities, to the apparent surprise of many members of Congress, it is increasingly clear that Congress has not engaged in effective oversight of the intelligence community.

To get a fuller picture of the extent of the problem, the letter proposes that the House establish a select committee to conduct a distinct, broad-based review of the activities of the intelligence community after 9/11. Similar committees have been assembled in the past to address previous shortcomings:

The last time so many revelations of government misdeeds came to light in news reports, Congress reacted by forming two special committees to investigate intelligence community activities. The reports by the Church and Pike Committees led to wholesale reforms of the intelligence community, including improving congressional oversight mechanisms.

The magnitude of current revelations and intelligence community failures leads to this conclusion: the House (and Senate) must establish a distinct, broad-based review of the activities of the intelligence community since 9/11. The House should establish a committee modeled after the Church or Pike Committees, provide it adequate staffing and financial support, and give it a broad mandate to review intelligence community activities, engage in public reporting wherever possible, and issue recommendations for reform.

The Church and Pike Committees of the 1970s were products of a decade of explosive revelations of government surveillance run amok. The white paper cites a 1974 New York Times exclusive by Seymour Hersh revealing that the CIA had been inspecting the mail, telephone communications, and residences of tens of thousands of uncharged private citizens since the 1950s. Earlier that year, allegations that the U.S. Army had been conducting illegal surveillance of American citizens were confirmed by Senator Sam Ervin’s military surveillance hearings. In 1975, a bombshell NSA investigation published by the Times reported that the then largely unknown intelligence agency “eavesdrops on virtually all cable, Telex, and other nontelephone communications leaving and entering the United States” and “uses computers to sort out and obtain intelligence from the contents” in the now-infamous Project Shamrock. The revealed executive abuses of the Nixon administration capped off a growing distrust of and anger at surreptitious U.S. surveillance practices.

Today is another era of outrageous whistleblower reports and rapidly dwindling trust in U.S. surveillance bodies. A mere 24 percent of Americans reported that they trust the government to “do the right thing” most of the time in a 2013 Rasmussen poll. (A minuscule 4 percent of your fellow Pollyanna patriots trust Uncle Sam all of the time.) Meanwhile, technological advances have allowed U.S. intelligence agencies a greater degree of potential (and, as Snowden revealed, actual) surveillance than ever before. This gap between trust and power simply cannot continue indefinitely.

While not without their problems, the Church and Pike Committees are noteworthy milestones in reclaiming congressional accountability over executive intelligence agencies run amok. Creating a new committee to comprehensively assess current surveillance agency activities, warts and all, and to recommend accountability measures for the unknown excesses that likely lurk in the shadows would be a step in the right direction toward beating back the tentacles of unlawful government surveillance.

But if there’s one thing we’ve learned from the fruits of the 1970s committees—namely, the Foreign Intelligence Surveillance Act (FISA) of 1978—it’s that what once served as a check on government abuses may one day become a party to them. For example, the Foreign Intelligence Surveillance Court (FISC) that FISA established to provide critical oversight of federal spying programs is today hobbled by inadequate tools for verifying whether surveillance programs are lawful.

Imposing accountability on agencies whose missions are devoted to secrecy is a tough nut to crack. Our history of struggling with this challenge suggests that these proposed reforms are good preliminary steps. But watching the watchers will remain a perpetual duty.

]]>
http://techliberation.com/2014/12/17/government-surveillance-is-it-time-for-another-church-committee/feed/ 0
The Underwhelming Economic Effects of Municipal Broadband http://techliberation.com/2014/12/15/the-underwhelming-economic-effects-of-municipal-broadband/ http://techliberation.com/2014/12/15/the-underwhelming-economic-effects-of-municipal-broadband/#comments Mon, 15 Dec 2014 20:52:10 +0000 http://techliberation.com/?p=75127

The FCC is currently considering ways to make municipal broadband projects easier to deploy, an exercise that has drawn substantial criticism from Republicans, who passed a bill to prevent FCC preemption of state laws. Today the Mercatus Center released a policy analysis of municipal broadband projects, titled Community Broadband, Community Benefits? An Economic Analysis of Local Government Broadband Initiatives. Its author is Brian Deignan, an alumnus of the Mercatus Center MA Fellowship, who has written an excellent empirical paper about the economic effects of publicly funded broadband.

It’s remarkable how little empirical research there is on municipal broadband investment, despite years of federal data and billions of dollars in federal investment (notably, the American Recovery and Reinvestment Act). This dearth of research is in part because muni broadband proponents, as Brian points out, expressly downplay the relevance of economic evidence and suggest that the primary social benefits of muni broadband cannot be measured using traditional metrics. The current “research” about muni broadband, pro- and anti-, tends to consist of unfalsifiable generalizations extrapolated from cherry-picked examples. (There are several successes and failures, depending on your point of view.)

Brian’s paper provides researchers a great starting point when they attempt to answer an increasingly important policy question: What is the economic impact of publicly-funded broadband? Brian uses 23 years of BLS data from 80 cities that have deployed broadband and analyzes muni broadband’s effect on 1) quantity of businesses; 2) employee wages; and 3) employment.

In short, the economic effects of muni broadband appear to be modest. Brian’s economic models show that municipal broadband is associated with a 3 percent increase in the number of business establishments in a city. However, there is a small, negative effect on employee wages (perhaps as firms substitute technology for employee hours?). There is no effect on private employment, but the existence of a public broadband network increases local government employment by about 6 percent.
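
For readers curious about the mechanics behind an estimate like this, the standard technique for asking “what changed in cities that built networks, net of local traits and nationwide trends?” is a two-way fixed-effects panel regression. The sketch below is purely illustrative: the file name, column names, and specification are hypothetical stand-ins for the general approach, not Deignan’s actual model.

```python
# Minimal sketch of a two-way fixed-effects panel regression (hypothetical
# data; this shows the general technique, not the paper's specification).
import pandas as pd
import statsmodels.formula.api as smf

# One row per city-year. Assumed columns: 'city', 'year',
# 'muni_broadband' (1 once a public network is live, else 0), and
# 'log_establishments' (log count of business establishments).
df = pd.read_csv("city_year_panel.csv")

# City fixed effects absorb time-invariant local traits; year fixed effects
# absorb nationwide shocks. The remaining muni_broadband coefficient is the
# within-city association between a public network and the outcome.
model = smf.ols(
    "log_establishments ~ muni_broadband + C(city) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["city"]})

# With a log outcome, a coefficient near 0.03 reads as roughly a
# 3 percent difference in establishment counts.
print(model.params["muni_broadband"])
```

Swapping in log wages or employment as the dependent variable would give analogous estimates for the paper’s other two outcomes within the same framework.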

In a research area filled with advocacy, this is a much-needed rigorous analysis and a great update to the research that does exist. The muni broadband fights will continue, but hopefully both sides will make use of the economic data out there. Given the amount of direct federal investment, some positive effects were inevitable and Brian’s paper suggests where those effects show up (quantity of businesses and local government employment). Still, it seems that there are more cost-effective ways of improving local business development and jobs.

I suspect, and the research suggests, that the detrimental effect on private investment (and taxpayers) likely outweighs these ambiguous economic effects. Unlike city-provided utilities such as water and sewer, broadband infrastructure requires regular network upgrades, and consumers often prefer broadband bundled with TV and phone, which cities have a harder time providing. But on this subject, as scholars like to say about difficult issues, more research is needed.

]]>
http://techliberation.com/2014/12/15/the-underwhelming-economic-effects-of-municipal-broadband/feed/ 0
Nominees for The Best & Worst Tech Policy Essays of 2014 http://techliberation.com/2014/12/15/nominees-for-the-best-worst-tech-policy-essays-of-2014/ http://techliberation.com/2014/12/15/nominees-for-the-best-worst-tech-policy-essays-of-2014/#comments Mon, 15 Dec 2014 19:34:54 +0000 http://techliberation.com/?p=74083

Over the course of the year, I collect some of my favorite (and least favorite) tech policy essays and put them together in an end-of-year blog post so I will remember notable essays in the future. (Here’s my list from 2013.) Here are some of the best tech policy essays I read in 2014 (in chronological order).

  • Joel Mokyr – “The Next Age of Invention,” City Journal, Winter 2014. (An absolutely beautiful refutation of the technological pessimism that haunts our age. Mokyr concludes by noting that, “technology will continue to develop and change human life and society at a rate that may well dwarf even the dazzling developments of the twentieth century. Not everyone will like the disruptions that this progress will bring. The concern that what we gain as consumers, viewers, patients, and citizens, we may lose as workers is fair. The fear that this progress will create problems that no one can envisage is equally realistic. Yet technological progress still beats the alternatives; we cannot do without it.” Mokyr followed it up with a terrific August 8 Wall Street Journal op-ed, “What Today’s Economic Gloomsayers Are Missing.”)
  • Michael Moynihan – “Can a Tweet Put You in Prison? It Certainly Will in the UK,” The Daily Beast, January 23, 2014. (Great essay on the right and wrong way to fight online hate. Here’s the kicker: “There is a presumption that ugly ideas are contagious and if the already overburdened police force could only disinfect the Internet, racism would dissipate. This is arrant nonsense.”)
  • Hanni Fakhoury – “The U.S. Crackdown on Hackers Is Our New War on Drugs,” Wired, January 23, 2014. (“We shouldn’t let the government’s fear of computers justify disproportionate punishment. . . . It’s time for the government to learn from its failed 20th century experiment over-punishing drugs and start making sensible decisions about high-tech punishment in the 21st century.”)
  • Carole Cadwalladr – “Meet Cody Wilson, Creator of the 3D-gun, Anarchist, Libertarian,” Guardian/Observer, February 8, 2014. (Entertaining profile of one of the modern digital age’s most fascinating characters. “There are enough headlines out there which ask: Is Cody Wilson a terrorist? Though my favourite is the one that asks: ‘Cody Wilson: troll, genius, patriot, provocateur, anarchist, attention whore, gun nut or Second Amendment champion.’ Though it could have added, ‘Or b) all of the above?'”)

And my nominees for Worst Tech Policy Essays of 2014 go to:

 

]]>
http://techliberation.com/2014/12/15/nominees-for-the-best-worst-tech-policy-essays-of-2014/feed/ 0
The MPAA still doesn’t get it http://techliberation.com/2014/12/15/the-mpaa-still-doesnt-get-it/ http://techliberation.com/2014/12/15/the-mpaa-still-doesnt-get-it/#comments Mon, 15 Dec 2014 18:23:02 +0000 http://techliberation.com/?p=75111

Last week, two very interesting events happened in the world of copyright and content piracy. First, the Pirate Bay, the infamous torrent hosting site, was raided by police and removed from the Internet. Pirate Bay co-founder Peter Sunde (who was no longer involved with the project) expressed his indifference to the raid; there was no soul left in the site, he said, and in any case, he is “pretty sure the next thing will pan out.”

Second, a leaked trove of emails from the Sony hack showed that the MPAA continues to pursue its dream of blocking websites that contribute to copyright infringement. With the failure of SOPA in 2012, the lobbying organization has pivoted to trying to accomplish the same ends through other means, including paying for state attorneys general to attack Google for including some of these sites in its index. Over at TechDirt, Mike Masnick argues that some of this activity may have been illegal.

I’ll leave the illegality of the MPAA’s lobbying strategy for federal prosecutors to sort out, but like some others, I am astonished by how out of touch with reality the MPAA is. It seems to believe that opposition to SOPA was a fluke, whipped up by Google, which it will be able to neutralize through its “Project Goliath.” And according to a meeting agenda reported on by TorrentFreak, it wants to bring “on board ‘respected’ people in the technology sector to agree on technical facts and establish policy support for site blocking.”

The reality is that opposition to SOPA-style controls remains strong in the tech policy community. The only people in Washington who support censoring the Internet to protect copyright are paid by Hollywood. If, through its generous war chest, the MPAA were able to pay a “respected” tech-sector advocate to build policy support for site blocking, that very fact would cause that person to lose respect.

Moreover, on a technical level, the MPAA is fighting a battle it is sure to lose. As Rick Falkvinge notes, the content industry had a unique opportunity in 1999 to embrace and extend Napster. Instead, it got Napster shut down, which eventually led to decentralized piracy over BitTorrent. Now, it wants to shut down sites that index torrents, but torrent indexes are tiny amounts of data. The whole Pirate Bay index was only 90MB in 2012, and a magnet link for an individual torrent is only a few dozen bytes. Between Bitmessage and projects like Bitmarkets, it seems extremely unlikely that the content industry will ever be able to shut down distribution of torrent data.
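
To see why, it helps to look at what a magnet link actually is: a short URI carrying the torrent’s infohash, from which a client can fetch everything else via the peer-to-peer network. A quick sketch (the infohash below is a fabricated placeholder, not a real torrent):

```python
# A magnet link carries no file data at all -- just an identifier.
# The infohash here is a made-up placeholder.
from urllib.parse import urlparse, parse_qs

magnet = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
          "&dn=example")

params = parse_qs(urlparse(magnet).query)
print(params["xt"][0])  # urn:btih:<hex-encoded 20-byte SHA-1 infohash>
print(len(magnet))      # ~70 characters: the entire "index entry" for a torrent
```

Anything that small can be pasted into a forum post, printed as a QR code, or relayed over a censorship-resistant channel, which is why taking down index sites does so little to stop distribution.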

Instead of fighting this inevitable trend, the MPAA and RIAA should be trying to position themselves well in a world in which content piracy will always be possible. They should make it convenient for customers to access their paid content through bundling deals with companies like Netflix and Spotify. They should accept some background level of content piracy and embrace at least its buzz-generating benefits. They should focus on soft enforcement through systems like six strikes, which more gently nudge consumers to pay for content. And they should explicitly disavow any effort to censor the web—without such a disavowal, they are making enemies not just of tech companies, but of the entire community of tech enthusiasts and policy wonks.

]]>
http://techliberation.com/2014/12/15/the-mpaa-still-doesnt-get-it/feed/ 0
Global Innovation Arbitrage: Genetic Testing Edition http://techliberation.com/2014/12/12/global-innovation-arbitrage-genetic-testing-edition/ http://techliberation.com/2014/12/12/global-innovation-arbitrage-genetic-testing-edition/#comments Sat, 13 Dec 2014 03:48:50 +0000 http://techliberation.com/?p=75086

Earlier this week I posted an essay entitled, “Global Innovation Arbitrage: Commercial Drones & Sharing Economy Edition,” in which I noted how:

Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.

That essay focused on how actions by U.S. policymakers and regulatory agencies threatened to disincentivize homegrown innovation in the commercial drone and sharing economy sectors. But there are many other troubling examples of how America risks losing its competitive advantage in sectors where we should be global leaders, as innovators look offshore. We can think of this as “global innovation arbitrage,” as venture capitalist Marc Andreessen has aptly explained:

Think of it as a sort of “global arbitrage” around permissionless innovation — the freedom to create new technologies without having to ask the powers that be for their blessing. Entrepreneurs can take advantage of the difference between opportunities in different regions, where innovation in a particular domain of interest may be restricted in one region, allowed and encouraged in another, or completely legal in still another.

One of the more vivid recent examples of global innovation arbitrage involves the well-known example of 23andMe, which sells mail-order DNA-testing kits to allow people to learn more about their genetic history and predisposition to various diseases. Unfortunately, the Food and Drug Administration (FDA) is actively thwarting innovation on this front, as SF Gate reporter Stephanie Lee notes in her recent article, “23andMe’s health DNA kits now for sale in U.K., still blocked in U.S.“:

A little more than a year ago, 23andMe, the Google-backed startup that sells mail-order DNA-testing kits, was ordered by U.S. regulators to stop telling consumers about their genetic health risks. The Mountain View company has since tried to regain favor with the Food and Drug Administration, but it’s also started to expand outside the country. As of Tuesday, United Kingdom consumers can buy 23andMe’s saliva kits and learn about their inherited risks of diseases and responses to drugs.

While the FDA drags its feet on this front, however, other countries are ready to open their doors to innovators and their life-enriching products and services:

A spokesperson for the United Kingdom’s Medicines and Healthcare Products Regulatory Agency said the [23andMe] test can be used with caution. […]  “The U.K. is a world leader in genomics and we are very excited to offer a product specifically for U.K. customers,” Anne Wojcicki, 23andMe’s co-founder and CEO, told the BBC. Mark Thomas, a professor of evolutionary genetics at University College London, said in a statement, “For better or worse, direct-to-the-consumer genetic testing companies are here to stay. One could argue the rights and wrongs of such companies existing, but I suspect that ship has sailed.”

That’s absolutely right, even if the FDA wants to bury its head in the sand and pretend it can turn back the clock. The problem is that the longer the FDA pretends it can play by the old command-and-control playbook, the more likely it is that American innovators like 23andMe will look to move offshore and find more hospitable homes for their innovative endeavors.

This is a central lesson that my Mercatus Center colleague Dr. Robert Graboyes stressed in his recent study, Fortress and Frontier in American Health Care. Graboyes noted that if America fails to embrace the “frontier” spirit of innovation — i.e., a policy disposition that embraces creative destruction and disruptive, “permissionless” innovation — then our global competitive advantage in this space is at risk:

Moving health care from the Fortress to the Frontier may be more a matter of necessity than of choice. We are entering a period of rapid technological advances that will radically alter health care. Many of these advances require only modest capital and labor inputs that governments cannot easily control or prohibit. If US law obstructs these technologies here, it will be feasible for Americans to obtain them by Internet, by mail, or by travel. (p. 41-2)

Graboyes highlighted several areas in which this issue will play out going forward beyond genomic information, including: personalized medicine, 3-D printing, artificial intelligence, information sharing via social media, wearable technology, and telemedicine.

As Larry Downes and Paul Nunes noted in a recent Wired editorial, “Regulating 23andMe Won’t Stop the New Age of Genetic Testing”:

The information flood is coming. If not this Christmas season, then one in the near future. Before long, $100 will get you sequencing of not just the million genes 23andMe currently examines, but all of them. Regulators and medical practitioners must focus their attention not on raising temporary obstacles, but on figuring out how they can make the best use of this inevitable tidal wave of information.

American policymakers must accept that reality and adjust their attitudes and policies accordingly, or else we can expect to see even more global innovation arbitrage — and a corresponding loss of national competitiveness — in coming years.

[Note: Our friends over at TechFreedom launched a Change.org petition a while back calling for a reversal of the FDA’s actions.]


 

 

 

]]>
http://techliberation.com/2014/12/12/global-innovation-arbitrage-genetic-testing-edition/feed/ 0
A Nonpartisan Policy Vision for the Internet of Things http://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/ http://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/#comments Thu, 11 Dec 2014 20:07:11 +0000 http://techliberation.com/?p=75076

What sort of public policy vision should govern the Internet of Things? I’ve spent a lot of time thinking about that question in essays here over the past year, as well as in a new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology early next year.

But I recently heard three policymakers articulate their recommended vision for the Internet of Things (IoT) and I found their approach so inspiring that I wanted to discuss it here in the hopes that it will become the foundation for future policy in this arena.

Last Thursday, it was my pleasure to attend a Center for Data Innovation (CDI) event on “How Can Policymakers Help Build the Internet of Things?” As the title implied, the goal of the event was to discuss how to achieve the vision of a more fully-connected world and, more specifically, how public policymakers can help facilitate that objective. It was a terrific event with many excellent panel discussions and keynote addresses.

Two of those keynotes were delivered by Senators Deb Fischer (R-Neb.) and Kelly Ayotte (R-N.H.). Below I will offer some highlights from their remarks and then relate them to the vision set forth by Federal Trade Commission (FTC) Commissioner Maureen K. Ohlhausen in some of her recent speeches. I will conclude by discussing how the Ayotte-Fischer-Ohlhausen vision can be seen as the logical extension of the Clinton Administration’s excellent 1997 Framework for Global Electronic Commerce, which proposed a similar policy paradigm for the Internet more generally. This shows how crafting policy for the IoT can and should be a nonpartisan affair.

Sen. Deb Fischer

In her opening remarks at the CDI event last week, Sen. Deb Fischer explained how “the Internet of Things can be a game changer for the U.S. economy and for the American consumer.” “It gives people more information and better tools to analyze data to make more informed choices,” she noted.

After outlining some of the potential benefits associated with the Internet of Things, Sen. Fischer continued on to explain why it is essential we get public policy incentives right first if we hope to unlock the full potential of these new technologies. Specifically, she argued that:

In order for Americans to receive the maximum benefits from increased connectivity, there are two things the government must avoid. First, policymakers can’t bury their heads in the sand and pretend this technological revolution isn’t happening only to wake up years down the road and try to micromanage a fast-changing, dynamic industry.

Second, the federal government must also avoid regulation just for the sake of regulation. We need thoughtful, pragmatic responses and narrow solutions to any policy issues that arise. For too long, the only “strategy” in Washington policy-making has been to react to crisis after crisis. We should dive into what this means for U.S. global competitiveness, consumer welfare, and economic opportunity before the public policy challenges overwhelm us, before legislative and executive branches of government – or foreign governments – react without all the facts.

Fischer concluded by noting that, “it’s entirely appropriate for the U.S. government to think about how to modernize its regulatory frameworks, consolidate, renovate, and overhaul obsolete rules. We’re destined to lose to the Chinese or others if the Internet of Things is governed in the United States by rules that pre-date the VCR.”

Sen. Kelly Ayotte

Like Sen. Fischer, Ayotte stressed the many economic opportunities associated with IoT technologies for consumers and producers alike. [Note: Sen. Ayotte did not publish her remarks on her website, but you can watch her speech from the CDI event beginning around the 17-minute mark of the event video.]

Ayotte also noted that IoT is going to be a major topic for the Senate Commerce Committee and that there will be an upcoming hearing on the issue. She said that the role of the Committee will be to ensure that the various agencies looking into IoT issues are not issuing “conflicting regulatory directives” and “that what is being done makes sense and allows for future innovation that we can’t even anticipate right now.” Among the agencies she cited that are currently looking into IoT issues: FTC (privacy & security), FDA (medical device apps), FCC (wireless issues), FAA (commercial drones), NHTSA (intelligent vehicle technology), NTIA (multistakeholder privacy reviews), as well as state lawmakers and regulatory agencies.

Sen. Ayotte then explained what sort of policy framework America needed to adopt to ensure that the full potential of the Internet of Things could be realized. She framed the choice lawmakers are confronted with as follows:

we as policymakers can either create an environment that allows that to continue to grow, or one that thwarts that. To stay on the cutting edge, we need to make sure that our regulatory environment is conducive to fostering innovation.” […] “we’re living in the Dark Ages in the ways some of the regulations have been framed. Companies must be properly incentivized to invest in the future, and government shouldn’t be a deterrent to innovation and job-creation.

Ayotte also stressed that “technology continues to evolve so rapidly there is no one-size-fits-all regulatory approach” that can work for a dynamic environment like this. “If legislation drives technology, the technology will be outdated almost instantly,” and “that is why humility is so important,” she concluded.

The better approach, she argued, was to let technology evolve freely in a “permissionless” fashion, see what problems develop, and then address them accordingly. “[A] top-down, preemptive approach is never the best policy” and will only serve to stifle innovation, she said. “If all regulators looked with some humility at how technology is used and whether we need to regulate or not to regulate, I think innovation would stand to benefit.”

FTC Commissioner Maureen K. Ohlhausen

Fischer and Ayotte’s remarks reflect a vision for the Internet of Things that FTC Commissioner Maureen K. Ohlhausen has articulated in recent months. In fact, Sen. Ayotte specifically cited Ohlhausen in her remarks.

Ohlhausen has actually delivered several excellent speeches on these issues and has become one of the leading public policy thought leaders on the Internet of Things in the United States today. One of her first major speeches on these issues was her October 2013 address entitled, “The Internet of Things and the FTC: Does Innovation Require Intervention?” In that speech, Ohlhausen noted that, “The success of the Internet has in large part been driven by the freedom to experiment with different business models, the best of which have survived and thrived, even in the face of initial unfamiliarity and unease about the impact on consumers and competitors.”

She also issued a wise word of caution to her fellow regulators:

It is . . . vital that government officials, like myself, approach new technologies with a dose of regulatory humility, by working hard to educate ourselves and others about the innovation, understand its effects on consumers and the marketplace, identify benefits and likely harms, and, if harms do arise, consider whether existing laws and regulations are sufficient to address them, before assuming that new rules are required.

In this and other speeches, Ohlhausen has highlighted the various other remedies that already exist when things do go wrong, including FTC enforcement of “unfair and deceptive practices,” common law solutions (torts and class actions), private self-regulation and best practices, social pressure, and so on. (Note: Inspired by Ohlhausen’s approach, I devoted the final section of my big law review article on IoT issues to a deeper exploration of all those “bottom-up” solutions to privacy and security concerns surrounding the IoT and wearable tech.)

The Clinton Administration Vision

These three women have articulated what I regard as the ideal vision for fostering the growth of the Internet of Things. It should be noted, however, that their framework is really just an extension of the Clinton Administration’s outstanding vision for the Internet more generally.

In the 1997 Framework for Global Electronic Commerce, the Clinton Administration outlined its approach toward the Internet and the emerging digital economy. As I’ve noted many times before, the Framework was a succinct and bold market-oriented vision for cyberspace governance that recommended reliance upon civil society, contractual negotiations, voluntary agreements, and ongoing marketplace experiments to solve information age problems. Specifically, it stated that “the private sector should lead [and] the Internet should develop as a market driven arena not a regulated industry.” “[G]overnments should encourage industry self-regulation and private sector leadership where possible” and “avoid undue restrictions on electronic commerce.”

Sen. Ayotte specifically cited those Clinton principles in her speech and said, “I think those words, given twenty years ago at the infancy of the Internet, are today even more relevant as we look at the challenges and the issues that we continue to face as regulators and policymakers.”

I completely agree. This is exactly the sort of vision that we need to keep innovation moving forward to benefit consumers and the economy, and this also illustrates how IoT policy can be a nonpartisan effort.

Why does this matter so much? As I noted in this recent essay, thanks to the Clinton Administration’s bold vision for the Internet:

This policy disposition resulted in an unambiguous green light for a rising generation of creative minds who were eager to explore this new frontier for commerce and communications. . . . The result of this freedom to experiment was an outpouring of innovation. America’s info-tech sectors thrived thanks to permissionless innovation, and they still do today. An annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing, software, and digital technology.

In other words, America got policy right before, and we can get it right again to ensure we remain global innovation leaders. Patience, flexibility, and forbearance are the key policy virtues that nurture an environment conducive to entrepreneurial creativity, economic progress, and greater consumer choice.

Other policymakers should endorse the vision originally sketched out by the Clinton Administration and now so eloquently embraced and extended by Sen. Fischer, Sen. Ayotte, and Commissioner Ohlhausen. This is the path forward if we hope to realize the full potential of the Internet of Things.

]]>
http://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/feed/ 0
Global Innovation Arbitrage: Commercial Drones & Sharing Economy Edition http://techliberation.com/2014/12/09/global-innovation-arbitrage-commercial-drones-sharing-economy-edition/ http://techliberation.com/2014/12/09/global-innovation-arbitrage-commercial-drones-sharing-economy-edition/#comments Tue, 09 Dec 2014 21:02:44 +0000 http://techliberation.com/?p=75060

Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity. I was reminded of that fact today while reading two different reports about commercial drones and the sharing economy and the global competition to attract investment on both fronts. First, on commercial drone policy, a new Wall Street Journal article notes that:

Amazon.com Inc., which recently began testing delivery drones in the U.K., is warning American officials it plans to move even more of its drone research abroad if it doesn’t get permission to test-fly in the U.S. soon. The statement is the latest sign that the burgeoning drone industry is shifting overseas in response to the Federal Aviation Administration’s cautious approach to regulating unmanned aircraft.

According to the Journal reporters, Amazon has sent a letter to the FAA warning that, “Without the ability to test outdoors in the United States soon, we will have no choice but to divert even more of our [drone] research and development resources abroad.” And another report in the U.K. Telegraph notes that other countries are ready and willing to open their skies to the same innovation that the FAA is thwarting in America. Both the UK and Australia have been more welcoming to drone innovators recently. Here’s a report from an Australian newspaper about Google drone services testing there. (For more details, see this excellent piece by Alan McQuinn, a research assistant with the Information Technology and Innovation Foundation: “Commercial Drone Companies Fly Away from FAA Regulations, Go Abroad.”) None of this should be a surprise, as I’ve noted in recent essays and filings. With the FAA adopting such a highly precautionary regulatory approach, innovation has been actively disincentivized. America runs the risk of driving still more private drone innovation offshore in coming months, since all signs are that the FAA intends to drag its feet on this front as long as it can, even though Congress has told the agency to take steps to integrate these technologies into national airspace.

Meanwhile, innovation in the sharing economy is at risk because of incessant bureaucratic meddling at the state and especially the local level across the United States. My colleagues Matt Mitchell, Christopher Koopman, and I released a new Mercatus Center white paper on these issues yesterday (“The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change“) and argued that most of the rules and regulations holding back the sharing economy are counterproductive and desperately in need of immediate reform. If policymakers don’t take steps to cut through the layers of red tape that encumber new sharing economy start-ups, it’s possible that some of these companies will also start to look for opportunities offshore. Plenty of countries will be eager to embrace them, which I realized as I was reading another report recently. The UK’s Department for Business, Innovation & Skills recently published a white paper called, “Unlocking the Sharing Economy,” which discussed how the British government intends to embrace the many innovations that could flow from this space. The preface to the report opened with this telling passage from Rt. Hon. Matthew Hancock, MP and Minister of State for Business, Enterprise, and Energy:

The UK is embracing new, disruptive business models and challenger businesses that increase competition and offer new products and experiences for consumers. Where other countries and cities are closing down consumer choice, and limiting people’s freedom to make better use of their possessions, we are embracing it.

That really says it all, doesn’t it! If other countries, including the US, don’t clean up their act and create a more welcoming environment for sharing economy innovation, then the UK will be all too happy to invite those innovators to come set up operations there. The offshoring option is just as real in countless other sectors of the modern tech economy. As Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz, noted in a Politico op-ed this summer:

Think of it as a sort of “global arbitrage” around permissionless innovation — the freedom to create new technologies without having to ask the powers that be for their blessing. Entrepreneurs can take advantage of the difference between opportunities in different regions, where innovation in a particular domain of interest may be restricted in one region, allowed and encouraged in another, or completely legal in still another.

Similar opportunities for such “global arbitrage” exist for the Internet of Things and wearable tech, intelligent vehicle technology, advanced medical device tech, robotics, Bitcoin, and so on. The links I have embedded here point back to other essays I have written recently about the choice we face in each of these fields, namely, will we embrace “permissionless innovation” or “precautionary principle” thinking. This matters because — as I noted in recent essays (1,2) as well as a book on these issues — economic growth depends upon policymakers promoting the right values when it comes to entrepreneurial activity. “For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things,” I noted in a recent essay on the importance of “Embracing a Culture of Permissionless Innovation.” Or, as the great historian of technological progress Joel Mokyr has concluded: “technological progress requires above all tolerance toward the unfamiliar and the eccentric.” To sum up in two words, incentives matter. “[E]conomic and social institutions have to encourage potential innovators by presenting them with the right incentive structure,” Mokyr notes. Thus, when the economic and social incentive structure discourages risk-taking and experimentation in a given country or even entire continent, we can expect that global innovation arbitrage will accelerate as entrepreneurs look to find more hospitable investment climates.

 

 


 

]]>
http://techliberation.com/2014/12/09/global-innovation-arbitrage-commercial-drones-sharing-economy-edition/feed/ 0
New Paper on The Sharing Economy and Consumer Protection Regulation http://techliberation.com/2014/12/08/new-paper-on-the-sharing-economy-and-consumer-protection-regulation/ http://techliberation.com/2014/12/08/new-paper-on-the-sharing-economy-and-consumer-protection-regulation/#comments Mon, 08 Dec 2014 15:06:54 +0000 http://techliberation.com/?p=75035

I’ve just released a short new paper, co-authored with my Mercatus Center colleagues Christopher Koopman and Matthew Mitchell, on “The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change.” The paper is being released to coincide with a Congressional Internet Caucus Advisory Committee event that I am speaking at today on “Should Congress be Caring About Sharing? Regulation and the Future of Uber, Airbnb and the Sharing Economy.”

In this new paper, Koopman, Mitchell, and I discuss how the sharing economy has changed the way many Americans commute, shop, vacation, borrow, and so on. Of course, the sharing economy “has also disrupted long-established industries, from taxis to hotels, and has confounded policymakers,” we note. “In particular, regulators are trying to determine how to apply many of the traditional ‘consumer protection’ regulations to these new and innovative firms.” This has led to a major debate over the public policies that should govern the sharing economy.

We argue that, coupled with the Internet and various new informational resources, the rapid growth of the sharing economy alleviates the need for much traditional top-down regulation. These recent innovations are likely doing a much better job of serving consumer needs by offering new innovations, more choices, more service differentiation, better prices, and higher-quality services. In particular, the sharing economy and the various feedback mechanisms it relies upon help solve the traditional economic problem of “asymmetrical information,” which is often cited as a rationale for regulation. We conclude, therefore, that “the key contribution of the sharing economy is that it has overcome market imperfections without recourse to traditional forms of regulation. Continued application of these outmoded regulatory regimes is likely to harm consumers.”
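
To make that mechanism concrete, here is a minimal sketch of one common reputation-aggregation technique, a Bayesian average, which shrinks ratings backed by few reviews toward a prior. It is purely illustrative; no claim is made that any particular platform computes scores this way, and the parameters are hypothetical.

```python
# Illustrative Bayesian-average reputation score (hypothetical parameters;
# not the method of any specific sharing-economy platform).

def bayesian_average(ratings, prior_mean=4.0, prior_weight=10):
    """Blend observed ratings with `prior_weight` pseudo-reviews at
    `prior_mean`, so thin track records carry less weight."""
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + len(ratings))

print(bayesian_average([5.0]))        # one perfect review -> about 4.09
print(bayesian_average([4.8] * 200))  # long track record  -> about 4.76
```

The design point is the economics: a buyer facing an unknown seller no longer depends on a licensing board to vouch for quality, because cheap, aggregated feedback does that informational work instead.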

We note that this is especially likely to be the case when the failure of traditional regulatory models is taken into account. As we document in the paper, all too often, well-intentioned “public interest” regulation is captured by industry and used to serve its interests:

by limiting entry, or by raising rivals’ costs, regulations can be useful to the regulated firms. Though regulations often make consumers worse off, they are often sustained by political pressure from consumer advocates because they can be disguised as “consumer protection.”

We provide evidence of the problem of regulatory capture and note that it has been a particular problem in many of the sectors now being disrupted by sharing economy innovators, such as taxi and transportation services. It is evident that regulation has not lived up to its lofty expectations in many sectors. Accordingly, when market circumstances change dramatically—or when new technology or competition alleviate the need for regulation—public policy should evolve and adapt to accommodate these new realities.

Of course, many bad laws and regulations remain on the books, and they have constituencies who will defend them vociferously. Our paper concludes with some recommendations for how to “level the regulatory playing field” in a pro-consumer, pro-innovation fashion. We note that while differential regulatory treatment of incumbents and new entrants does represent a potential problem, there’s a sensible, pro-consumer, and pro-innovation way to solve it:

such regulatory asymmetries represent a legitimate policy problem. But the solution is not to punish new innovations by simply rolling old regulatory regimes onto new technologies and sectors. The better alternative is to level the playing field by “deregulating down” to put everyone on equal footing, not by “regulating up” to achieve parity. Policymakers should relax old rules on incumbents as new entrants and new technologies challenge the status quo. By extension, new entrants should only face minimal regulatory requirements as more onerous and unnecessary restrictions on incumbents are relaxed.

Download this new paper on the Mercatus website or via SSRN or ResearchGate. Incidentally, we plan to release a much longer Mercatus Center white paper early next year that will explore reputational feedback mechanisms in far greater detail and explain how these systems help address the problem of “asymmetrical information” in these and other contexts.

______________

Also see: “The Debate over the Sharing Economy: Talking Points & Recommended Reading,” which includes a video of me on the Stossel Show discussing these issues recently.

]]>
http://techliberation.com/2014/12/08/new-paper-on-the-sharing-economy-and-consumer-protection-regulation/feed/ 0
Europe’s Choice on Innovation http://techliberation.com/2014/12/03/europes-choice-on-innovation/ http://techliberation.com/2014/12/03/europes-choice-on-innovation/#comments Wed, 03 Dec 2014 18:26:18 +0000 http://techliberation.com/?p=75006

Writing last week in The Wall Street Journal, Matt Moffett noted how many European countries continue to struggle with chronic unemployment and general economic malaise. (“New Entrepreneurs Find Pain in Spain“) It’s a dismal but highly instructive tale about how much policy incentives matter when it comes to innovation and job creation, especially the sort of entrepreneurial activity from small start-ups that is so essential for economic growth. Here’s the key takeaway:

Scarce capital, dense bureaucracy, a culture deeply averse to risk and a cratered consumer market all suppress startups in Europe. The Global Entrepreneurship Monitor, a survey of startup activity, found the percentage of the adult population involved in early stage entrepreneurial activity last year was just 5% in Germany, 4.6% in France and 3.4% in Italy. That compares with 12.7% in the U.S. Even once they are established, European businesses are, on average, smaller and slower growing than those in the U.S.  The problems of entrepreneurs are one reason Europe’s economy continues to struggle after six years of crisis. The European Union this month cut its growth forecasts for the region for this year and next, citing weaker than expected performance in the eurozone’s biggest economies, Germany, France and Italy. This week, the Organization for Economic Cooperation and Development delivered its own pessimistic appraisal, with chief economist Catherine Mann saying, “The eurozone is the locus of the weakness in the global economy.”

[…]
Europe’s unemployment crisis may be eroding a deeply ingrained fear of failure that is a bigger impediment to entrepreneurship on the Continent than in other regions, according to academic surveys. “Fear of failure is less of an issue because the whole country is a failure, and most of us are out of business or have a hard time paying our bills,” said Nick Drandakis of Athens, who in 2011 founded Taxibeat, an app that provides passenger ratings on taxi drivers.

I found Moffett’s article interesting because I write a lot about entrepreneurialism, innovation, long-term economic growth, and the public policies that facilitate all these things. This has also been the subject of an excellent Cato Institute online forum about “Reviving Economic Growth,” which asked leading economists and policy experts to answer the following question: “If you could wave a magic wand and make one or two policy or institutional changes to brighten the U.S. economy’s long-term growth prospects, what would you change and why?”

Many of the entries in that forum dealt with the importance of removing barriers to new start-ups so that entrepreneurs can help spark new innovations and spur economic growth. My entry, which was entitled, “Embracing a Culture of Permissionless Innovation,” kicked off with a quote from the great Joel Mokyr: “Why does economic growth… occur in some societies and not in others?” I noted that “debate has raged among generations of economists, historians, and business theorists about that question and the specific forces and policies that prompt long-term growth.” Generally speaking, however, there actually exists a great deal of consensus about the importance of small business entrepreneurship and the need for openness to change if an economy is going to grow. (See the studies from Ian Hathaway and Robert E. Litan that I cite in my essay among many others.)

Which brings us back to the situation in Europe. It seems clear that strong cultural and legal impediments to change exist in many European countries and that they discourage risk-taking and prevent the formation of new ventures. Many of us here in the United States worry about similar impediments and their impact on entrepreneurialism, but as those statistics in Moffett’s article make clear, the situation in Europe is far more grim. While some European policymakers seem willing to acknowledge that the deck has been stacked against innovators across the continent, few seem willing to embrace a comprehensive liberalization agenda to begin clearing away the legal and regulatory impediments that are negatively affecting startups and creating economic stagnation there. The primary reason for that goes back to the values and attitudes problem that Moffett highlighted in his article: When a country or continent’s culture is so deeply averse to risk and the possibility of disruptions or failures, then the exact sort of risk-taking that is so essential to economic growth will become increasingly difficult.

This was the focus of my Cato essay and it is what I meant by embracing a culture of permissionless innovation. As I noted in my essay, “many scholars and policymakers [often] speak of innovation policy as if it is simply a Goldilocks-like formula that entails tweaking various policy dials to get innovation just right,” which leads them to propose an endless litany of programs and policies to jump-start innovation and economic growth. But this puts the cart before the horse. Getting values right first is what really matters. Here is how I put it in my essay:

For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things. We can think of this disposition as permissionless innovation and if there was one thing every policymaker could do to help advance long-term growth, it is to first commit themselves to advancing this ethic and making it the lodestar for all their future policy pronouncements and decisions.

While there are limits to how much policymakers can influence these attitudes and values, any serious effort to foster the positive factors that give rise to expanded entrepreneurial opportunities must begin with an appreciation of how growth-oriented innovation policy begins with the proper policy disposition toward risk-taking and the possibility of significant economic and cultural disruption. As I put it in my recent book on the importance of Permissionless Innovation as a vision for innovation and growth, “living in constant fear of worst-case scenarios—and premising public policy upon them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

But let’s be clear about what the “permissionless innovation” vision is all about, because it is not the same as anarchy. As I noted in the Cato essay:

Permissionless innovation is not an absolutist position that rejects any role for government. Rather, it is an aspirational goal that stresses the benefit of “innovation allowed” as the default position to begin policy debates. It switches the burden of proof to those who favor preemptive regulation and asks them to explain why ongoing trial-and-error experimentation with new technologies or business models should be disallowed.

Again, it’s about getting attitudes and incentives right. Specifically, it’s about being willing to embrace risk-taking and even failure, because that is the only way you get growth. As the old adage goes, “Nothing ventured, nothing gained.”  And our recent experience with the Internet and the Information Revolution offers the perfect case study of why getting values right and embracing a culture of permissionless innovation matters so much. As I noted in my Cato essay,

permissionless innovation powered the explosive growth of the Internet and America’s information technology sectors (computing, software, Internet services, etc.) over the past two decades. Those sectors have ushered in a generation of innovations and innovators that are now the envy of the world. This happened because the default position for the digital economy was permissionless innovation. No one had to ask anyone for the right to develop these new technologies and platforms.

The U.S. got policy right by getting our values right first. Thanks to a series of very smart pronouncements and decisions in the early and mid-1990s (all detailed in my essay and this Medium essay), digital age entrepreneurs were given a clear green light to take risks without fear of a political backlash.

Unfortunately for European innovators, a different message was sent from the start, with layers of “data directives” and other red tape encumbering new ventures. As a result, it’s hard today to name many innovators in this arena that originated in Europe. Instead, Europe’s household Internet names are mostly American companies. Europe is hoping to reverse that with the rise of the Internet of Things, since many European companies appear poised to become global leaders on that front. For that to happen, however, the continent’s attitudes toward risk-taking will have to evolve to accommodate these highly disruptive technologies.

In particular, the Internet of Things will raise a variety of privacy- and security-related concerns (see my new 93-page paper on this), as well as economic fears associated with automation and job disruption. These are serious issues that deserve serious consideration and constructive solutions. But if Europe decides to put the Internet of Things revolution on hold in an attempt to preemptively plan for every theoretical downside, then it will miss the boat again and potentially lose many of the amazing benefits that will accompany these new innovations. Again, if you live in fear of the future, then an innovative future won’t happen. And looking backwards and holding onto the past is no way to grow an economy or achieve long-term prosperity.

]]>
http://techliberation.com/2014/12/03/europes-choice-on-innovation/feed/ 0
Will Europe’s ‘Right to Be Forgotten’ Become an Unprecedented Global Censorship Regime? http://techliberation.com/2014/11/26/will-europes-right-to-be-forgotten-become-an-unprecedented-global-censorship-regime/ http://techliberation.com/2014/11/26/will-europes-right-to-be-forgotten-become-an-unprecedented-global-censorship-regime/#comments Wed, 26 Nov 2014 17:10:16 +0000 http://techliberation.com/?p=74995

Yesterday, the Article 29 Data Protection Working Party issued a press release providing more detailed guidance on how it would like to see Europe’s so-called “right to be forgotten” implemented and extended. The most important takeaway from the document was that, as Reuters reported, “European privacy regulators want Internet search engines such as Google and Microsoft’s Bing to scrub results globally.” Moreover, as The Register reported, the press release made it clear that “Europe’s data protection watchdogs say there’s no need for Google to notify webmasters when it de-lists a page under the so-called ‘right to be forgotten’ ruling.” (Here’s excellent additional coverage from Bloomberg: “Google.com Said to Face EU Right-to-Be-Forgotten Rules”). These actions make it clear that European privacy regulators hope to expand the horizons of the right to be forgotten in a very significant way.
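
To see what is technically at stake in “scrubbing results globally,” consider the difference between applying a de-listing only on European editions of a search engine and applying it on every edition worldwide. The toy sketch below illustrates that distinction only; the data model, names, and URLs are invented and bear no resemblance to how a real search engine is built.

```python
# Toy model of jurisdiction-limited vs. global de-listing. Everything here
# (names, URLs, data structures) is invented for illustration.

DELISTED = {("alice example", "http://old-news.example/story")}  # (query, URL)
EU_EDITIONS = {"google.fr", "google.de", "google.co.uk"}

def results(query, urls, edition, global_scrub=False):
    """Return results for one national edition, filtering de-listed URLs
    either only on EU editions or, if global_scrub is set, everywhere."""
    if not (global_scrub or edition in EU_EDITIONS):
        return urls
    return [u for u in urls if (query, u) not in DELISTED]

urls = ["http://old-news.example/story", "http://bio.example/alice"]
print(results("alice example", urls, "google.com"))                     # both survive
print(results("alice example", urls, "google.com", global_scrub=True))  # story gone everywhere
```

The Working Party’s guidance would make the second behavior mandatory, which is exactly why the objections below describe it as a global censorship regime.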

The folks over at Marketplace radio asked me to spend a few minutes with them today discussing the downsides of this proposal. Here’s the quick summary of what I told them:

  • European privacy regulators are basically calling for an unprecedented global censorship regime that would impose their speech preferences and controls on the entire planet.
  • Europe has no right to tell the rest of the world how to structure their policies governing online freedom of speech, yet European regulators are trying to strong-arm major American tech companies like Google into imposing those speech preferences indirectly.
  • This is a grave threat to freedom of speech, freedom of expression, and Internet openness.
  • This move sends a horrible signal to oppressive regimes worldwide. It could lead to a race to the bottom, with governments in other countries attempting to export their own speech preferences to the rest of the globe. You can kiss global Internet freedom goodbye if that happens.
  • Relatedly, if European policymakers persist in these efforts, it could lead to future trade wars, even among friendly countries. Layers of speech controls like this could become formidable non-tariff barriers to trade and limit the growth of cross-border electronic commerce in the process.
  • This certainly doesn’t help competition. Ironically, this news comes during the same week that we have learned some European policymakers want to break up Google on antitrust grounds. But the more that European regulators push Google to enforce global speech controls like this, the more market power those policymakers give the company! Google is one of the few companies that might be able to hire enough lawyers and engineers to comply with such a regulatory regime. Few other tech companies – and certainly no small startups – could ever hope to comply with this ruling. In essence, it’s a new regulatory barrier to entry that diminishes digital entrepreneurialism.
  • Correspondingly, it’s another innovation-killer for Europe. If Europeans wonder why they fell so far behind in terms of Internet innovation over the past decade, they might consider looking at the wisdom of overly-restrictive data controls and speech regulations like this.
  • Privacy is certainly an important value, and more could be done to protect it. But what European regulators are proposing here is completely over the top. It is like trying to kill a fly with an elephant gun. There are more sensible ways to encourage privacy protection.
  • Instead of trying to export their speech controls and bully global innovators, European policymakers should just consider creating their own, government-funded search engines and then force their own citizens to use them. Let them try to create their own anti-free speech fortress and see how their citizens feel about living inside it.

Stay tuned, more to come on this front. In the meantime, here’s another response worth reading from David Meyer of GigaOm.

How Many Accidents Could Be Averted This Holiday if More Intelligent Cars Were on the Road? http://techliberation.com/2014/11/26/how-many-accidents-could-be-averted-this-holiday-if-more-intelligent-cars-were-on-the-road/ http://techliberation.com/2014/11/26/how-many-accidents-could-be-averted-this-holiday-if-more-intelligent-cars-were-on-the-road/#comments Wed, 26 Nov 2014 13:53:04 +0000 http://techliberation.com/?p=74992

This Thanksgiving holiday season, an estimated 39 million people plan on traveling by car. Sadly, according to the National Safety Council, some 418 Americans may lose their lives on the roads over the next few days, and more than 44,000 may suffer injuries from car crashes.
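
For a rough sense of scale, here is the per-traveler arithmetic implied by those NSC estimates. This is an illustrative back-of-the-envelope calculation only; travelers and casualties are not strictly the same population, so treat the figures as orders of magnitude.

    # Rough per-traveler odds implied by the NSC holiday estimates quoted above.
    # Illustrative only: travelers != trips, and casualties are not all travelers.
    travelers = 39_000_000
    deaths = 418
    injuries = 44_000

    print(f"~1 death per {travelers // deaths:,} travelers")     # roughly 1 in 93,000
    print(f"~1 injury per {travelers // injuries:,} travelers")  # roughly 1 in 900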

In a new op-ed for the Orange County Register, Ryan Hagemann and I argue that many of these accidents and fatalities could be averted if more “intelligent” vehicles were on the road. That’s why it is so important that policymakers clear away roadblocks to intelligent vehicle technology (including driverless cars) as quickly as possible. The benefits would be absolutely enormous.

Read our op-ed, and for more details check out our recent Mercatus Center white paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.”

New Paper on Privacy & Security Implications of the Internet of Things & Wearable Technology http://techliberation.com/2014/11/21/new-paper-on-privacy-security-implications-of-the-internet-of-things-wearable-technology/ http://techliberation.com/2014/11/21/new-paper-on-privacy-security-implications-of-the-internet-of-things-wearable-technology/#comments Fri, 21 Nov 2014 15:23:31 +0000 http://techliberation.com/?p=74973

The Mercatus Center at George Mason University has just released my latest working paper, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation.” The “Internet of Things” (IoT) generally refers to “smart” devices that are connected to both the Internet and other devices. Wearable technologies are IoT devices that are worn somewhere on the body and which gather data about us for various purposes. These technologies promise to usher in the next wave of Internet-enabled services and data-driven innovation. Basically, the Internet will be “baked in” to almost everything that consumers own and come into contact with.

Some critics are worried about the privacy and security implications of the Internet of Things and wearable technology, however, and are proposing regulation to address these concerns. In my new 93-page article, I explain why preemptive, top-down regulation would derail the many life-enriching innovations that could come from these new IoT technologies. Building on a recent book of mine, I argue that “permissionless innovation,” which allows new technology to flourish and develop in a relatively unabated fashion, is the superior approach to the Internet of Things.

As I note in the paper and my earlier book, if we spend all our time living in fear of the worst-case scenarios — and basing public policies on them — then best-case scenarios can never come about. As the old saying goes: nothing ventured, nothing gained. Precautionary principle-based regulation paralyzes progress and must be avoided. We need to find constructive, “bottom-up” solutions to the privacy and security risks accompanying these new IoT technologies rather than top-down controls that would limit the development of life-enriching IoT innovations.

The better alternative is to deal with concerns creatively as they develop, using a balanced, layered approach  involving many different solutions, including: educational efforts, technological empowerment tools, social norms, public and watchdog pressure, industry best practices and self-regulation, transparency, torts and products liability law, and targeted enforcement of existing legal standards as needed.

Generally speaking, patience, humility, and forbearance by policymakers are crucial to allowing greater innovation and consumer choice in this arena. Importantly, policymakers should not forget that societal and individual adaptation will play a role here, just as it has during so many other turbulent technological transformations.

This article can be downloaded on my Mercatus Center page, on SSRN, or at Research Gate. I am hoping to find a law or policy journal interested in publishing this paper soon. If you are with a journal and are interested, please contact me. [UPDATE 12/3/14: This paper has been accepted for publication in the Richmond Journal of Law & Technology, Vol. 21, Issue 6 (2015).]

Finally, if you are interested in this topic, you might want to flip through the slides I prepared for a presentation I made at the Federal Communications Commission in September.

The Myth That Title II Regulation of Broadband and Wireless Would Be Comparable http://techliberation.com/2014/11/19/the-myth-that-title-ii-regulation-of-broadband-and-wireless-would-be-comparable/ http://techliberation.com/2014/11/19/the-myth-that-title-ii-regulation-of-broadband-and-wireless-would-be-comparable/#comments Wed, 19 Nov 2014 23:08:22 +0000 http://techliberation.com/?p=74965

Supporters of Title II reclassification for broadband Internet access services point to the fact that some wireless services have been governed by a subset of Title II provisions since 1993.  No one is complaining about that.  So what, then, is the basis for opposition to similar regulatory treatment for broadband?

Austin Schlick, the former FCC general counsel, outlined the so-called “Third Way” legal framework for broadband in a 2010 memo that proposed Title II reclassification along with forbearance from all but six of Title II’s 48 provisions.  He noted that “this third way is a proven success for wireless communications.”  This is the model that President Obama is backing.  Title II reclassification “doesn’t have to be a big deal,” Harold Feld reminds us, since the wireless industry seems to be doing okay despite the fact that mobile phone service was classified as a Title II service in 1993.

To be clear, only mobile voice services are subject to Title II, since the FCC classified broadband access to the Internet over wireless networks as an “information” service (and thus completely exempt from Title II) in March of 2007.

Sec. 6002(c) of the Omnibus Budget Reconciliation Act of 1993 (Public Law 103-66) modified Sec. 332 of the Communications Act so commercial mobile services would be treated “as a common carrier … except for such provisions of title II as the Commission may specify by regulation as inapplicable…”

The FCC commendably did forbear.  Former Chairman Reed E. Hundt would later boast in his memoir that the commission “totally deregulated the wireless industry.” He added that this was possible thanks to a Democratic Congress and former Vice President Al Gore’s tie-breaking Senate vote.

Lest there be any doubt whether there was widespread bipartisan support for regulating mobile wireless services under Title II so the FCC could deregulate them, the fact is that not a single Republican in either chamber voted for the Omnibus Budget Reconciliation Act of 1993.  In the Senate, the vote was 50-50, with six Democrats voting with the Republicans.  The vote was 218-216 in the House of Representatives, with 41 Democrats joining the Republicans.

This convoluted regulatory framework—under which the FCC is not specifically prohibited from changing its mind whenever it wants, reversing course and un-forbearing—was enacted because one party jammed the other.

There was no appetite for regulating wireless services in 1993, since the FCC would be conducting competitive auctions for the first time for assigning four new licenses on top of the two existing licenses in every trading area.  Applying Title II—even though that meant forbearing from applying 45 out of 48 of Title II’s provisions—was a clever manipulation of deregulatory sentiment.

Although in theory limited Title II regulation “doesn’t have to be a big deal,” let’s be clear that’s not what Feld and others are advocating.

The Democrats reserved only three of Title II’s 48 provisions (sections 201, 202 and 208) when they applied Title II to wireless and authorized the FCC to forbear from applying everything else.  In comments filed with the FCC, Feld and company clearly oppose what we’re calling doesn’t-have-to-be-a-big-deal regulatory treatment (“blanket forbearance”) of broadband.  In fact, they’ve identified a total of only four of Title II’s 48 provisions that they believe are candidates for forbearance (sections 223, 226, 228 and 260).

Given the forbearance framework and public interest concerns discussed above, and mindful that the existing broadband market is neither as nascent nor as competitive as the wireless market was in 1994, when the Commission engaged in blanket forbearance, Commenters provide this list of specific statutes the Commission should not simply forbear from on the assumption that doing so meets the statutory criteria. As a general matter, these involve Commission authority over interconnection and shut down of service (Sections 251(a), 256, and portions of 214(c)), discretionary authority to compel production of information (Sections 211, 213, 215, and 218-20), provisions which provide explicit power for the Commission to hold parties accountable and prescribe adequate remedies (Sections 205-07, 209, 212, and 216), provisions designed to protect consumers (Sections 203 and 222), or provisions designed to ensure affordable deployment and the benefits of broadband access to all Americans (Sections 214(e), 225, 254, 255, and 257).  These statutes are in addition to the bare minimum recognized in Section 332(c) as the minimum needed to protect consumers—Sections 201, 202, and 208.

On the other hand, it would appear that forbearance from some provisions would serve the public interest, either because they create barriers to deployment and improvement of capacity, or because it is unclear what these provisions would mean in the context of broadband access service—assuming they applied at all (such as Sections 223, 226, 228, and 260).  Commenters express no opinion on statutes not specifically addressed, beyond urging the Commission to apply the general framework discussed above. (references omitted.)

For the proponents of net neutrality regulation, Title II reclassification is not simply about applying sections 201, 202 and 208 of the Communications Act to broadband, and that’s why the wireless analogy is irrelevant and one reason why there’s so much opposition.
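
To make the difference in regulatory footprint concrete, here is a quick back-of-the-envelope sketch in Python. It assumes only the counts given in this post (48 total Title II provisions, three reserved for wireless in 1993, four forbearance candidates proposed for broadband); the section numbers are the ones named above.

    # Back-of-the-envelope comparison of the two Title II footprints described
    # in this post. The 48-provision total and section numbers come from the
    # text above; the rest is simple arithmetic.
    TOTAL_PROVISIONS = 48

    # 1993 wireless model: only these sections applied; all else was forborne.
    wireless_applied = {"201", "202", "208"}

    # Proposed broadband model: forbearance urged for only these sections.
    broadband_forborne = {"223", "226", "228", "260"}
    broadband_applied = TOTAL_PROVISIONS - len(broadband_forborne)

    print(f"Wireless (1993): {len(wireless_applied)} of {TOTAL_PROVISIONS} provisions applied")
    print(f"Broadband (proposed): {broadband_applied} of {TOTAL_PROVISIONS} provisions applied")
    # 3 of 48 versus 44 of 48 -- hardly comparable regulatory regimes.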

Another reason has to do with the basic purpose of the 1996 Telecommunications Act.  As Reed Hundt also pointed out in his memoir, “our policy was to introduce competition and then to deregulate,” and the “purpose of pro-competitive rulemaking ultimately would be the elimination of rules.”  The competition between telephone carriers, wireless providers, and cable operators that a few visionaries assured a skeptical Congress in 1994-96 was just around the corner has now come to pass, and with it the justification for the 1934 Title II regulatory framework is gone.

3 takeaways from the Plenipot http://techliberation.com/2014/11/13/3-takeaways-from-the-plenipot/ http://techliberation.com/2014/11/13/3-takeaways-from-the-plenipot/#comments Thu, 13 Nov 2014 14:45:13 +0000 http://techliberation.com/?p=74962

Last week marked the conclusion of the ITU’s Plenipotentiary Conference, the quadrennial gathering during which ITU member states get together to revise the treaty that establishes the Union and conduct other high-level business. I had the privilege of serving as a member of the US delegation, as I did for the WCIT, and of seeing the negotiations firsthand. This year’s Plenipot was far less contentious than the WCIT was two years ago. For other summaries of the conference, let me recommend to you Samantha Dickinson, Danielle Kehl, and Amb. Danny Sepulveda. Rather than recap their posts or the entire conference, I just wanted to add a couple of additional observations.

We mostly won on transparent access to documents

Through my involvement with WCITLeaks, I have closely followed the issue of access to ITU documents, both before and during the Plenipot. My assessment is that we mostly won.

Going forward, most inputs and outputs to ITU conferences and assemblies will be available to the public from the ITU website. This excludes a) working documents, b) documents related to other meetings such as Council Working Groups and Study Groups, and c) non-meeting documents that should be available to the public.

However, in February, an ITU Council Working Group will be meeting to develop what is likely to be a more extensive document access policy. In May, the whole Council will meet to provisionally approve an access policy. And in 2018, the next Plenipot will permanently decide what to do about this provisional access policy.

There are no guarantees, and we will need to closely monitor the outcomes in February and May to see what policy is adopted—but if it is a good one, I would be prepared to shut down WCITLeaks as it would become redundant. If the policy is inadequate, however, WCITLeaks will continue to operate until the policy improves.

I was gratified that WCITLeaks continued to play a constructive role in the discussion. For example, in the Arab States’ proposal on ITU document access, they cited us, considering “that there are some websites on the Internet which are publishing illegally to the public ITU documents that are restricted only to Member States.” In addition, I am told that at the CEPT coordination meeting, WCITLeaks was thanked for giving the issue of transparency at the ITU a shot in the arm.

A number of governments were strong proponents of transparency at the ITU, but I think special thanks are due to Sweden, who championed the issue on behalf of Europe. I was very grateful for their leadership.

The collapse of the WCIT was an input into a harmonious Plenipot

We got through the Plenipot without a single vote (other than officer elections)! That’s great news—it’s always better when the ITU can come to agreement without forcing some member states to go along.

I think it’s important to recognize the considerable extent to which this consensus agreement was driven by events at the WCIT in 2012. At the WCIT, when the US (and others) objected and said that we could not agree to certain provisions, other countries thought we were bluffing. They decided to call our bluff by engineering a vote, and we wisely decided not to sign the treaty, along with 54 other countries.

In Busan this month, when we said that we could not agree to certain outcomes, nobody thought we were bluffing. Our willingness to walk away at the WCIT gave us added credibility in negotiations at the Plenipot. While I also believe that good diplomacy helped secure a good outcome at the Plenipot, the occasional willingness to walk the ITU off a cliff comes in handy. We should keep this in mind for future negotiations—making credible promises and sticking to them pays dividends down the road.

The big question of the conference is in what form the India proposal will re-emerge

At the Plenipot, India offered a sweeping proposal to fundamentally change the routing architecture of the Internet so that a) IP addresses would be allocated by country, like telephone numbers, with a country prefix, and b) domestic Internet traffic would never be routed out of the country.
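
Purely to make the mechanics concrete, here is a minimal Python sketch of the kind of check clause (b) would impose on routers everywhere. The country-prefix scheme below is entirely hypothetical; the proposal specified no concrete encoding, and nothing here reflects any real addressing plan.

    # Hypothetical sketch of the country-prefix routing rule the proposal
    # implies. The prefix-to-country mapping is invented for illustration.
    HYPOTHETICAL_PREFIXES = {
        "91.": "IN",  # imagined "country codes," like telephone prefixes
        "1.": "US",
    }

    def country_of(ip: str) -> str:
        """Map an address to a country under the imagined prefix scheme."""
        for prefix, country in HYPOTHETICAL_PREFIXES.items():
            if ip.startswith(prefix):
                return country
        return "UNKNOWN"

    def may_forward(src_ip: str, dst_ip: str, router_country: str) -> bool:
        """Clause (b): traffic between two same-country addresses must never
        transit a router outside that country."""
        src, dst = country_of(src_ip), country_of(dst_ip)
        if src == dst and src != router_country:
            return False  # domestic traffic on a foreign router: refuse it
        return True

    print(may_forward("91.10.4.7", "91.200.8.1", "US"))  # False: Indian domestic traffic abroad
    print(may_forward("91.10.4.7", "1.20.30.40", "US"))  # True: cross-border traffic

Every border router on the planet would need logic like this, plus a trustworthy global registry of country prefixes, which hints at the scale of the reengineering involved.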

This proposal was obviously very impractical. It is unlikely, in any case, that the ITU has the expertise or the budget to undertake such a vast reengineering of the Internet. But the idea would also be very damaging from the perspective of individual liberty—it would make nation-states, even more than they are now, mediators of human communication.

I was very proud that the United States not only made the practical case against the Indian proposal, it made a principled one. Amb. Sepulveda made a very strong statement indicating that the United States does not share India’s goals as expressed in this proposal, and that we would not be a part of it. This statement, along with those of other countries and subsequent negotiations, effectively killed the Indian proposal at the Plenipot.

The big question is in what form this proposal will re-emerge. The idea of remaking the Internet along national lines is unlikely to go away, and we will need to continue monitoring ITU study groups to ensure that this extremely damaging proposal does not raise its head.

Thinking about Innovation Policy Debates: 4 Related Paradigms http://techliberation.com/2014/11/11/thinking-about-innovation-policy-debates-4-related-paradigms/ http://techliberation.com/2014/11/11/thinking-about-innovation-policy-debates-4-related-paradigms/#comments Tue, 11 Nov 2014 21:09:02 +0000 http://techliberation.com/?p=74915

In my previous essay, I discussed a new white paper by my colleague Robert Graboyes, Fortress and Frontier in American Health Care, which examines the future of medical innovation. Graboyes uses the “fortress vs frontier” dichotomy to help explain different “visions” about how public policy debates about technological innovation in the health care arena often play out.  It’s a terrific study that I highly recommend for all the reasons I stated in my previous post.

As I was reading Bob’s new report, I realized that his approach shared much in common with a few other recent innovation policy paradigms I have discussed here before from Virginia Postrel (“Stasis” vs. “Dynamism”), Robert D. Atkinson (“Preservationists” vs. “Modernizers”), and myself (“Precautionary Principle” vs. “Permissionless Innovation”). In this essay, I will briefly relate Bob’s approach to those other three innovation policy paradigms and then note a deficiency with our common approaches. I’ll conclude by briefly discussing another interesting framework from science writer Joel Garreau.

Stasis vs. Dynamism – Virginia Postrel (1998)

In her 1998 book, The Future and Its Enemies, Virginia Postrel contrasted the conflicting worldviews of “dynamism” and “stasis” and showed how the tensions between these two visions would affect the course of future human progress. Postrel made the case for embracing dynamism — “a world of constant creation, discovery, and competition” — over the “regulated, engineered world” of the stasis mentality. She argued that we should “see technology as an expression of human creativity and the future as inviting” and reject the idea “that progress requires a central blueprint.” Dynamism defines progress as “a decentralized, evolutionary process” in which mistakes aren’t viewed as permanent disasters but instead as “the correctable by-products of experimentation.” (p. xiv)

Postrel argued that our dynamic modern world and the amazing technologies that drive it have united diverse “stasis”-minded forces in opposition to its continued, unfettered evolution:

[It] has united two types of stasists who would have once been bitter enemies: reactionaries, whose central value is stability, and technocrats, whose central value is control. Reactionaries seek to reverse change, restoring the literal or imagined past and holding it in place. . . . Technocrats, for their part, promise to manage change, centrally directing “progress” according to a predictable plan. . . . They do not celebrate the primitive or traditional. Rather, they worry about the government’s inability to control dynamism. (p. 7-8)

Preservationists vs. Modernizers – Robert D. Atkinson (2004)

Robert D. Atkinson, President, Information Technology and Innovation Foundation, presented another useful way of looking at innovation policy divides in his 2004 book, The Past and Future of America’s Economy. In Chapter 6 on “The New Economy and Its Discontents,” Atkinson noted how “American history is rife with resistance to change,” as he recounted some of the heated battles over previous industrial / technological revolutions. He argued:

This conflict between stability and progress, security and prosperity, dynamism and stasis, has led to the creation of a major political fault line in American politics. On one side are those who welcome the future and look at the New Economy as largely positive. On the other are those who resist change and see only the risks of new technologies and the New Economy.  As a result, a political divide is emerging between preservationists who want to hold onto the past and modernizers who recognize that new times require new means. (p. 201)

Precautionary Principle vs. Permissionless Innovation – Adam Thierer (2014)

In my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” I argued that the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? I argued that we are today witnessing a grand clash of visions between two competing mindsets about how that question should be answered for a wide variety of new inventions:

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

Fortress vs. Frontier – Robert Graboyes (2014)

In his new white paper, Fortress and Frontier in American Health Care, Robert Graboyes seeks to reframe the debate over the future of health care innovation in terms of “Fortress versus Frontier” and to highlight what lessons we can learn from the Internet and the Information Revolution that can better inform health care policy. Graboyes defines “Fortress and Frontier” as follows:

The Fortress is an institutional environment that aims to obviate risk and protect established producers (insiders) against competition from newcomers (outsiders). The Frontier, in contrast, tolerates risk and allows outsiders to compete against established insiders. . . .  The Fortress-Frontier divide does not correspond neatly with the more familiar partisan or ideological divides. Framing health care policy issues in this way opens the door for a more productive national health care discussion and for unconventional policy alliances. (p. 4)

He elaborates in more detail later in the paper:

the Frontier encourages creative destruction and disruptive innovation. Undreamed-of products arise and old, revered ones vanish. New production processes sweep away old ones. This is a place where unknown innovators in garages destroy titans of industry. The Frontier celebrates and rewards risk, and there is a brutal egalitarianism to the creative process. In contrast, the Fortress discourages creative destruction and disruptive innovation. Insiders are protected from competition by government or by private organizations (such as insurers and medical societies) acting in quasigovernmental fashion. In the Fortress, insiders preserve the existing order. Innovation comes from well-established, credentialed insiders who, it is presumed, have the wisdom and motives and competence to identify opportunities for innovation. (p. 13)

The Common Themes

There are several themes that unify these four frameworks. Most notably, they all seek to escape the traditional “Left vs. Right,” “Conservative vs. Liberal,” and “Democrat vs. Republican” labels and models. Postrel’s book noted that, although there are differences at the margin, “reactionaries” (who tend to be more politically and socially “conservative”) and “technocrats” (who tend to identify as politically “progressive”) are united by their desire for greater control over the pace and shape of technological innovation. They both hope that sagacious, noble-minded public officials can set us on a “better path,” or return us to an old path from which we have drifted.

Similarly, Atkinson’s “preservationists versus modernizers” dichotomy identified the “small-c” conservatism that animates the preservationist mindset, regardless of which party or political movement they belong to. Graboyes and I identify this same tendency of those with a precautionary, Fortress mindset to be deeply suspicious of change, and sometimes even quite openly hostile to it, regardless of their political affiliation. Moreover, all four authors note that, at a minimum, the Stasis/Preservationist/Fortress/Precautionary vision is unified by a general gloominess about the prospect for technological change to really better our economy or culture.

From a policy perspective, the competing visions outlined in each of these four paradigms are unified by their preferred policy default for new innovation. Generally speaking, those subscribing to the Dynamist/Modernizer/Frontier/Permissionless Innovation vision believe that innovators should have a clear green light to experiment without fear of prior restraint. By contrast, those adhering to the Stasis/Preservationist/Fortress/Precautionary vision are more risk-averse and tend to opt for “better to be safe than sorry” policy defaults.

Here’s a little table I put together to highlight the “conflict of visions” over innovation policy identified in these works.

Innovation Policy: The Conflict of Visions
“Stasis” | “Dynamism”
“Preservationists” | “Modernizers”
“Precautionary principle” | “Permissionless innovation”
“Fortress” | “Frontier”
progress should be carefully guided | progress should be free-wheeling
fear of risk & uncertainty | embrace of risk & uncertainty
stability/safety first | spontaneity first
equilibrium | experimentation
wisdom through better planning | wisdom through trial & error
anticipation & regulation | adaptation & resiliency
ex ante solutions | ex post solutions
“better to be safe than sorry” | “nothing ventured, nothing gained”

A Problem with These Paradigms

An astute reader will notice a potential problem with these four paradigms: They were crafted by people (including myself) who were much more favorably disposed to one vision than the other. In fact, each of the authors listed here (including me) firmly embraced a common “positive” or “optimistic” vision about the potential for innovation and technological change to generally boost human welfare. We were all writing defenses of visions that, generally speaking, encourage the adoption of attitudes and public policies that are generally welcoming toward new innovations. Postrel, for example, was seeking to articulate and defend the superiority of the dynamist vision over the stasis mentality. Atkinson defended modernizers and bashed preservationists. Graboyes embraced the Frontier mentality and warned of the dangers of the Fortress mentality. Finally, in my own work, I have vociferously defended the notion of permissionless innovation while repeatedly criticizing precautionary principle-based thinking.

I will proudly defend my own work as well as the visions sketched out by Postrel, Atkinson, and Graboyes, which are all very much in league with my own. Nonetheless, some readers or critics might claim that we have stacked the deck in our favor by framing innovation policy debates in the ways we have. We each had a polemical purpose in mind when writing these books; we were hoping to convince others to embrace our way of thinking about technological progress and the future. As a result, that influenced our choice of language and labels. Some critics might even claim that the words we chose to describe the alternative vision are too simplistic or unfairly derogatory. After all, who wants to be labeled a “stasis”-minded “preservationist” who is trapped in a “fortress” mentality advocating hopelessly “precautionary” policies?! By contrast, it is relatively easy for many of us to say we are “modernizers” who embrace “dynamism” and the “frontier” spirit in defense of “permissionless innovation.”

Technological critics have penned a wide variety of polemics making their views on these matters clear, but what is interesting is how few of them attempt to describe the opposing positions in clear detail, or even bother trying to label them. Nor do they usually bother labeling their own positions or perspectives. I suspect that many of them would claim their visions or critiques cannot be succinctly summarized in a mere word or phrase, and that trying to craft conflicting “visions” about innovation policy over-simplifies very complex matters. I actually appreciate that point more than you might think. When I am writing about these matters, I try not to over-generalize the very nuanced, sensitive issues in play here, such as the privacy, safety, and security implications associated with various new innovations. These are profound matters and they deserve to be analyzed carefully and respectfully.

That being said, I still believe that there is a role for visions when thinking about the past, the present, and the future of technological change. Labels and classifications can help us unpack the philosophical differences between different people and organizations and then also evaluate their preferred policy solutions. This allows us to better understand what animates the opposing forces that are pushing for specific policy changes.

Nonetheless, I welcome alternative framings of these proposals and the personalities behind them. Moreover, I would very much like to see others — either those who take opposing views, or analysts with no stake in the fight — suggest other ways of looking at the conflict of visions that animates debates over technological innovation and the future of progress.

A Note on Joel Garreau’s Framing

I want to close with a quick postscript related to my point about over-simplifying “visions” about technological change.  In 2010, I penned an essay that got a fair amount of attention entitled, “Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society.” As the title implied, it was an attempt to divide the history of thinking about technological innovation into two camps: “pessimists” and “optimists.” It was a crude and overly-simplistic dichotomy, but it was an attempt to begin sketching out a rough taxonomy of the personalities and perspectives that we often see pitted against each other in debates about the impact of technology on culture and humanity.

I was never really satisfied with the “optimist vs. pessimist” breakdown, and I got an earful from some people about it. I always thought there must be somebody who had figured out a better way of reviewing the long arc of history and human thinking about technological change and coming up with better labels or “visions.” And there was!

When I wrote that earlier piece, I was unfortunately not aware of a similar (and much better) framing of this divide that was developed by science and technology writer Joel Garreau in his outstanding 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human. In that book, Garreau is thinking in much grander terms about technology and the future than I was in my earlier essay. He was focused on how various emerging technologies might be changing our very humanity and he notes that narratives about these issues are typically framed in “Heaven” versus “Hell” scenarios.

Under the “Heaven” scenario, technology drives history relentlessly, and in almost every way for the better. As Garreau describes the beliefs of the Heaven crowd, they believe that going forward, “almost unimaginably good things are happening, including the conquering of disease and poverty, but also an increase in beauty, wisdom, love, truth, and peace.” (p. 130) By contrast, under the “Hell” scenario, “technology is used for extreme evil, threatening humanity with extinction.” (p. 95) Garreau notes that what unifies the Hell scenario theorists is the sense that in “wresting power from the gods and seeking to transcend the human condition,” we end up instead creating a monster — or maybe many different monsters — that threatens our very existence. Garreau says this “Frankenstein Principle” can be seen in countless works of literature and technological criticism throughout history, and it is still very much with us today. (p. 108)

After discussing the “Heaven” and “Hell” scenarios cast about by countless tech writers throughout history, Garreau outlined a third, and more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.” As Garreau explains it, under the “Prevail” scenario, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he rightly notes. (p. 154)

That pretty much sums up my own perspective on things, as I noted in this essay earlier this year, “Muddling Through: How We Learn to Cope with Technological Change.”  I think the “prevail” or “muddling through” notion offers the best explanation for how we learn to cope with technological disruption and prosper in the process. (I also wrote a lengthy law review article on this and discussed this issue more in my recent book.) In any event, I chose not to include Garreau’s framework in the above discussion because Garreau — a former reporter and editor at The Washington Post – tries to be somewhat more objective in discussing the various “Heaven” vs. “Hell” scenarios and the personalities behind them (even though in the concluding chapter he seems to be aligning himself with the “Prevail” crowd.) So, it doesn’t quite align perfectly with the more polemical visions I described above. But I continue to think it is the single best thing penned in recent years on the nature of these debates. I cannot recommend it strongly enough.

In closing, I want to reiterate that I would very much welcome suggestions from others about alternative framings and paradigms for thinking about the future of technological change and progress. I imagine I will spend the rest of my life researching and writing about these issues, so I’d love to get more input.  As you can tell, I find these debates terrifically interesting!

Robert Graboyes on What the Internet Can Teach Us about Health Care Innovation http://techliberation.com/2014/11/10/robert-graboyes-on-what-the-internet-can-teach-us-about-health-care-innovation/ http://techliberation.com/2014/11/10/robert-graboyes-on-what-the-internet-can-teach-us-about-health-care-innovation/#comments Mon, 10 Nov 2014 18:56:06 +0000 http://techliberation.com/?p=74900

I want to bring to everyone’s attention an important new white paper by Dr. Robert Graboyes, a colleague of mine at the Mercatus Center at George Mason University who specializes in the economics of health care. His new 67-page study, Fortress and Frontier in American Health Care, seeks to move away from the tired old dichotomies that drive health care policy discussions: Left versus Right, Democrat versus Republican, federal versus state, public versus private, and so on. Instead, Graboyes seeks to reframe the debate over the future of health care innovation in terms of “Fortress versus Frontier” and to highlight what lessons we can learn from the Internet and the Information Revolution when considering health care policy.

What does Graboyes mean by “Fortress and Frontier”? Here’s how he explains this conflict of visions:

The Fortress is an institutional environment that aims to obviate risk and protect established producers (insiders) against competition from newcomers (outsiders). The Frontier, in contrast, tolerates risk and allows outsiders to compete against established insiders. . . .  The Fortress-Frontier divide does not correspond neatly with the more familiar partisan or ideological divides. Framing health care policy issues in this way opens the door for a more productive national health care discussion and for unconventional policy alliances. (p. 4)

He elaborates in more detail later in the paper:

the Frontier encourages creative destruction and disruptive innovation. Undreamed-of products arise and old, revered ones vanish. New production processes sweep away old ones. This is a place where unknown innovators in garages destroy titans of industry. The Frontier celebrates and rewards risk, and there is a brutal egalitarianism to the creative process.

In contrast, the Fortress discourages creative destruction and disruptive innovation. Insiders are protected from competition by government or by private organizations (such as insurers and medical societies) acting in quasigovernmental fashion. In the Fortress, insiders preserve the existing order. Innovation comes from well-established, credentialed insiders who, it is presumed, have the wisdom and motives and competence to identify opportunities for innovation.

In framing the debate in this fashion, Graboyes hopes that we will start paying more attention to the supply side of health care policy debates:

The debate over coverage (and over related issues concerning how health care providers are paid) has focused attention almost exclusively on the demand side of health care markets—who pays how much to whom for which currently offered services. The debate underplays questions of supply—how innovation can alter the very nature of the health care delivery system. (p. 3-4)

This is where Graboyes brings the Internet and information technology into the story to illustrate a powerful point: We could unlock many important life-enriching and potentially life-saving innovations by embracing the same vision we applied to the Internet and IT sectors. Graboyes is kind enough to cite my work on permissionless innovation and the importance of not letting public policy be dictated by excessive fear of worst-case scenarios regarding new technological innovations. As I noted in my book on the topic, “living in constant fear of worst-case scenarios—and premising public policy upon them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

Had fear of potential worst-case outcomes driven policy for the Net, we might have never seen many of the life-enriching innovations that we enjoy today, as Graboyes explains eloquently in this passage:

Knowing what we know today, it would not be hard to persuade a cautious observer in 1989 to radically slow the pace of IT innovation. IT arguably poses personal risks as grave as those that health care poses. Cell phones have been essential components of improvised explosive devices in war zones. The 9/11 atrocities would have been difficult or impossible to carry out without cell phones. Thieves have used the Internet to steal. Stalkers have used the Internet to terrify their prey. Child predators find their victims on the web. People have been murdered by strangers they met in chatrooms. IT has allowed individuals and governments to violate others’ privacy in countless ways. Drug dealers and terrorist networks organize their efforts via cell phone and Internet. The Internet has greatly reduced the cost of destroying another’s reputation, and news accounts tell of suicides following cyberbullying.

Our laws demand terribly high standards of safety and efficacy for drugs. We require no such standards for computers, cell phones, and software, but given the nefarious uses to which they are sometimes put, decades ago one could easily have argued for doing so. Had we done so, we would now be living in a much poorer, less interesting world—and perhaps one with even greater risks to life and limb than we have now. No online predators or improvised explosive devices, but also no OnStar to save you after an automobile crash or smartphone to alert police to your life-threatening situation and geographic location. (p. 41)

In other words–and this is another lesson I stress at length in my work–precautionary policies create profound trade-offs that are not always well understood upon enactment of new laws or regulations. As I noted in my book, “When commercial uses of an important resource or technology are arbitrarily prohibited or curtailed, the opportunity costs of such exclusion may not always be immediately evident. Nonetheless, those ‘unseen’ effects are very real and have profound consequences for individuals, the economy, and society.”

What Graboyes does so well in his new paper is prove that these trade-offs are already at work in the American health care system and that we had better get serious about acknowledging them before real damage is done. And what makes Fortress and Frontier such an enjoyable read is that Graboyes is a gifted story-teller who explains in clear terms how expanded health care innovation opportunities could improve the lives of real people. It’s not just abstract, textbook talk. We hear stories of real-world innovators and the patients who need their inventions. For example, Graboyes tells of “an unheralded doctor who pioneered stem-cell therapy in a small-town hospital, a carpenter and puppet-maker who invented functional prosthetic hands costing one-thousandth the price of professionally made devices (aided by an evolutionary biologist who started a worldwide consortium of amateur prosthetists), and college students who devised a low-cost treatment for clubfoot.” (p. 4) And much, much more.

“The most important thing to understand about disruptive innovation is that it often comes (perhaps usually comes) from strange and unexpected places,” Graboyes notes. (p. 20) “[A] shift from Fortress to Frontier would benefit the health and finances of Americans,” he argues, and “the task begins by easing limits on the supply of health care services, thereby clearing the way for innovators to take health care in directions we cannot yet imagine.” (p. 39)

Importantly, Graboyes also offers another reason why America should embrace the “frontier” spirit: Our global competitive advantage in this space is at risk if we don’t:

Moving health care from the Fortress to the Frontier may be more a matter of necessity than of choice. We are entering a period of rapid technological advances that will radically alter health care. Many of these advances require only modest capital and labor inputs that governments cannot easily control or prohibit. If US law obstructs these technologies here, it will be feasible for Americans to obtain them by Internet, by mail, or by travel. (p. 41-2)

He highlights several areas in which this debate will play out going forward including (and notice the intersection with the modern digital technologies and tech policy debates we often discuss here): genomic knowledge and personalized medicine, 3-D printing, artificial intelligence, information sharing via social media, wearable technology, and telemedicine.

To make sure that America can capitalize on the same innovative spirit that gave us the Information Revolution, Graboyes concludes his study with a laundry list of needed policy reforms. These include:

  • reform the FDA drug and device approval process to expedite reviews.
  • ensure that Americans have a “right to know” about themselves and their health (i.e., that individuals have a right to possess their own genetic information and to receive information about how to interpret the results.)
  • abolish state certificate-of-need laws, which unnecessarily “require that hospital developers obtain government permission before building a new facility, or expanding an existing one, or even adding a specific piece of medical equipment.”
  • reform state-based licensing laws, which “put barriers in the way of doctors moving from other states” and create physician shortages. States also need to reform their laws to allow nurse practitioners, optometrists, and others to practice independently of physicians.
  • reform tort law by capping noneconomic damages, instituting a “loser pays” rule to discourage frivolous lawsuits, establishing safe harbors for vaccine developers, and more.
  • revise tax laws to make sure medical devices are not hit with discriminatory tax burdens that discourage innovation, and revise other taxes that skew incentives in the health insurance marketplace.

Graboyes itemizes dozens of other potential reforms to give policymakers a smorgasbord of options from which to choose. It is unlikely that all the reforms he lists will be adopted, but even if policymakers would just pick a few of those proposed action items, it could provide a real boost to medical innovation in the short term. Importantly, most of these proposed reforms could be implemented without stirring up contentious debate over the future of the Affordable Care Act (ACA).

Needless to say, I highly recommend Fortress and Frontier and I very much hope that the vision that Graboyes articulates in it comes to influence public thinking and future policymaking in the health care arena. In a follow-up post, I will also discuss how Fortress versus Frontier provides us with another “innovation paradigm” that can help us frame future innovation policy debates in many other contexts.
