Technology Liberation Front | http://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

Autonomous Vehicles Under Attack: Cyber Dashboard Standards and Class Action Lawsuits
Sat, 14 Mar 2015 13:06:08 +0000
http://techliberation.com/2015/03/14/autonomous-vehicles-under-attack-cyber-dashboard-standards-and-class-action-lawsuits/

In a recent Senate Commerce Committee hearing on the Internet of Things, Senators Ed Markey (D-Mass.) and Richard Blumenthal (D-Conn.) “announced legislation that would direct the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) to establish federal standards to secure our cars and protect drivers’ privacy.” Spurred by a recent report from his office (Tracking and Hacking: Security and Privacy Gaps Put American Drivers at Risk), Markey argued that Americans “need the equivalent of seat belts and airbags to keep drivers and their information safe in the 21st century.”

Among the many conclusions reached in the report, it says, “nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.” This comes across as a tad tautological given that everything from smartphones and computers to large-scale power grids is prone to being hacked, yet the Markey-Blumenthal proposal would enforce a separate set of government-approved and regulated standards for privacy and security, displayed on every vehicle in the form of a “Cyber Dashboard” decal.

Leaving aside the irony of legislators attempting to dictate privacy standards, especially in the post-Snowden world, it would behoove legislators like Markey and Blumenthal to take a closer look at just what it is they are proposing and ask whether such a law is indeed necessary to protect consumers. For security in particular, there may be concerns that require redress, but if one looks at the report, it becomes apparent that it lacks a very important feature: it cites no specific examples of real car hacking. The only examples in the report are described only briefly:

An application was developed by a third party and released for Android devices that could integrate with a vehicle through the Bluetooth connection. A security analysis did not indicate any ability to introduce malicious code or steal data, but the manufacturer had the app removed from the Google Play store as a precautionary measure.

Great! The company solved the problem. What about the other instance cited in the report?

Some individuals have attempted to reprogram the onboard computers of vehicles to increase engine horsepower or torque through the use of “performance chips”. Some of these devices plug into the mandated onboard diagnostic port or directly into the under-the-hood electronics system.

So the only two examples of “car hacking” described in the Markey report are essentially duds. The first is a non-issue, since the company (1) determined there was little security risk involved and (2) removed the item from the market anyway, just to be sure. The second is, in a sense, hacking, but it is individual car owners doing it to their own cars. Neither of these cases appears to be sufficient grounds for imposing a set of arbitrary and, in many cases, capriciously anti-innovation approaches to privacy and data security in cars.

In the wake of the report’s release, this past Tuesday, March 10, General Motors, Toyota, and Ford were all hit with a nationwide class action lawsuit, alleging that the companies concealed “dangers posed by a lack of electronic security in a vast swath of vehicles.” Specifically, the lawsuit is aimed at the presence of controller area network (CAN) buses, which act as data hubs between the various electronic systems in a car. These systems are, indeed, susceptible to hacking, but no more than any personal computer that is connected to the Internet.
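To illustrate why the CAN bus gets singled out: CAN is a broadcast protocol in which every frame carries an arbitration ID and up to eight data bytes, but no sender authentication, so any node that reaches the bus can transmit any message. The sketch below is a simplified stand-in for the classic CAN 2.0A wire format (the specific ID and payload are hypothetical, invented for illustration):

```python
import struct

def parse_can_frame(raw: bytes) -> dict:
    """Parse a simplified CAN frame: a 2-byte arbitration ID, a 1-byte
    data length code (DLC), then up to 8 data bytes. Note what is
    absent: there is no sender field and no signature, so any node on
    the bus could have produced this frame."""
    arb_id, dlc = struct.unpack_from(">HB", raw, 0)
    return {"id": arb_id & 0x7FF, "data": raw[3:3 + dlc]}

# A hypothetical frame: 11-bit ID 0x244 carrying 4 data bytes.
frame = struct.pack(">HB", 0x244, 4) + bytes([0x00, 0x1A, 0x00, 0x1B])
parsed = parse_can_frame(frame)
print(hex(parsed["id"]), parsed["data"].hex())  # → 0x244 001a001b
```

The design choice at issue in the lawsuit is visible in the parser itself: frames are trusted purely on their arbitration ID, which is exactly the property that makes bus access, once obtained, so powerful. That is the same trust model most local computer buses use.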

The trouble with this lawsuit, brought by the Stanley Law Group, is that it has not cited any specific harms that have occurred as a result of this “defect” (as a side note, saying a computer being susceptible to hacking constitutes a defect in design is the equivalent of saying an airplane that is susceptible to lightning strikes is fundamentally defective). Rather, the plaintiffs argue that “[w]e shouldn’t need to wait for a hacker or terrorist to prove exactly how dangerous this is before requiring car makers to fix the defect.”

As Adam Thierer and I pointed out in our 2014 paper, Removing Roadblocks to Intelligent Vehicles and Driverless Cars:

Manufacturers have powerful reputational incentives at stake here, which will encourage them to continuously improve the security of their systems. Companies like Chrysler and Ford are already looking into improving their telematics systems to better compartmentalize the ability of hackers to gain access to a car’s controller-area-network bus. Engineers are also working to solve security vulnerabilities by utilizing two-way data-verification schemes (the same systems at work when purchasing items online with a credit card), routing software installs and updates through remote servers to check and double-check for malware, adopting routine security protocols like encrypting files with digital signatures, and other experimental treatments. (pg. 40-41)
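The signed-update practice mentioned in that passage can be sketched in a few lines. This is a minimal illustration, not any manufacturer's actual scheme: it uses Python's standard-library HMAC as a stand-in for a true asymmetric signature (a real over-the-air update system would use public-key signatures such as RSA or Ed25519 so the verification key need not be secret), and the key and firmware bytes are invented:

```python
import hashlib
import hmac

def sign_update(key: bytes, firmware: bytes) -> bytes:
    """Producer side: tag the firmware image with HMAC-SHA256."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify_update(key: bytes, firmware: bytes, tag: bytes) -> bool:
    """Installer side: recompute the tag and compare in constant time.
    The install proceeds only if the image is byte-for-byte what the
    signer produced."""
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

signing_key = b"hypothetical-vendor-key"      # illustration only
firmware = b"telematics build 42"             # stand-in image bytes
tag = sign_update(signing_key, firmware)

assert verify_update(signing_key, firmware, tag)                # untouched image: accepted
assert not verify_update(signing_key, firmware + b"\x00", tag)  # tampered image: rejected
```

The point of the sketch is the asymmetry it creates: an attacker who can push bytes at the update channel still cannot produce an image the installer will accept without the signing key.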

It’s always easy to see the potential for abuse and harm with any new emerging technology, but optimism and fortitude in the face of the uncertain are what help society, and individuals, grow and progress. Car hacking, while certainly a viable concern, is not so ubiquitous that it necessitates a heavy-handed regulatory approach. Rather, we should permit various standards to emerge and attempt to deal with possible harms. In this way, we can experiment to properly determine which approaches work and which do not. Federal standards imposed from on high assume that firms and individuals are not capable of working through these murky issues. We should be a bit more optimistic about the human capacity for ingenuity and adaptability.

To end on something of a more optimistic note, Tom Vanderbilt of Wired magazine gives keen insight into the reality of regulating based on hypothetical scenarios:

Every scenario you can spin out of computer error – what if the car drives the wrong way – already exists in analog form, in abundance. Yes, computer-guidance systems and the rest will require advances in technology, not to mention redundancy and higher standards of performance, but at least these are all feasible, and capable of quantifiable improvement. On the other hand, we’ll always have lousy drivers.

Bipartisan Internet of Things Resolution Introduced in Senate
Wed, 04 Mar 2015 21:08:24 +0000
http://techliberation.com/2015/03/04/bipartisan-internet-of-things-resolution-introduced-in-senate/

A new bipartisan “sense of the Senate” resolution was introduced today calling for “a national strategy for the Internet of Things to promote economic growth and consumer empowerment.” [PDF is here.] The resolution was cosponsored by U.S. Senators Deb Fischer (R-Neb.), Cory A. Booker (D-N.J.), Kelly Ayotte (R-N.H.), and Brian Schatz (D-Hawaii), who are all members of the Senate Commerce Committee, which oversees these issues. Just last month, on February 11th, the full Commerce Committee held a hearing titled “The Connected World: Examining the Internet of Things,” which examined the policy issues surrounding this exciting new space.

[Update: The U.S. Senate unanimously approved the resolution on the evening of March 24th, 2015.]

The new Senate resolution begins by stressing the many current or potential benefits associated with the Internet of Things (IoT), which, it notes, “currently connects tens of billions of devices worldwide and has the potential to generate trillions of dollars in economic opportunity.” It continues on to note how average consumers will benefit because “increased connectivity can empower consumers in nearly every aspect of [our] daily lives, including in the fields of agriculture, education, energy, healthcare, public safety, security, and transportation, to name just a few.” The resolution also discusses the commercial benefits, noting, “businesses across our economy can simplify logistics, cut costs in supply chains, and pass savings on to consumers because of the Internet of Things and innovations derived from it.” More generally, the Senators argue “the United States should strive to be a world leader in smart cities and smart infrastructure to ensure its citizens and businesses, in both rural and urban parts of the country, have access to the safest and most resilient communities in the world.”

In light of those amazing potential benefits, the resolution continues on to argue that while “the United States is the world leader in developing the Internet of Things technology,” an even more focused and dedicated policy vision is needed to promote continued success. “[W]ith a national strategy guiding both public and private entities,” it argues, “the United States will continue to produce breakthrough technologies and lead the world in innovation.” 

Toward that end, the resolution says that it is the sense of the Senate that:

(1) the United States should develop a national strategy to incentivize the development of the Internet of Things in a way that maximizes the promise connected technologies hold to empower consumers, foster future economic growth, and improve our collective social well-being;

(2) the United States should prioritize accelerating the development and deployment of the Internet of Things in a way that recognizes its benefits, allows for future innovation, and responsibly protects against misuse;

(3) the United States should recognize the importance of consensus-based best practices and communication among stakeholders, with the understanding that businesses can play an important role in the future development of the Internet of Things;

(4) the United States Government should commit itself to using the Internet of Things to improve its efficiency and effectiveness and cut waste, fraud, and abuse whenever possible; and,

(5) using the Internet of Things, innovators in the United States should commit to improving the quality of life for future generations by developing safe, new technologies aimed at tackling the most challenging societal issues facing the world.

This is a pretty solid statement from this group of Senators, who appear committed to advancing a pro-innovation, pro-growth approach to the emerging Internet of Things universe of technologies. This is exciting because it reflects the strong bipartisan approach American policymakers adopted two decades ago for the Internet more generally. America’s unified, “light-touch” Internet policy vision worked wonders for consumers and our economy before, and it can happen again thanks to a vision like the one these four Senators floated today.

As I explained in more detail when I testified at the February 11th Senate Commerce hearing on IoT issues:

America took a commanding lead in the digital economy because, in the mid-1990s, Congress and the Clinton administration crafted a nonpartisan vision for the Internet that protected “permissionless innovation” — the idea that experimentation with new technologies and business models should generally be permitted without prior approval. Congress embraced permissionless innovation by passing the Telecommunications Act of 1996 and rejecting archaic Analog Era command-and-control regulations for this exciting new medium. The Clinton administration embraced permissionless innovation with its 1997 “Framework for Global Electronic Commerce,” which outlined a clear vision for Internet governance that relied on civil society, voluntary agreements, and ongoing marketplace experimentation. This nonpartisan blueprint sketched out almost two decades ago for the Internet is every bit as sensible today as we begin crafting a policy paradigm for the Internet of Things.

I view this new Senate resolution on the Internet of Things as an effort to freshen up and extend that original vision that lawmakers crafted for the Internet back in the mid-1990s.  As I documented in my recent essay, “Why Permissionless Innovation Matters,” that vision has worked wonders for American consumers and our modern economy. Meanwhile, our international rivals languished on this front because they strapped their tech sectors with layers of regulatory red tape that thwarted digital innovation.

We got policy right once before in the United States, and we can get it right again with a policy vision like that found in this new Senate resolution for the Internet of Things.

Initial Thoughts on Obama Administration’s “Privacy Bill of Rights” Proposal
Fri, 27 Feb 2015 21:28:30 +0000
http://techliberation.com/2015/02/27/initial-thoughts-on-obama-administrations-privacy-bill-of-rights-proposal/

The Obama Administration has just released a draft “Consumer Privacy Bill of Rights Act of 2015.” Generally speaking, the bill aims to translate fair information practice principles (FIPPs) — which have traditionally been flexible and voluntary guidelines — into a formal set of industry best practices that would be federally enforced on private sector digital innovators. This includes federally-mandated Privacy Review Boards, approved by the Federal Trade Commission, the agency that will be primarily responsible for enforcing the new regulatory regime.

Many of the principles found in the Administration’s draft proposal are quite sensible as best practices, but the danger here is that they could soon be converted into a heavy-handed, bureaucratized regulatory regime for America’s highly innovative, data-driven economy.

No matter how well-intentioned this proposal may be, it is vital to recognize that restrictions on data collection could negatively impact innovation, consumer choice, and the competitiveness of America’s digital economy.

Online privacy and security are vitally important, but we should look to alternative and less costly approaches to protecting them that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting long-term privacy protection requires a layered, multifaceted approach incorporating many solutions.

That is why flexible data collection and use policies and evolving best practices will ultimately serve consumers better than one-size-fits-all, top-down regulatory edicts. Instead of imposing these FIPPs in a rigid regulatory fashion, privacy and security best practices will need to evolve gradually to new marketplace realities and be applied in a more organic and flexible fashion, often outside the realm of public policy.

Regulatory approaches, like the Obama Administration’s latest proposal, will instead impose significant costs on consumers and the economy. Data is the fuel that powers our information economy. Privacy-related mandates that curtail the use of data to better target or personalize new services could raise costs for consumers. There is no free lunch. Something has to pay for all the wonderful free sites and services we enjoy today. If data can’t be used to cross-subsidize those services, prices will go up.

Data regulations could also indirectly cost consumers by diminishing the abundance of content and culture now supported by the data-driven economy. In other words, even if prices and paywalls don’t go up, quantity or quality could suffer if data collection is restricted.

Data regulations could also hurt the competitiveness of domestic markets and the global competitive advantage that America’s tech sector has in this space. That regulatory burden would fall hardest on smaller operators and new start-ups. Today’s “app economy” has given countless small innovators a chance to compete on even footing with the biggest players. Burdensome data collection restrictions could short-circuit the engine that drives entrepreneurial innovation among mom-and-pop companies if ad dollars get consolidated in the hands of only the larger companies that can afford to comply with new rules.

We don’t want to go down the path the European Union charted in the 1990s with heavy-handed data directives. That suffocated high-tech entrepreneurialism and innovation there. America’s Internet sector came to be the envy of the world because our more flexible, light-touch regulatory regime leaves more breathing room for competition and innovation compared to Europe’s top-down regime. We should not abandon that approach now.

Finally, the Obama Administration’s proposal deals exclusively with private sector data collection and has nothing to say about government surveillance activities. The Administration would be wise to channel its energies into that far more significant privacy problem first.

Mercatus Center Scholars’ Contributions to Cybersecurity Research
Mon, 23 Feb 2015 16:46:00 +0000
http://techliberation.com/2015/02/23/mercatus-center-scholars-contributions-to-cybersecurity-research/

by Adam Thierer & Andrea Castillo

Cybersecurity policy is a big issue this year, so we thought it would be worth reminding folks of some contributions to the literature made by Mercatus Center-affiliated scholars in recent years. Our research, which can be found here, can be condensed to these five core points:

1)         Institutions, societies, and economies are more resilient than we give them credit for and can deal with adversity, even cybersecurity threats.

See: Sean Lawson, “Beyond Cyber-Doom: Assessing the Limits of Hypothetical Scenarios in the Framing of Cyber-Threats,” December 19, 2012.

2)         Companies and organizations have a vested interest in finding creative solutions to these problems through ongoing experimentation, and they are pursuing them with great vigor.

See: Eli Dourado, “Internet Security Without Law: How Service Providers Create Order Online,” June 19, 2012.

3)         Over-arching, top-down “cybersecurity frameworks” threaten to undermine dynamism in cybersecurity and Internet governance, and could promote rent-seeking and corruption. Instead, the government should foster continued dynamic cybersecurity efforts through the development of a robust private-sector cybersecurity insurance market.

See: Eli Dourado and Andrea Castillo, “Why the Cybersecurity Framework Will Make Us Less Secure,” April 17, 2014.

4)         The language used to describe cybersecurity threats sometimes borders on “techno-panic” rhetoric that is based on “threat inflation.”

See the Lawson paper already cited as well as: Jerry Brito & Tate Watkins, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” April 10, 2012; and Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” January 25, 2013.

5)         Finally, taking these other points into account, our scholars have concluded that academics and policymakers should be very cautious about how they define “market failure” in the cybersecurity context. Moreover, to the extent they propose new regulatory controls to address perceived problems, those rules should be subjected to rigorous benefit-cost analysis.

See: Eli Dourado, “Is There a Cybersecurity Market Failure?” January 23, 2012.

 

Developing cybersecurity policies—like the White House’s “Securing Cyberspace” proposal and the Senate Intelligence Committee’s risen-from-the-grave Cybersecurity Information Sharing Act (CISA) of 2015—prioritize government-led “information-sharing” among federal agencies and private organizations as a one-stop technocratic solution to the dynamic problem of cybersecurity provision. But, as Eli and Andrea pointed out in a Mercatus chart series from this year, the federal government’s own success with internal information-sharing policies has been abysmal for decades.

The Federal Information Security Management Act of 2002 compelled federal investment in IT security infrastructure, along with internal information-sharing of system breaches and proactive responses among agencies. Apparently, this has not worked like a charm. The chart shows that reported federal breaches have risen by over 1,000% since 2006, despite billions of dollars spent on agency systems and information-sharing capabilities over the same period.

Many of the same agencies that would be imbued with power to coordinate information-sharing among private and government entities through CISA and other cybersecurity proposals were responsible for coordinating threat-sharing at the federal level: the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), and the Department of Homeland Security (DHS). Are we to believe these bodies will become magically efficient once they have more power to cajole the private sector?

Government Accountability Office (GAO) reports analyzing the failure of federal information security practices and threat coordination find that technocratic solutions that look perfectly rational and controlled on paper break down when imposed from above on employees who have no buy-in. One report concludes, “As we and inspectors general have long pointed out, federal agencies continue to face challenges in effectively implementing all elements of their information security programs.” Repeating the same failed policies in the private sector is unlikely to result in success.

Cybersecurity provision is too important an issue to be left to brittle, technocratic policies with proven track records of failure. Rather, good cybersecurity policy will be grounded in an understanding of the incentives and norms that have allowed the Internet to develop and thrive as the system it is today, and will target specific sources of failure.

Industry analyses find again and again that with cybersecurity, the problem exists between chair and keyboard—“human error,” not insufficient government meddling, is responsible for the vast majority of cyber incidents. Introducing more error-prone humans to the equation, as government cybersecurity plans seek to do, will only complicate the problem while neglecting the underlying factors that need addressing.

Cybersecurity will be an issue we continue to cover closely at the Mercatus Center Technology Policy Program.

Initial Thoughts on New FAA Drone Rules
Mon, 16 Feb 2015 20:08:55 +0000
http://techliberation.com/2015/02/16/initial-thoughts-on-new-faa-drone-rules/

Yesterday afternoon, the Federal Aviation Administration (FAA) finally released its much-delayed rules for private drone operations. As The Wall Street Journal points out, the rules “are about four years behind schedule,” but now the agency is asking for expedited public comments over the next 60 days on the whopping 200-page order. (You have to love the irony in that!) I’m still going through all the details in the FAA’s new order (here’s a summary of the major provisions), but here are some high-level thoughts about what the agency has proposed.

Opening the Skies…

  • The good news is that, after a long delay, the FAA is finally taking some baby steps toward freeing up the market for private drone operations.
  • Innovators will no longer have to operate entirely outside the law in a sort of drone black market. There’s now a path to legal operation. Specifically, operators of small unmanned aircraft systems (UAS), meaning drones under 55 lbs., will be able to go through a formal certification process and, after passing a test, be permitted to operate their systems.

… but Not Without Some Serious Constraints

  • The problem is that the rules only open the skies incrementally for drone innovation.
  • You can’t read through these 200 pages of regulations without getting the sense that the FAA still wishes that private drones would just go away.
  • For example, the FAA still wants to keep a bit of a leash on drones by (1) limiting them to daylight-only flights (2) that remain within the visual line of sight of the operator at all times. And (3) the agency also says that drones cannot be flown over people.
  • Those three limitations will hinder some obvious innovations, such as same-day drone delivery for small packages, which Amazon has suggested it is interested in pursuing. (Amazon isn’t happy about these restrictions.)

Impact on Small Innovators?

  • But what I worry about more are all the small ‘Mom-and-Pop’ drone entrepreneurs who want to use airspace as a platform for open, creative innovation. These folks are out there, but they don’t have the name recognition or the resources to weather these restrictions the way that Amazon can. After all, if Amazon has to abandon same-day drone delivery because of the FAA rules, the company will still have a thriving commercial operation to fall back on. But all those small, nameless drone innovators currently experimenting with new, unforeseeable innovations may not be so lucky.
  • As a result, there’s a real threat here of drone entrepreneurs bolting the U.S. and offering their services in more hospitable environments if the FAA doesn’t take a more flexible approach.
  • [For more discussion of this problem, see my recent essay on “global innovation arbitrage.”]

Impact on News-Gathering?

  • It’s also worth asking how these rules might limit legitimate news-gathering operations by both journalistic enterprises and average citizens. If we can never fly a drone over a crowd of people, as the rules stipulate, that places some rather serious constraints on our ability to capture real-time images and video from events of societal importance (such as political protests or even just major events like sporting events or concerts).
  • [For more discussion about this, see this September 2014 Mercatus Center working paper, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones.”]

Still Time to Reconsider More Flexible Rules

  • Of course, these aren’t final rules and the agency still has time to relax some of these restrictions to free the skies for less fettered private drone operation.
  • I suspect that drone innovators will protest the three specific limitations I identified above and ask for a more flexible approach to enforcing those rules.
  • But it’s good that the FAA has finally taken the first step toward decriminalizing private drone operations in the United States.

What Cory Booker Gets about Innovation Policy
Mon, 16 Feb 2015 15:32:43 +0000
http://techliberation.com/2015/02/16/what-cory-booker-gets-about-innovation-policy/

Last Wednesday, it was my great pleasure to testify at a Senate Commerce Committee hearing entitled, “The Connected World: Examining the Internet of Things.” The hearing focused “on how devices… will be made smarter and more dynamic through Internet technologies. Government agencies like the Federal Trade Commission, however, are already considering possible changes to the law that could have the unintended consequence of slowing innovation.”

But the session went well beyond the Internet of Things and became a much more wide-ranging discussion about how America can maintain its global leadership for the next-generation of Internet-enabled, data-driven innovation. On both sides of the aisle at last week’s hearing, one Senator after another made impassioned remarks about the enormous innovation opportunities that were out there. While doing so, they highlighted not just the opportunities emanating out of the IoT and wearable device space, but also many other areas, such as connected cars, commercial drones, and next-generation spectrum.

I was impressed by the energy and nonpartisan vision that the Senators brought to these issues, but I wanted to single out the passionate statement that Sen. Cory Booker (D-NJ) delivered when it came his turn to speak because he very eloquently articulated what’s at stake in the battle for global innovation supremacy in the modern economy. (Sen. Booker’s remarks were not published, but you can watch them starting at the 1:34:00 mark of the hearing video.)

Embrace the Opportunity

First, Sen. Booker stressed the enormous opportunity with the Internet of Things. “This is a phenomenal opportunity for a bipartisan, profoundly patriotic approach to an issue that can explode our economy. I think that there are trillions of dollars, creating countless jobs, improving quality of life, [and] democratizing our society,” he said. “We can’t even imagine the future that this portends of, and we should be embracing that.”

Sen. Booker has it exactly right. And for more details about the enormous innovation opportunities associated with the Internet of Things, see Section 2 of my new law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” which provides concrete evidence.

Protect America’s Competitive Advantage in the Innovation Age

Second, Sen. Booker highlighted the importance of getting our policy vision right to achieve those opportunities. He noted that “a lot of my concerns are what my Republican colleagues also echoed, which is we should be doing everything possible to encourage this and nothing to restrict it.”

“America right now is the net exporter of technology and innovation in the globe, and we can’t lose that advantage,” he said, and “we should continue to be the global innovators on these areas.” He continued on to say:

And so, from copyright issues, security issues, privacy issues… all of these things are worthy of us wrestling and grappling with, but to me we cannot stop human innovation and we can’t give advantages in human innovation to other nations that we don’t have. America should continue to lead.

This is something I have been writing actively about now for many years and I agree with Sen. Booker that America needs to get our policy vision right to ensure we don’t lose ground in the international competition to see who will lead the next wave of Internet-enabled innovation. As I noted in my testimony, “If America hopes to be a global leader in the Internet of Things, as it has been for the Internet more generally over the past two decades, then we first have to get public policy right. America took a commanding lead in the digital economy because, in the mid-1990s, Congress and the Clinton administration crafted a nonpartisan vision for the Internet that protected “permissionless innovation”—the idea that experimentation with new technologies and business models should generally be permitted without prior approval.”

Meanwhile, as I documented in my longer essay, “Why Permissionless Innovation Matters: Why does economic growth occur in some societies & not in others?” our international rivals languished on this front because they strapped their tech sectors with layers of regulatory red tape that thwarted digital innovation.

Reject Fear-Based Policymaking

Third, and perhaps most importantly, Sen. Booker stressed how essential it was that we reject a fear-based approach to public policymaking. As he noted at the hearing about these new information technologies, “there’s a lot of legitimate fears, but in the same way of every technological era, there must have been incredible fears.”

He cited, for example, the rise of air travel and the onset of humans taking flight. Sen. Booker correctly noted that while that must have been quite jarring at first, we quickly came to realize the benefits of that new innovation. The same will be true for new technologies such as the Internet of Things, connected cars, and private drones, Booker argued. In each case, some early fears about these technologies could lead to an overly precautionary approach to policy. “But for us to do anything to inhibit that leap in humanity to me seems unfortunate,” he said.

Once again, the Senator has it exactly right. As I noted in my law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as my recent essay, “Muddling Through: How We Learn to Cope with Technological Change,” humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. More often than not, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Booker gets that and understands why we need to be patient to allow that process to unfold once again so that we can enjoy the abundance of riches that will accompany a more innovative economy.

Avoiding Global Innovation Arbitrage

Sen. Booker also highlighted how some existing government legal and regulatory barriers could hold back progress. On the wireless spectrum front he noted that “the government hoards too much spectrum and there is a need for more spectrum out there. Everything we are talking about,” he argued, “is going to necessitate more spectrum.” Again, 100% correct. Although some spectrum reform proposals (licensed vs. unlicensed, for example) will still prove contentious, we can at least all agree that we have to work together to find ways to open up more spectrum since the coming Internet of Things universe of technologies is going to demand lots of it.

Booker also noted that another area where fear undermines American leadership is the issue of private drone use. He noted that, “the potential possibilities for drone technology to alleviate burdens on our infrastructure, to empower commerce, innovation, jobs… to really open up unlimited opportunities in this country is pretty incredible to me.”

The problem is that existing government policies, enforced by the Federal Aviation Administration (FAA), have been holding back progress. And that has had consequences in terms of global competitiveness. “As I watch our government go slow in promulgating rules holding back American innovation,” Booker said, “what happened as a result of that is that innovation has spread to other countries that don’t have these rules (or have) put in place sensible regulations. But now we [are] seeing technology exported from America and going other places.”

Correct again! I wrote about this problem in a recent essay on “global innovation arbitrage,” in which I noted how “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.”

That’s already happening with drone innovation, as I documented in that piece. Evidence suggests that the FAA’s heavy-handed and overly-precautionary approach to drones has encouraged some innovators to flock overseas in search of a more hospitable regulatory environment.

Luckily, just this weekend, the FAA finally announced its (much-delayed) rules for private drone operations. (Here’s a summary of those rules.) Unfortunately, the rules are a bit of a mixed bag: they provide greater leeway for very small drones, but they remain too restrictive to allow other innovative applications, such as widespread drone delivery (which has Amazon, among others, angry).

Bottom line: if our government doesn’t take a more flexible, light-touch approach to these and other cutting-edge technologies, then some of our most creative minds and companies are going to bolt.

I dealt with all of these innovation policy issues in far more detail in my latest little book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, which I condensed further still into this essay on “Embracing a Culture of Permissionless Innovation.” But Sen. Booker has offered us an even more concise explanation of just what’s at stake in the battle for innovation leadership in the modern economy. His remarks point the way forward and illustrate, as I have noted before, that innovation policy can and should be a nonpartisan issue.

 

____________________________

Additional Reading

 

My Testimony for Senate Internet of Things Hearing http://techliberation.com/2015/02/11/my-testimony-for-senate-internet-of-things-hearing/ http://techliberation.com/2015/02/11/my-testimony-for-senate-internet-of-things-hearing/#comments Wed, 11 Feb 2015 14:31:34 +0000 http://techliberation.com/?p=75444

This morning at 9:45, the Senate Committee on Commerce, Science, and Transportation is holding a full committee hearing entitled, “The Connected World: Examining the Internet of Things.” According to the Committee press release, the hearing “will focus on how devices — from home heating systems controlled by users online, to wearable devices that track health and activity with the help of Internet-based analytics — will be made smarter and more dynamic through Internet technologies. Government agencies like the Federal Trade Commission, however, are already considering possible changes to the law that could have the unintended consequence of slowing innovation.”

It is my pleasure to have been invited to testify at this hearing. I’ve long had an interest in the policy issues surrounding the Internet of Things. All my relevant research products can be found online here, including my latest law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation.”

My testimony, which can be found on the Mercatus Center website here, begins by highlighting the three general conclusions of my work:

  1. First, the Internet of Things offers compelling benefits to consumers, companies, and our country’s national competitiveness that will only be achieved by adopting a flexible policy regime for this fast-moving space.
  2. Second, while there are formidable privacy and security challenges associated with the Internet of Things, top-down or one-size-fits-all regulation will limit innovative opportunities.
  3. Third, with those first two points in mind, we should seek alternative and less costly approaches to protecting privacy and security that rely on education, empowerment, and targeted enforcement of existing legal mechanisms. Long-term privacy and security protection requires a multifaceted approach incorporating many flexible solutions.

I continue on to elaborate on each point and then conclude my testimony on a note of optimism:

we should also never forget that, no matter how disruptive these new technologies may be in the short term, we humans have an extraordinary ability to adapt to technological change and bounce back from adversity. That same resilience will be true for the Internet of Things. We should remain patient and continue to embrace permissionless innovation to ensure that the Internet of Things thrives and American consumers and companies continue to be global leaders in the digital economy.

My testimony also includes 7 appendices offering more detail for those interested. Two of those appendices focus on defining the parameters of the Internet of Things and documenting the projected economic impact associated with this rapidly-growing market. The other appendices reproduce essays I have published here before, including articles about the Federal Trade Commission’s recent Internet of Things report as well as my thoughts on how to craft a nonpartisan policy vision for the Internet of Things.

Finally, here’s a list of most of my recent work on the Internet of Things and wearable technology policy issues for those interested in reading even more about the topic:

Don’t Hit the (Techno-)Panic Button on Connected Car Hacking & IoT Security http://techliberation.com/2015/02/10/dont-hit-the-techno-panic-button-on-connected-car-hacking-iot-security/ http://techliberation.com/2015/02/10/dont-hit-the-techno-panic-button-on-connected-car-hacking-iot-security/#comments Tue, 10 Feb 2015 20:15:02 +0000 http://techliberation.com/?p=75425

On Sunday night, 60 Minutes aired a feature with the ominous title, “Nobody’s Safe on the Internet,” that focused on connected car hacking and Internet of Things (IoT) device security. It was followed yesterday morning by the release of a new report from the office of Senator Edward J. Markey (D-Mass.) called Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, which focused on connected car security and privacy issues. Employing more than a bit of techno-panic flair, these reports basically suggest that we’re all doomed.

On 60 Minutes, we meet former game developer turned Department of Defense “cyber warrior” Dan (“call me DARPA Dan”) Kaufman–and learn his fears of the future: “Today, all the devices that are on the Internet [and] the ‘Internet of Things’ are fundamentally insecure. There is no real security going on. Connected homes could be hacked and taken over.”

60 Minutes reporter Lesley Stahl, for her part, is aghast. “So if somebody got into my refrigerator,” she ventures, “through the internet, then they would be able to get into everything, right?” Replies DARPA Dan, “Yeah, that’s the fear.” Prankish hackers could make your milk go bad, or hack into your garage door opener, or even your car.

This segues to a humorous segment wherein Stahl takes a networked car for a spin. DARPA Dan and his multiple research teams have been hard at work remotely programming this vehicle for years. A “hacker” on DARPA Dan’s team proceeded to torment poor Lesley with automatic windshield wiping, rude and random beeps, and other hijinks. “Oh my word!” exclaims Stahl.

Never mind that we are told that the “hackers” who “hacked” into this car had been directly working on its systems for years—a luxury scarcely available to the shadowy malicious hackers about whom DARPA Dan and his team so hoped to frighten us. The careful setup, editing, and Lesley Stahl’s squeals made for convincing theater.

Then there’s the Markey report. On the surface, the findings appear grim. For instance, we are warned that “Nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.” Nearly 100%? We’re practically naked out there! But digging through the report, we learn that the basis for this claim is that most of the 16 manufacturers surveyed responded that 100% of their vehicles are equipped with wireless entry points (WEPs)—like Bluetooth, Wi-Fi, navigation, and anti-theft features. Because these features “could pose vulnerabilities,” they are listed as a threat—one that lurks in nearly 100% of the cars on the market, at that.

Much of the report is similarly panicky and sometimes humorous (complaint #3: “many manufacturers did not seem to understand the questions posed by Senator Markey.”) The report concludes that the “alarmingly inconsistent and incomplete state of industry security and privacy practice” warrants recommendations that federal regulators — led by the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) — “promulgate new standards that will protect the data, security and privacy of drivers in the modern age of increasingly connected vehicles.”

Take a Deep Breath

As we face an uncertain future full of rapidly-evolving technologies, it’s only natural that some might feel a little anxiety about how these new machines and devices operate. Despite the exaggerated and sometimes silly nature of techno-panic reports like these, they reflect many people’s real and understandable concerns about new technologies.

But the problem with these reports is that they embody a “panic-first” approach to digital security and privacy issues. It is certainly true that our cars are becoming rolling computers, complete with an arsenal of sensors and networking technologies, and the rise of the Internet of Things means almost everything we own or come into contact with will possess networking capabilities. Consequently, just as our current generation of computing and communications technologies is vulnerable to some forms of hacking, it is likely that our cars and IoT devices will be as well.

But don’t you think that automakers and IoT developers know that? Are we really to believe that journalists, congressmen, and DARPA Dan have a greater incentive to understand these issues than the manufacturers whose companies and livelihoods are on the line? And wouldn’t these manufacturers only take on these risks if consumer demand and expected value supported them? Watching the 60 Minutes spot and reading through the Markey report, one is led to think that innovators in this space are completely oblivious to these threats, simply don’t care enough to address them, and don’t have any plans in motion. But that is lunacy.

No Mention of Liability?

To begin, neither report even mentions the possibility of massive liability for future hacking attacks on connected cars or IoT devices. That is amazing considering how the auto industry already attracts an absolutely astonishing amount of litigation activity. (Ambulance-chasing is a full-time legal profession, after all.) Thus, to the extent that some automakers don’t want to talk about everything they are doing to address security issues, it’s likely because they are still figuring out how to address the various vulnerabilities out there without attracting the attention of either enterprising hackers or trial lawyers.

Nonetheless, contrary to the absurd statement by Mr. Kaufman that “There is no real security going on” for connected cars or the Internet of Things, the reality is that these are issues that developers are actively studying and trying to address. Manufacturers of connected devices know that: (1) nobody wants to own or use devices that are fundamentally insecure or dangerous; and (2) if they sell such devices to the public, they are in for a world of hurt once the trial lawyers see the first headlines about it.

It is also still quite unclear how big the threat really is. Writing over at Forbes yesterday, Doug Newcomb notes that “the threat of car hacking has largely been overblown by the media – there’s been only one case of a malicious car hack, and that was an inside job by a disgruntled former car dealer employee. But it’s a surefire way to get the attention of the public and policymakers,” he correctly observes. Newcomb also interviewed Damon McCoy, an assistant professor of computer science at George Mason University and a car security researcher, who noted that car hacking hasn’t become prevalent and that “Given the [monetary] motivation of most hackers, the chance of [automotive hacking] is very low.”

Security is a Dynamic, Evolving Process

Regardless, the notion that we can just clean this whole device security situation up with a single set of federal standards, as the Markey report suggests, is appealing but fanciful. “Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts,” observed my Mercatus Center colleagues Eli Dourado and Andrea Castillo in their recent white paper on “Why the Cybersecurity Framework Will Make Us Less Secure.” “By prioritizing a set of rigid, centrally designed standards, policymakers are neglecting potent threats that are not yet on their radar,” Dourado and Castillo note elsewhere.

We are at the beginning of a long process. There is no final destination when it comes to security; it’s a never-ending process of devising and refining policies to address vulnerabilities on the fly. The complex problem of cybersecurity readiness requires dynamic solutions that properly align incentives, improve communication and collaboration, and encourage good personal and organizational stewardship of connected systems. Implementing the brittle bureaucratic standards that Markey and others propose could have the tragic unintended consequence of rendering our devices even less secure.

Standards Are Developing Rapidly

Meanwhile, the auto industry has already come up with privacy standards that go above and beyond what most other digital innovators apply to their own products today. Here are the Auto Alliance’s “Consumer Privacy Protection Principles: Privacy Principles for Vehicle Technologies and Services,” which 23 major automobile manufacturers agreed to abide by. And, according to a press release yesterday, “automakers are currently working to establish an Information Sharing Analysis Center (or “Auto-ISAC”) for sharing vehicle cybersecurity information among industry stakeholders.”

Again, progress continues and standards are evolving. This needs to be a flexible, evolutionary process, instead of a static, top-down, one-size-fits-all bureaucratic political proceeding.

We can’t set down security and privacy standards in stone for fast-moving technologies like these for another reason, and one I am constantly stressing in my work on “Why Permissionless Innovation Matters.” If we spend all our time worrying about hypothetical worst-case scenarios — and basing our policy interventions on a parade of hypothetical horribles — then we run the risk that best-case scenarios will never come about.  As analysts at the Center for Data Innovation correctly argue, policymakers should only intervene to address specific, demonstrated harms. “Attempting to erect precautionary regulatory barriers for purely speculative concerns is not only unproductive, but it can discourage future beneficial applications of the Internet of Things.” And the same is true for connected cars.

Trade-Offs Matter

Technopanic indulgence isn’t always merely silly or annoying—it can be deadly.

“During the four deadliest wars the United States fought in the 20th century, 39 percent more Americans were dying in motor vehicles” than on the battlefield. So writes Washington Post reporter Matt McFarland in a powerful new post today. The ongoing toll associated with human error behind the wheel is falling but remains absolutely staggering, with almost 100 people losing their lives and almost 6,500 people injured every day.

We must never fail to appreciate the trade-offs at work when we are pondering precautionary regulation. Ryan Hagemann and I wrote about these issues in our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential cost of delaying their widespread adoption.

When it comes to the various security, privacy, and ethical considerations related to intelligent vehicles, Hagemann and I argue that they “need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue on later in the paper:

Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)

As I noted here in another recent essay, “anything we can do to reduce it significantly is something we need to be pursuing with great vigor, even while we continue to sort through some of those challenging ethical issues associated with automated systems and algorithms.”

No Mention of Alternative Solutions

Finally, it is troubling that neither the 60 Minutes segment nor the Markey report spend any time on alternative solutions to these problems. In my forthcoming law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” I devote the second half of the 90-page paper to constructive solutions to the sort of complex challenges raised in the 60 Minutes segment and the Markey report.

Many of the solutions I discuss in that paper — such as education and awareness-building efforts, empowerment solutions, the development of new social norms, and so on – aren’t even touched on by the reports. That’s a real shame because those methods could go a long way toward helping to alleviate many of the issues the reports identify.

We need a better public dialogue than this about the future of connected cars and Internet of Things security. Political scare tactics and techno-panic journalism are not going to help make the world a safer place. In fact, by whipping up a panic and potentially discouraging innovation, reports such as these can actually serve to prevent critical, life-saving technologies that could change society for the better.

________________________________

Additional Reading

 

My State of the Net panel on Bitcoin http://techliberation.com/2015/02/10/my-state-of-the-net-panel-on-bitcoin/ http://techliberation.com/2015/02/10/my-state-of-the-net-panel-on-bitcoin/#comments Tue, 10 Feb 2015 16:19:50 +0000 http://techliberation.com/?p=75436

A couple weeks ago at State of the Net, I was on a panel on Bitcoin moderated by Coin Center’s Jerry Brito. The premise of the panel was that the state of Bitcoin is like the early Internet. Somehow we got policy right in the mid-1990s to allow the Internet to become the global force it is today. How can we reprise this success with Bitcoin today?

In my remarks, I recall making two basic points.

First, in my opening remarks, I argued that on a technical level, the comparison between Bitcoin and the Internet is apt.

What makes the Internet different from the telecommunications media that came before is the separation of an application layer from a transport layer. The transport layer (and the layers below it) does the work of getting bits to where they need to go. This frees anybody up to develop new applications on a permissionless basis, taking this transport capability basically for granted.

Earlier telecom systems did not function this way. The applications were jointly defined with the transport mechanism. Phone calls are defined in the guts of the network, not at the edges.

Like the Internet, Bitcoin separates out not a transport layer, but a fiduciary layer, from the application layer. The blockchain gives applications access to a fiduciary mechanism that they can take basically for granted.

No longer will fiduciary applications (payments, contracts, asset exchange, notary services, voting, etc.) and fiduciary mechanisms need to be developed jointly. Unwieldy fiduciary mechanisms (banks, legal systems, oversight) can be replaced with computer code.
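The layering argument can be sketched in code. This is a hypothetical illustration (the interfaces and class names below are invented for the example, not real protocol APIs): a fiduciary application is written against an abstract layer and takes its guarantee for granted, never caring whether a bank, a notary, or a blockchain provides it.

```python
import hashlib
from abc import ABC, abstractmethod


class FiduciaryLayer(ABC):
    """Anything that can durably commit a record and prove it later —
    a bank ledger, a notary, or a blockchain."""

    @abstractmethod
    def commit(self, record: bytes) -> str:
        """Store the record; return a receipt identifier."""

    @abstractmethod
    def verify(self, receipt: str, record: bytes) -> bool:
        """Check that this record was committed under this receipt."""


class NotaryApp:
    """A fiduciary application: it never asks *how* commitments work."""

    def __init__(self, layer: FiduciaryLayer):
        self.layer = layer

    def notarize(self, document: bytes) -> str:
        return self.layer.commit(document)


class ToyLedger(FiduciaryLayer):
    """One possible implementation, which the app never needs to see."""

    def __init__(self):
        self.entries: dict[str, str] = {}

    def commit(self, record: bytes) -> str:
        receipt = hashlib.sha256(record).hexdigest()
        self.entries[receipt] = receipt
        return receipt

    def verify(self, receipt: str, record: bytes) -> bool:
        return hashlib.sha256(record).hexdigest() == self.entries.get(receipt)


ledger = ToyLedger()
app = NotaryApp(ledger)
receipt = app.notarize(b"deed of sale")
print(ledger.verify(receipt, b"deed of sale"))  # True
```

The point of the sketch is the separation: `NotaryApp` would run unchanged on any other `FiduciaryLayer`, just as Internet applications run unchanged over any transport that delivers their bits.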

Second, in the panel’s back and forth, particularly with Chip Poncy, I argued that technological change may necessitate a rebalancing of our laws and regulations on financial crimes.

We have payment systems because they improve human welfare. We have laws against certain financial activities because those activities harm human welfare. Ideally, we would balance the gains against the losses to come up with the optimal, human-welfare-maximizing level of regulation.

However, when a new technology like the blockchain comes along, the gains from payment freedom increase. People in a permissionless environment will be able to accomplish more than before. This means that we have to redo our balancing calculus. Because the benefits of unimpeded payments are higher, we need to tolerate more harms from unsavory financial activities if our goal remains to maximize human welfare.
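The rebalancing claim is just comparative statics, and a toy model makes it concrete. The functional forms and numbers below are hypothetical, chosen purely to illustrate the logic, not to estimate anything: if the benefit of unimpeded payments rises (say, because the blockchain enables new applications), the welfare-maximizing stringency of regulation falls.

```python
def welfare(r, B, H):
    # r in [0, 1] is regulatory stringency. Payment benefits B*(1 - r)
    # shrink linearly as regulation tightens; harms from illicit finance
    # H*(1 - r)**2 shrink faster at first, since the earliest rules stop
    # the worst abuses. Both functional forms are illustrative assumptions.
    return B * (1 - r) - H * (1 - r) ** 2


def optimal_stringency(B, H, steps=1000):
    # Grid-search the welfare-maximizing regulation level.
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=lambda r: welfare(r, B, H))


# Raising the payoff to payment freedom lowers optimal stringency:
print(optimal_stringency(B=1.0, H=1.0))  # 0.5
print(optimal_stringency(B=1.5, H=1.0))  # 0.25
```

Analytically the optimum here is r* = 1 − B/2H, so as B grows, optimal regulation falls toward zero: the same balancing calculus, redone for a more valuable technology, tolerates more residual harm.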

Thanks to my co-panelists for a great discussion.

Wanted: talented, gritty libertarians who are passionate about technology http://techliberation.com/2015/02/09/wanted-talented-gritty-libertarians-who-are-passionate-about-technology/ http://techliberation.com/2015/02/09/wanted-talented-gritty-libertarians-who-are-passionate-about-technology/#comments Mon, 09 Feb 2015 18:07:04 +0000 http://techliberation.com/?p=75423

Ten or fifteen years ago, when I sat around and thought about what I would do with my life, I never considered directing the technology policy program at Mercatus. It’s not exactly a career track you can get on — not like being a lawyer, a doctor, a professor.

One of the things I loved about Peter Thiel’s book Zero to One is that it is self-consciously anti-track. The book is a distillation of Thiel’s 2012 Stanford course on startups. In the preface, he writes,

“My primary goal in teaching the class was to help my students see beyond the tracks laid down by academic specialties to the broader future that is theirs to create.”

I think he is right. The modern economy provides unprecedented opportunity for people with talent and grit and passion to do unique and interesting things with their lives, not just follow an expected path.

This is great news if you are someone with talent and grit and passion. Average is Over. What you have is valuable. You can do amazing things. We want to work with you, invest in you—maybe even hire you—and unleash you upon the world.

The biggest problem we have is finding you.

There is no technology policy career track, nor would we want there to be one. Frankly, we don’t want someone who needs the comfort and safety of a future that someone else designed for him.

Unfortunately, this also means that there is no defined pool of talented, gritty libertarians who are passionate about technology for Mercatus or our tech policy allies to hire from.

So how are we supposed to find you? We need your help. You need to do two things.

First, get started now.

Just start doing technology policy.

Write about it every day. Say unexpected things; don’t just take a familiar side in a drawn-out debate. Do something new. What is going to be the big tech policy issue two years from now? Write about that. Let your passion show.

The tech policy world is small enough — and new ideas rare enough — that doing this will get you a following in our community.

It also sends a very strong signal come interview time. Anybody can say that they are talented, or gritty, or passionate. You’ll be able to show it.

I literally got hired because of a blog post. There were other helpful inputs, of course — credentials, references, some contract work that turned out well. But what initially got me on Mercatus’s radar screen was a single post.

Second, get in touch.

Everyone on the Mercatus tech policy team is highly Googleable (on Twitter, here’s me, Adam, Brent, and Andrea). We want to know who you are, what you are doing, and what your plans are.

There is almost no downside to this.

Best case scenario: we create a position for you. No one on our team was hired to fill a vacancy. Instead, we hire people because it’s too good of an opportunity for us to pass up.

Alternatively, maybe we’ll pay you to write a paper or a book.

If for some reason you’re not a great fit for Mercatus, we can connect you with allied groups in tech policy. My discussions with people running other tech policy programs confirm that finding talent is an ever-present problem for them, too.

And at a minimum, we’ll know who you are when we see your work online.

We are serious about winning the battle of ideas over technology, but we can’t do it alone. As technology policy eats the world, the opportunities in our field are going to grow. Let us know if you want to get in on this.

This Is Not How We Should Ensure Net Neutrality http://techliberation.com/2015/02/05/this-is-not-how-we-should-ensure-net-neutrality/ http://techliberation.com/2015/02/05/this-is-not-how-we-should-ensure-net-neutrality/#comments Fri, 06 Feb 2015 00:30:30 +0000 http://techliberation.com/?p=75407

Chairman Thomas E. Wheeler of the Federal Communications Commission unveiled his proposal this week for regulating broadband Internet access under a 1934 law. Since there are three Democrats and two Republicans on the FCC, Wheeler’s proposal is likely to pass on a party-line vote and is almost certain to be appealed.

Free market advocates have pointed out that FCC regulation is not only unnecessary for continued Internet openness, but it could lead to years of disruptive litigation and jeopardize investment and innovation in the network.

Writing in WIRED magazine, Wheeler argues that the Internet wouldn’t even exist if the FCC hadn’t mandated open access for telephone network equipment in the 1960s, and that his mid-1980s startup failed because the cable networks on which it depended were closed, whereas the phone network was open. He also predicts that regulation can be accomplished while encouraging investment in broadband networks, because there will be “no rate regulation, no tariffs, no last-mile unbundling.” There are a number of problems with Chairman Wheeler’s analysis. First, let’s examine the historical assumptions that underlie the Wheeler proposal.

The FCC had to mandate open access for network equipment in the late 1960s only because of the unintended consequences of another regulatory objective—that of ensuring that basic local residential phone service was “affordable.” In practice, strict price controls required phone companies to set local rates at or below cost. The companies were permitted to earn a profit only by charging high prices for all of their other services including long-distance. Open access threatened this system of cross-subsidies, which is why the FCC strongly opposed open access for years. The FCC did not seriously rethink this policy until it was forced to do so by a federal appeals court ruling in the 1950s. That court decision set the stage for the FCC’s subsequent open access rules. Wheeler is trying to claim credit for a heroic achievement, when actually all the commission did was clean up a mess it created.

The failure of Wheeler’s Canadian government-subsidized startup in 1985 had nothing to do with open access, according to Wikipedia. NABU Network was attempting to sell up to 6.4 Mbps broadband service over Canadian cable networks notwithstanding the extremely limited capabilities of those networks at the time. For one thing, most cable networks of that era were not bi-directional. The reason Wheeler’s startup didn’t choose to offer broadband over the open telephone network is that under-investment had rendered it unsuitable. The copper loop simply didn’t offer the same bandwidth as coaxial cable. Why was there under-investment? Because of over-regulation.

Next, let’s examine Chairman Wheeler’s prediction that new regulation won’t discourage investment because there will be “no rate regulation, no tariffs, no last-mile unbundling.” Let’s be real. Wheeler simply cannot guarantee there will be no rate regulation, no tariffs, no last-mile unbundling nor other inappropriate regulation in the future. Anyone can petition the FCC to impose more regulation at any time, and nothing will prevent the commission from going down that road. The FCC will become a renewed target for special-interest pleading if Chairman Wheeler’s proposal is adopted by the commission and upheld by the courts.

Wheeler’s proposal would reclassify broadband as a “telecommunications” service notwithstanding the fact that the commission has previously found that broadband is an “information” service and the Supreme Court upheld that determination. These terms are clearly defined in the 1996 telecom act, in which bipartisan majorities in Congress sought to create a regulatory firewall. Communications services would continue to be regulated until they became competitive. Services that combine communications and computing (“information” services) would not be regulated at all. Congress wanted to create appropriate incentives for firms that provide communications service to invest and innovate by adding computing functionality. Congress was well aware that the commission had tried over many years to establish a bright-line separation between communications and computing, and it failed. It’s an impossible task, because communications and computing are becoming more integrated all the time. The solution was to maintain legacy regulation for legacy network services, and open the door to competition for advanced services. The key issue now is whether or not broadband is a competitive industry. If the broadband offerings of cable operators, telephone companies and wireless providers are all taken into account, the answer is clearly yes.

In the view of Chairman Wheeler and others, regulation is needed to ensure the Internet is fast, fair and open. In reality, the Internet wants to be fast, fair and open. So-called “walled garden” experiments of the past have all ended in failure. Before broadband, the open telephone network was significantly more profitable than the closed cable network. Now, broadband either is or soon will become more profitable than cable. Since open networks are more profitable than closed networks, legacy regulation is more than likely to be unnecessary and almost certain to be counter-productive. Internet openness is chiefly a function not of regulation but of innovation and investment in bandwidth abundance. With sufficient bandwidth, all packets travel at the speed of light.

Then again, this debate isn’t really about open networks. Republican leaders in Congress are offering to pass a bill that would prevent blocking and paid prioritization, and they can’t find any Democratic co-sponsors. That’s because the bill would prohibit reclassification of broadband as a “telecommunications” service, a reclassification that would give the FCC a green light to regulate like it’s 1934. The idea that we need to give the commission unfettered authority so it can enact a limited amount of “smart” regulation that can be accomplished while encouraging private investment–and that we can otherwise rely on the FCC to practice regulatory restraint and not abuse its power–sounds a lot like the sales pitch for the Affordable Care Act, i.e., that we can have it all, there are no trade-offs. Right.

]]>
http://techliberation.com/2015/02/05/this-is-not-how-we-should-ensure-net-neutrality/feed/ 0
Permissionless Innovation & Commercial Drones http://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/ http://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/#comments Wed, 04 Feb 2015 23:20:57 +0000 http://techliberation.com/?p=75392

Farhad Manjoo’s latest New York Times column, “Giving the Drone Industry the Leeway to Innovate,” discusses how the Federal Aviation Administration’s (FAA) current regulatory morass continues to thwart many potentially beneficial drone innovations. I particularly appreciated this point:

But perhaps the most interesting applications for drones are the ones we can’t predict. Imposing broad limitations on drone use now would be squashing a promising new area of innovation just as it’s getting started, and before we’ve seen many of the potential uses. “In the 1980s, the Internet was good for some specific military applications, but some of the most important things haven’t really come about until the last decade,” said Michael Perry, a spokesman for DJI [maker of Phantom drones]. . . . He added, “Opening the technology to more people allows for the kind of innovation that nobody can predict.”

That is exactly right and it reflects the general notion of “permissionless innovation” that I have written about extensively here in recent years. As I summarized in a recent essay: “Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention or business model will bring serious harm to individuals, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.”

The reason that permissionless innovation is so important is that innovation is more likely in political systems that maximize breathing room for ongoing economic and social experimentation, evolution, and adaptation. We don’t know what the future holds. Only incessant experimentation and trial-and-error can help us achieve new heights of greatness. If, however, we adopt the opposite approach of “precautionary principle”-based reasoning and regulation, then these chances for serendipitous discovery evaporate. As I put it in my recent book, “living in constant fear of worst-case scenarios—and premising public policy upon them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

In this regard, the unprecedented growth of the Internet is a good example of how permissionless innovation can significantly improve consumer welfare and our nation’s competitive status relative to the rest of the world. And this also holds lessons for how we treat commercial drone technologies, as Jerry Brito, Eli Dourado, and I noted when filing comments with the FAA back in April 2013. We argued:

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators. We therefore urge the FAA not to impose any prospective restrictions on the use of commercial UASs without clear evidence of actual, not merely hypothesized, harm.

Manjoo builds on that same point in his new Times essay when he notes:

[drone] enthusiasts see almost limitless potential for flying robots. When they fantasize about our drone-addled future, they picture not a single gadget, but a platform — a new class of general-purpose computer, as important as the PC or the smartphone, that may be put to use in a wide variety of ways. They talk about applications in construction, firefighting, monitoring and repairing infrastructure, agriculture, search and response, Internet and communications services, logistics and delivery, filmmaking and wildlife preservation, among other uses.

If only the folks at the FAA and in Congress saw things this way. We need to open up the skies to the amazing innovative potential of commercial drone technology, especially before the rest of the world seizes the opportunity to jump into the lead on this front.

___________________________

Additional Reading

]]>
http://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/feed/ 0
New FCC rules will kick at least 4.7 million households offline http://techliberation.com/2015/02/03/new-fcc-rules-will-kick-at-least-6-million-households-offline/ http://techliberation.com/2015/02/03/new-fcc-rules-will-kick-at-least-6-million-households-offline/#comments Tue, 03 Feb 2015 18:11:09 +0000 http://techliberation.com/?p=75386

This month, the FCC is set to issue an order that will reclassify broadband under Title II of the Communications Act. As a result of this reclassification, broadband will suddenly become subject to numerous federal and local taxes and fees.

How much will these new taxes reduce broadband subscribership? Nobody knows for sure, but using the existing economic literature we can come up with a back-of-the-envelope calculation.

According to a policy brief by Brookings’s Bob Litan and the Progressive Policy Institute’s Hal Singer, reclassification under Title II will increase fixed broadband costs on average by $67 per year due to both federal and local taxes. With pre-Title II costs of broadband at $537 per year, this represents a 12.4 percent increase.

[I have updated these estimates at the end of this post.]

How much will this 12.4 percent increase in broadband costs reduce the number of broadband subscriptions demanded? For that, we must turn to the literature on the elasticity of demand for broadband.

As is often the case, the literature on this subject does not give one clear answer. For example, Austan Goolsbee, who was chairman of President Obama’s Council of Economic Advisors in 2010 and 2011, estimated in 2006 that broadband elasticity ranged from -2.15 to -3.76, with an average of around -2.75.

A 2014 study by two FCC economists and their coauthors estimates the elasticity of demand for marginal non-subscribers. That is, they use survey data of people who are not currently broadband subscribers, exclude the 2/3 of respondents who say they would not buy broadband at any price, and estimate their demand elasticity at -0.62.

Since the literature doesn’t settle the matter, let’s pick the more conservative number and use it as a lower bound.

With 84 million fixed broadband subscribers facing a 12.4 percent increase in prices, with an elasticity of -0.62, there will be a 7.7 percent reduction in broadband subscribers, or a decline of 6.45 million households.

Obviously, this is a terrible result.

A question for my friends in the tech policy world who support reclassification: How many households do you think will lose broadband access due to new taxes and fees? Please show your work.

UPDATE: Looks like I missed this updated post from Singer and Litan, which notes that due to the extension of the Internet Tax Freedom Act, the total amount of new taxes from reclassification will be only about $49/year, not $67/year as stated above.

This represents a 9.1 percent increase in costs, so the number of households with broadband will decline by only 5.6 percent, or 4.7 million.
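The arithmetic above is simple enough to sketch in a few lines. The inputs are the figures cited in this post; small differences from the post’s numbers come from rounding at intermediate steps:

```python
# Back-of-the-envelope sketch of the subscriber-loss estimate in this post.

SUBSCRIBERS_MILLIONS = 84.0  # fixed broadband households
BASE_COST_PER_YEAR = 537.0   # pre-Title II annual cost of broadband ($)
ELASTICITY = -0.62           # conservative demand elasticity (FCC economists' study)

def households_lost(new_taxes_per_year):
    """Estimated decline, in millions of households, for a given annual tax increase."""
    price_increase = new_taxes_per_year / BASE_COST_PER_YEAR  # e.g. 67/537 ~ 12.4%
    demand_drop = -ELASTICITY * price_increase                # e.g. 0.62 * 12.4% ~ 7.7%
    return SUBSCRIBERS_MILLIONS * demand_drop

print(households_lost(67))  # roughly 6.5 million (original $67/year estimate)
print(households_lost(49))  # roughly 4.75 million (updated $49/year estimate)
```

Swapping in a less conservative elasticity, such as Goolsbee’s -2.75 average, multiplies these losses by more than four, which is why -0.62 serves as a lower bound.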

While I regret the oversight, this is still a very high number that deserves attention.

]]>
http://techliberation.com/2015/02/03/new-fcc-rules-will-kick-at-least-6-million-households-offline/feed/ 0
Network Neutrality’s Watershed Moment http://techliberation.com/2015/02/03/network-neutralitys-watershed-moment/ http://techliberation.com/2015/02/03/network-neutralitys-watershed-moment/#comments Tue, 03 Feb 2015 14:31:16 +0000 http://techliberation.com/?p=75382

After some ten years, gallons of ink and thousands of megabytes of bandwidth, the debate over network neutrality is reaching a climactic moment.

Bills are expected to be introduced in both the Senate and House this week that would allow the Federal Communications Commission to regulate paid prioritization, the stated goal of network neutrality advocates from the start. Led by Sen. John Thune (R-S.D.) and Rep. Fred Upton (R-Mich.), the legislation represents a major compromise on the part of congressional Republicans, who until now have held fast against any additional Internet regulation. Their willingness to soften on paid prioritization has gotten the attention of a number of leading Democrats, including Sens. Bill Nelson (D-Fla.) and Cory Booker (D-N.J.). The only question that remains is whether FCC Chairman Thomas Wheeler and President Barack Obama are willing to buy into this emerging spirit of bipartisanship.

Obama wants a more radical course—outright reclassification of Internet services under Title II of the Communications Act, a policy Wheeler appears to have embraced in spite of reservations he expressed last year. Title II, however, would give the FCC the same type of sweeping regulatory authority over the Internet as it has over monopoly phone service—a situation that stands to create a “Mother, may I” regime over what, to date, has been a wildly successful environment of permissionless innovation.

Important to remember is that Title II reclassification is a response to repeated court decisions preventing the FCC from enforcing certain provisions against paid prioritization. Current law, the courts affirmed, classifies the Internet as an information service, a definition that limits the FCC’s regulatory control over it. Using reclassification, the FCC hopes to give itself the necessary legal cover.

But the paid prioritization matter can be addressed easily, elegantly and, most important, constitutionally, through Congress.

As a libertarian, I question the value of any regulation on the Internet on principle. And practically speaking, there’s been no egregious abuse of paid prioritization that justifies unilateral reclassification. It’s not in an ISP’s interest to block any websites. And, contrary to being a consumer problem, allowing major content companies like Netflix to purchase network management services that improve the quality of video delivery while reducing network congestion for other applications might actually serve the market.

But if paid prioritization is the concern, then Thune-Upton addresses it. It would allow the FCC to investigate and impose penalties on ISPs that throttle traffic, or demand payment for quality delivery. On the other hand, Thune-Upton would also create carve outs for certain types of applications that require prioritization to work, like telemedicine and emergency services, and would allow for the reasonable network management that is necessary for optimum performance—answering criticisms that come not only from center-right policy analysts, but from network engineers.

Legislation also gives the FCC specific instructions, whereas Title II reclassification opens the door to large-scale, open-ended regulation. Here’s where I do indulge my libertarian leanings. Giving the government vague, unspecified powers asks for trouble. All we have to do is look at the National Security Agency’s widespread warrantless wiretapping and the Drug Enforcement Administration’s tracking of private vehicle movements around the country. Disturbing as they are to all citizens who value liberty and privacy, these practices are technically legal because there are no laws setting due process rules for contemporary communications technology (a blog for another day). As much as the FCC promises to “forbear” from more extensive Internet regulation, it’s better for all if specific limits are written in.

At the same time, the addition of regulatory powers invites corporate rent-seeking whereby companies turn to the government to protect them in the marketplace. Even as the FCC was drafting its Title II proposal, BlackBerry’s CEO, John Chen, was complaining that applications developers were only focusing on the iPhone and Android platforms. Chen seeks “app neutrality,” essentially a law to require any applications that work on iPhone and Android platforms to work on BlackBerry’s operating system, too, despite the low market penetration of the devices.

Also, forcing the FCC to work inside narrow parameters means it can more readily ease up or even reverse itself in case a ban on paid prioritization leads to unintended consequences, like a significant uptick in bandwidth congestion and measurable degradation in applications performance.

Finally, successful bipartisan legislation can put net neutrality to bed. If the White House remains stubborn and instead pushes the FCC to reclassify, it almost assures a lengthy court case that not only would drag out the debate, but likely end with another decision against the FCC. But even if the court rulings go the FCC’s way, Title II is no guarantee against paid prioritization. Allowing Congress to give the FCC the necessary authority is a constitutionally sound approach and has a better chance of meeting the desired objectives. Congress is offering a bipartisan solution that is reasonable and workable. The Obama administration has been banging the drum for network neutrality since Day 1. This is its moment to seize.

]]>
http://techliberation.com/2015/02/03/network-neutralitys-watershed-moment/feed/ 1
Money for graduate students who love liberty http://techliberation.com/2015/02/02/money-for-graduate-students-who-love-liberty/ http://techliberation.com/2015/02/02/money-for-graduate-students-who-love-liberty/#comments Mon, 02 Feb 2015 15:45:05 +0000 http://techliberation.com/?p=75378

My employer, the Mercatus Center, provides ridiculously generous funding (up to $40,000/year) for graduate students. There are several opportunities depending on your goals, but I encourage people interested in technology policy to particularly consider the MA Fellowship, as that can come with an opportunity to work with the tech policy team here at Mercatus. Mind the deadlines!

The PhD Fellowship is a three-year, competitive, full-time fellowship program for students who are pursuing a doctoral degree in economics at George Mason University. Our PhD Fellows take courses in market process economics, public choice, and institutional analysis and work on projects that use these lenses to understand global prosperity and the dynamics of social change. Successful PhD Fellows have secured tenure track positions at colleges and universities throughout the US and Europe.

It includes full tuition support, a stipend, and experience as a research assistant working closely with Mercatus-affiliated Mason faculty. It is a total award of up to $120,000 over three years. Acceptance into the fellowship program is dependent on acceptance into the PhD program in economics at George Mason University. The deadline for applications is February 1, 2015.

The Adam Smith Fellowship is a one-year, competitive fellowship for graduate students attending PhD programs at any university, in a variety of fields, including economics, philosophy, political science, and sociology. The aim of this fellowship is to introduce students to key thinkers in political economy that they might not otherwise encounter in their graduate studies. Smith Fellows receive a stipend and spend three weekends during the academic year and one week during the summer participating in workshops and seminars on the Austrian, Virginia, and Bloomington schools of political economy.

It includes a quarterly stipend and travel and lodging to attend colloquia hosted by the Mercatus Center. It is a total award of up to $10,000 for the year. Acceptance into the fellowship program is dependent on acceptance into a PhD program at an accredited university. The deadline for applications is March 15, 2015.

The MA Fellowship is a two-year, competitive, full-time fellowship program for students pursuing a master’s degree in economics at George Mason University who are interested in gaining advanced training in applied economics in preparation for a career in public policy. Successful fellows have secured public policy positions as Presidential Management Fellows, economists and analysts with federal and state governments, and policy analysts at prominent research institutions.

It includes full tuition support, a stipend, and practical experience as a research assistant working with Mercatus scholars. It is a total award of up to $80,000 over two years. Acceptance into the fellowship program is dependent on acceptance into the MA program in economics at George Mason University. The deadline for applications is March 1, 2015.

The Frédéric Bastiat Fellowship is a one-year competitive fellowship program for graduate students interested in pursuing a career in public policy. The aim of this fellowship is to introduce students to the Austrian, Virginia, and Bloomington schools of political economy as academic foundations for pursuing contemporary policy analysis. They will explore how this framework is utilized to analyze policy implications of a variety of topics, including the study of American capitalism, state and local policy, regulatory studies, technology policy, financial markets, and spending and budget.

It includes a quarterly stipend and travel and lodging to attend colloquia hosted by the Mercatus Center. It is a total award of up to $5,000 for the year. Acceptance into the fellowship program is dependent on acceptance into a graduate program at an accredited university. The deadline for applications is April 1, 2015.

]]>
http://techliberation.com/2015/02/02/money-for-graduate-students-who-love-liberty/feed/ 0
The LAPD versus the First Amendment http://techliberation.com/2015/01/30/the-lapd-versus-the-first-amendment/ http://techliberation.com/2015/01/30/the-lapd-versus-the-first-amendment/#comments Fri, 30 Jan 2015 19:35:57 +0000 http://techliberation.com/?p=75374

Last month, my Mercatus Center colleague Brent Skorup published a major scoop: police departments around the country are scanning social media to assign people individualized “threat ratings” — green, yellow, or red. This week, police are complaining that the public is using social media to track them back.

LAPD Chief Charlie Beck has expressed concerns that Waze, the social traffic app owned by Google, could be used to target police officers. The National Sheriffs’ Association has also complained about the app.

To be clear, Waze does not allow anybody to track individual officers. Users of the app can drop a pin on a map letting drivers know that there is police activity (or traffic jams, accidents, or traffic enforcement cameras) in the area.

That’s it.

And police departments around the country frequently publicize their locations. They are essentially required to do so for sobriety checkpoints by Supreme Court order and NHTSA guidelines.

But in a letter to Google CEO Larry Page, Beck writes breathlessly that Waze “poses a danger to the lives of police officers in the United States.” The letter also (falsely) states that the app was used by Ismaaiyl Brinsley to kill two NYPD officers. The Associated Press notes that “Investigators do not believe he used Waze to ambush the officers, in part because police say Brinsley tossed his cellphone more than two miles from where he shot the officers.”

It’s somewhat rich of the LAPD to cite fear for its officers’ lives while the department is in possession of some 3,408 assault rifles, 7 armored vehicles, and 3 grenade launchers.

In fact, what Waze poses a danger to is police department revenue. Drivers are using the app as a crowdsourced radar detector, as a means of avoiding traffic tickets. But unlike radar detectors, which have been outlawed in my home state of Virginia, Waze benefits from First Amendment protection.

The fundamental activity that Waze users are engaging in is speech. “Hey, there is a cop over there,” is protected speech under the First Amendment. As all LAPD officers must swear an oath affirming that they “will support and defend the Constitution of the United States,” it seems reasonable to expect the police chief not to stifle, by lobbying private corporations, the First Amendment rights of those citizens who choose to engage in this protected activity.

The Waze kerfuffle is a symptom of a longer-term breakdown in trust between police departments around the country and the publics they are sworn to protect and serve. This is a widely recognized problem, and some in the law enforcement community are working on strategies to remedy it.

But as long as departments continue to view the public as the enemy or even as a passive revenue source, not as the rightful recipients of their service and protection, we will continue to see the public respond by introducing technologies that protect users from the police’s arbitrary powers.

Fortunately, police complaints about Waze have backfired. Many smartphone users had no idea there was an app for avoiding speeding tickets until Beck and the Sheriffs’ Association made it national news. As a result of the publicity, downloads of Waze have skyrocketed.

This is how the modern world works, and it gives me great hope for the future.

]]>
http://techliberation.com/2015/01/30/the-lapd-versus-the-first-amendment/feed/ 0
DRM for Drones Will Fail http://techliberation.com/2015/01/28/drm-for-drones-will-fail/ http://techliberation.com/2015/01/28/drm-for-drones-will-fail/#comments Wed, 28 Jan 2015 22:00:18 +0000 http://techliberation.com/?p=75358

I suppose it was inevitable that the DRM wars would come to the world of drones. Reporting for the Wall Street Journal today, Jack Nicas notes that:

In response to the drone crash at the White House this week, the Chinese maker of the device that crashed said it is updating its drones to disable them from flying over much of Washington, D.C.

SZ DJI Technology Co. of Shenzhen, China, plans to send a firmware update in the next week that, if downloaded, would prevent DJI drones from taking off within the restricted flight zone that covers much of the U.S. capital, company spokesman Michael Perry said.

Washington Post reporter Brian Fung explains what this means technologically:

The [DJI firmware] update will add a list of GPS coordinates to the drone’s computer telling it where it can and can’t go. Here’s how that system works generally: When a drone comes within five miles of an airport, Perry explained, an altitude restriction gets applied to the drone so that it doesn’t interfere with manned aircraft. Within 1.5 miles, the drone will be automatically grounded and won’t be able to fly at all, requiring the user to either pull away from the no-fly zone or personally retrieve the device from where it landed. The concept of triggering certain actions when reaching a specific geographic area is called “geofencing,” and it’s a common technology in smartphones. Since 2011, iPhone owners have been able to create reminders that alert them when they arrive at specific locations, such as the office.
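The tiered behavior Fung describes (an altitude cap within five miles of an airport, automatic grounding within 1.5 miles) can be sketched roughly as follows. This is an illustration of the geofencing concept, not DJI’s actual firmware logic; the function names and coordinates are hypothetical:

```python
import math

EARTH_RADIUS_MI = 3958.8  # mean Earth radius in miles

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def geofence_action(drone_lat, drone_lon, airport_lat, airport_lon):
    """Illustrative tiered geofence: ground within 1.5 mi, restrict altitude within 5 mi."""
    d = distance_miles(drone_lat, drone_lon, airport_lat, airport_lon)
    if d < 1.5:
        return "ground"          # auto-land; user must pull away or retrieve the device
    elif d < 5.0:
        return "altitude-limit"  # cap altitude so the drone can't interfere with aircraft
    return "normal"

# Example: a drone sitting right at an airport's coordinates is grounded
print(geofence_action(38.8512, -77.0402, 38.8512, -77.0402))  # "ground"
```

A city-wide no-fly zone like the one proposed for Washington is just this same mechanism with a much larger radius and no graduated tiers, which is part of why it is so blunt an instrument.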

This is complete overkill and it almost certainly will not work in practice. First, this is just DRM for drones, and just as DRM has failed in most other cases, it will fail here as well. If you sell somebody a drone that doesn’t work within a 15-mile radius of a major metropolitan area, they’ll be online minutes later looking for a hack to get it working properly. And you better believe they will find one.

Second, other companies or even non-commercial innovators will just use such an opportunity to promote their DRM-free drones, making the restrictions on other drones futile.

Perhaps, then, the government will push for all drone manufacturers to include DRM on their drones, but that’s even worse. The idea that the Washington, DC metro area should be a completely drone-free zone is hugely troubling. We might as well put up a big sign at the edge of town that says, “Innovators Not Welcome!”

And this isn’t just about commercial operators either. What would such a city-wide restriction mean for students interested in engineering or robotics in local schools? Or how about journalists who might want to use drones to help them report the news?

For these reasons, a flat ban on drones throughout this or any other city just shouldn’t fly.

Moreover, the logic behind this particular technopanic is particularly silly. It’s like saying that we should install some sort of kill switch in all automobile ignitions so that they will not start anywhere in the DC area on the off chance that one idiot might use their car to drive into the White House fence. We need clear and simple rules for drone use, not technically unworkable and unenforceable bans on all private drone use in major metro areas.

[Update 1/30: Washington Post reporter Matt McFarland was kind enough to call me and ask for comment on this matter. Here’s his excellent story on “The case for not banning drone flights in the Washington area,” which included my thoughts.]

]]>
http://techliberation.com/2015/01/28/drm-for-drones-will-fail/feed/ 0
Some Initial Thoughts on the FTC Internet of Things Report http://techliberation.com/2015/01/28/some-initial-thoughts-on-the-ftc-internet-of-things-report/ http://techliberation.com/2015/01/28/some-initial-thoughts-on-the-ftc-internet-of-things-report/#comments Wed, 28 Jan 2015 14:54:30 +0000 http://techliberation.com/?p=75351

Yesterday, the Federal Trade Commission (FTC) released its long-awaited report on “The Internet of Things: Privacy and Security in a Connected World.” The 55-page report is the result of a lengthy staff exploration of the issue, which kicked off with an FTC workshop on the issue that was held on November 19, 2013.

I’m still digesting all the details in the report, but I thought I’d offer a few quick thoughts on some of the major findings and recommendations from it. As I’ve noted here before, I’ve made the Internet of Things my top priority over the past year and have penned several essays about it here, as well as in a big new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology shortly. (Also, here’s a compendium of most of what I’ve done on the issue thus far.)

I’ll begin with a few general thoughts on the FTC’s report and its overall approach to the Internet of Things and then discuss a few specific issues that I believe deserve attention.

Big Picture, Part 1: Should Best Practices Be Voluntary or Mandatory?

Generally speaking, the FTC’s report contains a variety of “best practice” recommendations meant to get Internet of Things innovators to take steps to ensure greater privacy and security “by design” in their products. Most of those recommended best practices are sensible as general guidelines for innovators, but the really sticky question continues to be this: When, if ever, should “best practices” become binding regulatory requirements?

The FTC does a bit of a dance when answering that question. Consider how, in the executive summary of the report, the Commission answers the question regarding the need for additional privacy and security regulation: “Commission staff agrees with those commenters who stated that there is great potential for innovation in this area, and that IoT-specific legislation at this stage would be premature.” But, just a few lines later, the agency (1) “reiterates the Commission’s previous recommendation for Congress to enact strong, flexible, and technology-neutral federal legislation to strengthen its existing data security enforcement tools and to provide notification to consumers when there is a security breach;” and (2) “recommends that Congress enact broad-based (as opposed to IoT-specific) privacy legislation.”

Here and elsewhere, the agency repeatedly stresses that it is not seeking IoT-specific regulation, merely “broad-based” digital privacy and security legislation. The problem is that once you understand what the IoT is all about, you come to realize that this largely represents a distinction without a difference. The Internet of Things is simply the extension of the Net into everything we own or come into contact with. Thus, the idea that the agency is not seeking IoT-specific rules sounds terrific until you realize that it is actually seeking something far more sweeping: greater regulation of all online and digital interactions. And because “the Internet” and “the Internet of Things” will eventually be considered synonymous (if they are not already), the notion that the agency is not proposing technology-specific regulation is really quite silly.

Now, it remains unclear whether there exists any appetite on Capitol Hill for “comprehensive” legislation of any variety – although perhaps we’ll learn more about that possibility when the Senate Commerce Committee hosts a hearing on these issues on February 11. But at least thus far, “comprehensive” or “baseline” digital privacy and security bills have been non-starters.

And that’s for good reason in my opinion: Such regulatory proposals could take us down the path that Europe charted in the late 1990s with onerous “data directives” and suffocating regulatory mandates for the IT / computing sector. The results of this experiment have been unambiguous, as I documented in congressional testimony in 2013. I noted there how America’s Internet sector came to be the envy of the world while it was hard to name any major Internet company from Europe. Whereas America embraced “permissionless innovation” and let creative minds develop one of the greatest success stories in modern history, the Europeans adopted a “Mother, May I” regulatory approach for the digital economy. America’s more flexible, light-touch regulatory regime leaves more room for competition and innovation compared to Europe’s top-down regime. Digital innovation suffered over there while it blossomed here.

That’s why we need to be careful about adopting the sort of “broad-based” regulatory regime that the FTC recommends in this and previous reports.

Big Picture, Part 2: Does the FTC Really Need More Authority?

Something else is going on in this report that has also been happening in all the FTC’s recent activity on digital privacy and security matters: The agency has been busy laying the groundwork for its own expansion.

In this latest report, for example, the FTC argues that

Although the Commission currently has authority to take action against some IoT-related practices, it cannot mandate certain basic privacy protections… The Commission has continued to recommend that Congress enact strong, flexible, and technology-neutral legislation to strengthen the Commission’s existing data security enforcement tools and require companies to notify consumers when there is a security breach.

In other words, this agency wants more authority. And we are talking about sweeping authority here that would transcend its already sweeping authority to police “unfair and deceptive practices” under Section 5 of the FTC Act. Let’s be clear: It would be hard to craft a law that grants an agency more comprehensive and open-ended consumer protection authority than Section 5. The meaning of those terms — “unfairness” and “deception” — has always been a contentious matter, and at times the agency has abused its discretion by exploiting that ambiguity.

Nonetheless, Sec. 5 remains a powerful enforcement tool for the agency and one that has been wielded aggressively in recent years to police digital economy giants and small operators alike. Generally speaking, I’m alright with most Sec. 5 enforcement, especially since that sort of retrospective policing of unfair and deceptive practices is far less likely to disrupt permissionless innovation in the digital economy. That’s because it does not subject digital innovators to the sort of “Mother, May I” regulatory system that European entrepreneurs face. But an expansion of the FTC’s authority via more “comprehensive, baseline” privacy and security regulatory policies threatens to convert America’s more sensible bottom-up and responsive regulatory system into the sort of innovation-killing regime we see on the other side of the Atlantic.

Here’s the other thing we can’t forget when it comes to the question of what additional authority to give the FTC over privacy and security matters: The FTC is not the end of the enforcement story in America. Other enforcement mechanisms exist, including privacy torts, class action litigation, property and contract law, state enforcement agencies, and other targeted privacy statutes. I’ve summarized all these additional enforcement mechanisms in my recent law review article referenced above. (See section VI of the paper.)

FIPPS, Part 1: Notice & Choice vs. Use-Based Restrictions

Next, let’s drill down a bit and examine some of the specific privacy and security best practices that the agency discusses in its new IoT report.

The FTC report highlights how the IoT creates serious tensions for many traditional Fair Information Practice Principles (FIPPs). The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. But the report is mostly focused on notice and choice as well as data minimization.

When it comes to notice and choice, the agency wants to keep hope alive that it will still be applicable in an IoT world. I’m sympathetic to this effort because it is quite sensible for all digital innovators to do their best to provide consumers with adequate notice about data collection practices and then give them sensible choices about it. Yet, like the agency, I agree that “offering notice and choice is challenging in the IoT because of the ubiquity of data collection and the practical obstacles to providing information without a user interface.”

The agency has a nuanced discussion of how context matters in providing notice and choice for IoT, but one can’t help but think that even they must realize that the game is over, to some extent. The increasing miniaturization of IoT devices and the ease with which they suck up data means that traditional approaches to notice and choice just aren’t going to work all that well going forward. It is almost impossible to envision how a rigid application of traditional notice and choice procedures would work in practice for the IoT.

Relatedly, as I wrote here last week, the Future of Privacy Forum (FPF) recently released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” that notes how FIPPs “are a valuable set of high-level guidelines for promoting privacy, [but] given the nature of the technologies involved, traditional implementations of the FIPPs may not always be practical as the Internet of Things matures.” That’s particularly true of the notice-and-choice FIPPs.

But the FTC isn’t quite ready to throw in the towel and make the complete move toward “use-based restrictions,” as many academics have. (Note: I have a lengthy discussion of this migration toward use-based restrictions in section IV.D of my law review article.) Use-based restrictions would focus on specific uses of data that are particularly sensitive and for which there is widespread agreement they should be limited or disallowed altogether. But use-based restrictions are, ironically, controversial from both the perspective of industry and privacy advocates (albeit for different reasons, obviously).

The FTC doesn’t really know where to go next with use-based restrictions. On the one hand, the agency says it “has incorporated certain elements of the use-based model into its approach” to enforcement in the past. On the other hand, the agency says it has concerns “about adopting a pure use-based model for the Internet of Things,” since it may not go far enough in addressing the growth of more widespread data collection, especially of more sensitive information.

In sum, the agency appears to be keeping the door open on this front and hoping that a best-of-all-worlds solution miraculously emerges that extends both notice and choice and use-based limitations as the IoT expands. But the agency’s new report doesn’t give us any sort of blueprint for how that might work, and that’s likely for good reason: because it probably won’t work all that well in practice, and there will be serious costs in terms of lost innovation if the agency tries to force unworkable solutions on this rapidly evolving marketplace.

FIPPS, Part 2: Data Minimization

The biggest policy fight that is likely to come out of this report involves the agency’s push for data minimization. The report recommends that, to minimize the risks associated with excessive data collection:

companies should examine their data practices and business needs and develop policies and practices that impose reasonable limits on the collection and retention of consumer data. However, recognizing the need to balance future, beneficial uses of data with privacy protection, staff’s recommendation on data minimization is a flexible one that gives companies many options. They can decide not to collect data at all; collect only the fields of data necessary to the product or service being offered; collect data that is less sensitive; or deidentify the data they collect. If a company determines that none of these options will fulfill its business goals, it can seek consumers’ consent for collecting additional, unexpected categories of data…
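The staff’s menu of options above (collect only necessary fields, collect less sensitive data, or deidentify what is collected) can be illustrated with a minimal sketch. The record fields and the salted-hash approach here are my own illustrative assumptions, not anything prescribed by the report:

```python
import hashlib

# Hypothetical raw record an IoT device might generate (field names invented).
record = {
    "device_id": "thermostat-8821",
    "email": "jane@example.com",
    "indoor_temp_f": 68.5,
    "zip_code": "22201",
}

NEEDED_FIELDS = {"device_id", "indoor_temp_f"}   # "collect only the fields necessary"

def minimize(rec, needed=NEEDED_FIELDS):
    """Keep only the fields necessary to the product or service being offered."""
    return {k: v for k, v in rec.items() if k in needed}

def deidentify(rec, salt="example-salt"):
    """Replace the direct identifier with a truncated salted hash
    (one illustrative deidentification approach among many)."""
    out = dict(rec)
    out["device_id"] = hashlib.sha256((salt + rec["device_id"]).encode()).hexdigest()[:12]
    return out

slim = deidentify(minimize(record))
print(sorted(slim))   # ['device_id', 'indoor_temp_f'] -- email and zip are never retained
```

The point of the sketch is simply that each of the staff’s options is a concrete engineering decision made at collection time, which is exactly why critics worry the recommendation will shape product design.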

This is an unsurprising recommendation in light of the fact that, in previous major speeches on the issue, FTC Chairwoman Edith Ramirez argued that, “information that is not collected in the first place can’t be misused,” and that:

The indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices. And remember, not all data is created equally. Just as there is low quality iron ore and coal, there is low quality, unreliable data. And old data is of little value.

In my forthcoming law review article, I discuss the problem with such reasoning at length and note:

Imagine if Chairwoman Ramirez’s approach to a preemptive data use “commandment” were enshrined into a law that said, “Thou shall not collect and hold onto personal information unnecessary to an identified purpose.” Such a precautionary limitation would certainly satisfy her desire to avoid hypothetical worst-case outcomes because, as she noted, “information that is not collected in the first place can’t be misused,” but it is equally true that information that is never collected may never lead to serendipitous data discoveries or new products and services that could offer consumers concrete benefits. “The socially beneficial uses of data made possible by data analytics are often not immediately evident to data subjects at the time of data collection,” notes Ken Wasch, president of the Software & Information Industry Association. If academics and lawmakers succeed in imposing such precautionary rules on the development of IoT and wearable technologies, many important innovations may never see the light of day.

FTC Commissioner Josh Wright issued a dissenting statement to the report that lambasted the staff for not conducting more robust cost-benefit analysis of the new proposed restrictions, and specifically cited how problematic the agency’s approach to data minimization was. “[S]taff merely acknowledges it would potentially curtail innovative uses of data. . . [w]ithout providing any sense of the magnitude of the costs to consumers of foregoing this innovation or of the benefits to consumers of data minimization,” he says. Similarly, in her separate statement, FTC Commissioner Maureen K. Ohlhausen worried about the report’s overly precautionary approach on data minimization when noting that, “without examining costs or benefits, [the staff report] encourages companies to delete valuable data — primarily to avoid hypothetical future harms. Even though the report recognizes the need for flexibility for companies weighing whether and what data to retain, the recommendation remains overly prescriptive,” she concludes.

Regardless, the battle lines have been drawn by the FTC staff report as the agency has made it clear that it will be stepping up its efforts to get IoT innovators to significantly slow or scale back their data collection efforts. It will be very interesting to see how the agency enforces that vision going forward and how it impacts innovation in this space. All I know is that the agency has not conducted a serious evaluation here of the trade-offs associated with such restrictions. I penned another law review article last year offering “A Framework for Benefit-Cost Analysis in Digital Privacy Debates” that they could use to begin that process if they wanted to get serious about it.

The Problem with the “Regulation Builds Trust” Argument

One of the interesting things about this and previous FTC reports on privacy and security matters is how often the agency premises the case for expanded regulation on “building trust.” The argument goes something like this (as found on page 51 of the new IoT report): “Staff believes such legislation will help build trust in new technologies that rely on consumer data, such as the IoT. Consumers are more likely to buy connected devices if they feel that their information is adequately protected.”

This is one of those commonly heard claims that sounds so straightforward and intuitive that few dare question it. But there are problems with the logic of the “we-need-regulation-to-build-trust-and-boost-adoption” arguments we often hear in debates over digital privacy.

First, the agency bases its argument mostly on polling data. “Surveys also show that consumers are more likely to trust companies that provide them with transparency and choices,” the report says. Well, of course surveys say that! It’s only logical that consumers will say this, just as they will always say they value privacy and security more generally when asked. You might as well ask people if they love their mothers!

But what consumers claim to care about and what they actually do in the real-world are often two very different things. In the real-world, people balance privacy and security alongside many other values, including choice, convenience, cost, and more. This leads to the so-called “privacy paradox,” or the problem of many people saying one thing and doing quite another when it comes to privacy matters. Put simply, people take some risks — including some privacy and security risks — in order to reap other rewards or benefits. (See this essay for more on the problem with most privacy polls.)

Second, online activity and the Internet of Things are both growing like gangbusters despite the privacy and security concerns that the FTC raises. Virtually every metric I’ve looked at that tracks IoT activity shows astonishing growth and product adoption, and projections by all the major consultancies that have studied this consistently predict continued rapid growth of IoT activity. Now, how can this be the case if, as the FTC claims, we’ll only see the IoT really take off once we get more regulation aimed at bolstering consumer trust? Of course, the agency might argue that the IoT would grow at an even faster clip than it is right now, but there is no way to prove that one way or the other. In any event, the agency cannot possibly claim that the IoT isn’t already growing at a very healthy clip — indeed, a lot of the hand-wringing the staff engages in throughout the report is premised precisely on the fact that the IoT is exploding faster than our ability to keep up with it! In reality, it seems far more likely that cost and complexity are the bigger impediments to faster IoT adoption, just as cost and complexity have always been the factors weighing most heavily on the adoption of other digital technologies.

Third, let’s say that the FTC is correct – and it is – when it says that a certain amount of trust is needed in terms of IoT privacy and security before consumers are willing to use more of these devices and services in their everyday lives. Does the agency imagine that IoT innovators don’t know that? Are markets and consumers completely irrational? The FTC says on page 44 of the report that, “If a company decides that a particular data use is beneficial and consumers disagree with that decision, this may erode consumer trust.” Well, if such a mismatch does exist, then the assumption should be that consumers can and will push back, or seek out new and better options. And other companies should be able to sense the market opportunity here to offer a more privacy-centric offering for those consumers who demand it in order to win their trust and business.

Finally, and perhaps most obviously, the problem with the argument that increased regulation will help IoT adoption is that it ignores how the regulations put in place to achieve greater “trust” might become so onerous or costly in practice that there won’t be as many innovations for us to adopt to begin with! Again, regulation — even very well-intentioned regulation — has costs and trade-offs.

In any event, if the agency is going to premise the case for expanded privacy regulation on this notion, it is going to have to do far more to make that case than simply assert it.

Once Again, No Appreciation of the Potential for Societal Adaptation

Let’s briefly shift to a subject that isn’t discussed in the FTC’s new IoT report at all.

Regular readers may get tired of me making this point, but I feel it is worth stressing again: major reports and statements by public policymakers about rapidly evolving emerging technologies almost always stress panic over patience. Rarely are public officials willing to step back, take a deep breath, and consider how a resilient citizenry might adapt to new technologies as it gradually assimilates new tools into daily life.

That is really sad, when you think about it, since humans have again and again proven capable of responding to technological change in creative ways by adopting new personal and social norms. I won’t belabor the point because I’ve already written volumes on this issue elsewhere. I tried to condense all my work into a single essay entitled, “Muddling Through: How We Learn to Cope with Technological Change.” Here’s the key takeaway:

humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Again, you almost never hear regulators or lawmakers discuss this process of individual and social adaptation, even though they must know there is something to it. One explanation is that every generation has its own techno-boogeymen and loses faith in humanity’s ability to adapt to them.

To believe that we humans are resilient, adaptable creatures is not to be indifferent to the significant privacy and security challenges associated with the new technologies in our lives today, including IoT technologies. Overly exuberant techno-optimists are often too quick to adopt a “Just-Get-Over-It!” attitude in response to the privacy and security concerns raised by others. But it is equally unforgivable for those who are worried about those same concerns to utterly ignore the reality of human adaptation to new technological realities.

Why are Educational Approaches Merely an Afterthought?

One final thing that troubled me about the FTC report was the way consumer and business education is mostly an afterthought. This is one of the most important roles that the FTC can and should play in terms of explaining potential privacy and security vulnerabilities to the general public and product developers alike.

Alas, the agency devotes so much ink to the more legalistic questions about how to address these issues that all we end up with in the report is this one paragraph on consumer and business education:

Consumers should understand how to get more information about the privacy of their IoT devices, how to secure their home networks that connect to IoT devices, and how to use any available privacy settings. Businesses, and in particular small businesses, would benefit from additional information about how to reasonably secure IoT devices. The Commission staff will develop new consumer and business education materials in this area.

I applaud that language, and I very much hope that the agency is serious about plowing more effort and resources into developing new consumer and business education materials in this area. But I’m a bit shocked that the FTC report didn’t even bother mentioning the excellent material already available on the “On Guard Online” website it helped create with a dozen other federal agencies. Worse yet, the agency failed to highlight the many other privacy education and “digital citizenship” efforts that are underway today to help on this front. I discuss those efforts in more detail in the closing section of my recent law review article.

I hope that the agency spends a little more time working on the development of new consumer and business education materials in this area instead of trying to figure out how to craft a quasi-regulatory regime for the Internet of Things. As I noted last year in this Maine Law Review article, that would be a far more productive use of the agency’s expertise and resources. I argued there that “policymakers can draw important lessons from the debate over how best to protect children from objectionable online content” and apply them to debates about digital privacy. Specifically, after a decade of searching for legalistic solutions to online safety concerns — and convening a half-dozen blue ribbon task forces to study the issue — we finally saw a rough consensus emerge that no single “silver-bullet” technological solution or legal quick-fix would work and that, ultimately, education and empowerment represented the better use of our time and resources. What was true for child safety is equally true for privacy and security for the Internet of Things.

It’s a shame the FTC staff squandered the opportunity it had with this new report to highlight all the good that could be done by getting more serious about focusing first on those alternative, bottom-up, less costly, and less controversial solutions to these challenging problems. One day we’ll all wake up and realize that we spent a lost decade debating legalistic solutions that were either technically unworkable or politically impossible. Just imagine if all the smart people who were spending all their time and energy on those approaches right now were instead busy devising and pushing educational and empowerment-based solutions instead!

One day we’ll get there. Sadly, if the FTC report is any indication, that day is still a ways off.

Television is competitive. Congress should end mass media industrial policy. http://techliberation.com/2015/01/27/television-is-competitive/ http://techliberation.com/2015/01/27/television-is-competitive/#comments Tue, 27 Jan 2015 18:41:46 +0000 http://techliberation.com/?p=75340

Congress is considering reforming television laws and solicited comment from the public last month. On Friday, I submitted a letter encouraging the reform effort. I attached the paper Adam and I wrote last year about the current state of video regulations and the need for eliminating the complex rules for television providers.

As I say in the letter, excerpted below, pay TV (cable, satellite, and telco-provided) is quite competitive, as this chart of pay TV market share illustrates. In addition to pay TV, there are broadcast television, Netflix, Sling, and other providers. Consumers have many choices, and the old industrial policy for mass media encourages rent-seeking and prevents markets from evolving.

[Chart: Pay TV market share]

Dear Chairman Upton and Chairman Walden:

Thank you for the opportunity to respond to the Committee’s December 2014 questions on video regulation.

…The labyrinthine communications and copyright laws governing video distribution are now distorting the market and therefore should be made rational. Congress should avoid favoring some distributors at the expense of free competition. Instead, policy should encourage new entrants and consumer choice.

The focus of the committee’s white paper on how to “foster” various television distributors, while understandable, was nonetheless misguided. Such an inquiry will likely lead to harmful rules that favor some companies and programmers over others, based on political whims. Congress and the FCC should get out of “fostering” the video distribution markets completely. A light-touch regulatory approach will prevent the damaging effects of lobbying for privilege and will ensure the primacy of consumer choice.

Some of the white paper’s questions may actually lead policy astray. Question 4, for instance, asks how we should “balance consumer welfare and the rights of content creators” in video markets. Congress should not pursue this line of inquiry too far. Just consider an analogous question: how do we balance consumer welfare and the interests of content creators in literature and written content? The answer is plain: we don’t. It’s bizarre to even contemplate.

Congress does not currently regulate the distribution markets of literature and written news and entertainment. Congress simply gives content producers copyright protection, which is generally applicable. The content gets aggregated and distributed on various platforms through private ordering via contract. Congress does not, as in video, attempt to keep competitive parity between competing distributors of written material: the Internet, paperback publishers, magazine publishers, books on tape, newsstands, and the like. Likewise, Congress should forego any attempt at “balancing” in video content markets. Instead, eliminate top-down communications laws in favor of generally applicable copyright laws, antitrust laws, and consumer protection laws.

As our paper shows, the video distribution marketplace has changed drastically. From the 1950s to the 1990s, cable was essentially consumers’ only option for pay TV. Those days are long gone, and consumers now have several television distributors and substitutes to choose from. From close to 100 percent market share of the pay TV market in the early 1990s, cable now has about 50 percent of the market. Consumers can choose popular alternatives like satellite- and telco-provided television as well as smaller players like wireless carriers, online video distributors (such as Netflix and Sling), wireless Internet service providers (WISPs), and multichannel video and data distribution service (MVDDS or “wireless cable”). As many consumers find Internet over-the-top television adequate, and pay TV an unnecessary expense, “free” broadcast television is also finding new life as a distributor.

The New York Times reported this month that “[t]elevision executives said they could not remember a time when the competition for breakthrough concepts and creative talent was fiercer” (“Aiming to Break Out in a Crowded TV Landscape,” January 11, 2015). As media critics will attest, we are living in the golden age of television. Content is abundant and Congress should quietly exit the “fostering competition” game. Whether this competition in television markets came about because of FCC policy or in spite of it (likely both), the future of television looks bright, and the old classifications no longer apply. In fact, the old “silo” classifications stand in the way of new business models and consumer choice.

Therefore, Congress should (1) merge the FCC’s responsibilities with the Federal Trade Commission or (2) abolish the FCC’s authority over video markets entirely and rely on antitrust agencies and consumer protection laws in television markets. New Zealand, the Netherlands, Denmark, and other countries have merged competition and telecommunications regulators. Agency merger streamlines competition analyses and prevents duplicative oversight.

Finally, instead of fostering favored distribution channels, Congress’ efforts are better spent on reforms that make it easier for new entrants to build distribution infrastructure. Such reforms increase jobs, increase competition, expand consumer choice, and lower consumer prices.

Thank you for initiating the discussion about updating the Communications Act. Reform can give America’s innovative telecommunications and mass-media sectors a predictable and technology neutral legal framework. When Congress replaces industrial planning in video with market forces, consumers will be the primary beneficiaries.

Sincerely,

Brent Skorup
Research Fellow, Technology Policy Program
Mercatus Center at George Mason University

The government sucks at cybersecurity http://techliberation.com/2015/01/20/the-government-sucks-at-cybersecurity/ http://techliberation.com/2015/01/20/the-government-sucks-at-cybersecurity/#comments Tue, 20 Jan 2015 21:19:11 +0000 http://techliberation.com/?p=75327

Originally posted at Medium.

The federal government is not about to let last year’s rash of high-profile security breaches at private organizations like Home Depot, JPMorgan, and Sony Entertainment go to waste without expanding its influence over digital activities.

Last week, President Obama proposed a new round of cybersecurity policies that would, among other things, compel private organizations to share more sensitive information about information security incidents with the Department of Homeland Security. This endeavor to revive the spirit of CISPA is only the most recent in a long line of government attempts to nationalize and influence private cybersecurity practices.

But the federal government is one of the last organizations that we should turn to for advice on how to improve cybersecurity policy.

Don’t let policymakers’ talk of getting tough on cybercrime fool you. Their own network security is embarrassing to the point of parody and has been getting worse for years despite billions of dollars spent on the problem.

[Chart: Federal cybersecurity (FISMA) spending vs. reported information security incidents, FY 2006-2013]

The chart above comes from a new analysis of federal information security incidents and cybersecurity spending by me and my colleague Eli Dourado at the Mercatus Center.

The chart draws on data from the Congressional Research Service and the Government Accountability Office. The green bars, measured on the left-hand axis, show total federal cybersecurity spending required by the Federal Information Security Management Act of 2002; the blue line, measured on the right-hand axis, shows the total number of reported information security incidents on federal systems from 2006 to 2013. The chart shows that the number of federal cybersecurity failures has increased every year since 2006, even as investments in cybersecurity processes and systems have increased considerably.
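For readers who want to see how such a dual-axis layout is built, here is a minimal matplotlib sketch. It includes only the handful of data points quoted in this post (the original chart plots the complete CRS/GAO series for every fiscal year 2006-2013), so the figure is illustrative rather than a reproduction, and the filename is arbitrary:

```python
import matplotlib
matplotlib.use("Agg")  # render to a file; no display needed
import matplotlib.pyplot as plt

# Only the values quoted in this post; the original chart uses the
# complete CRS/GAO series for FY 2006-2013.
spend_years = [2009, 2010, 2012, 2013]
spending_bn = [7.4, 12.8, 14.8, 10.3]      # FISMA spending, $ billions
incident_years = [2006, 2013]
incident_counts = [5_503, 61_214]          # reported security incidents

fig, ax_spend = plt.subplots()
ax_spend.bar(spend_years, spending_bn, color="green")
ax_spend.set_xlabel("Fiscal year")
ax_spend.set_ylabel("FISMA spending ($ billions)")

ax_incidents = ax_spend.twinx()            # second y-axis on the right
ax_incidents.plot(incident_years, incident_counts, "bo-")
ax_incidents.set_ylabel("Reported incidents")

fig.savefig("fisma_spending_vs_incidents.png")
```

The `twinx()` call is what makes a two-scale chart like this possible: both series share the fiscal-year x-axis while keeping independent vertical scales.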

In 2002, the federal government created an explicit goal for itself to modernize and strengthen its cybersecurity infrastructure by the end of that decade with the passage of the Federal Information Security Management Act (FISMA). FISMA required agency leaders to develop and implement information security protections with the guidance of offices like the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), and the Department of Homeland Security (DHS)—some of the same organizations tasked with coordinating information-sharing about cybersecurity threats with the private sector in Obama’s proposal, by the way—and authorized robust federal investments in IT infrastructure to meet these goals.

The chart is striking, but a quick data note on the spending numbers is in order. Both the dramatic increase in FISMA spending from $7.4 billion in FY 2009 to $12.8 billion in FY 2010 and the dramatic decrease in FISMA spending from $14.8 billion in FY 2012 to $10.3 billion in FY 2013 are partially attributable to OMB’s decision to change its FISMA spending calculation methodology in those years.

Even with this caveat on inter-year spending comparisons, the chart shows that the federal government has invested billions of dollars to improve its internal cybersecurity defenses in recent years. Altogether, the OMB reports that the federal government spent $78.8 billion on FISMA cybersecurity investments from FY 2006 to FY 2013.

(And this is just cybersecurity spending authorized through FISMA. When added to the various other authorizations on cybersecurity spending tucked in other federal programs, the breadth of federal spending on IT preparedness becomes staggering indeed.)

However, increased federal spending on cybersecurity is not reflected in the rate of cyberbreaches of federal systems reported by the GAO. The number of reported federal cybersecurity incidents increased by an astounding 1012% over the selected years, from 5,503 in 2006 to 61,214 in 2013.

Yes, 1012%. That’s not a typo.
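The figure follows directly from the two endpoint counts reported above; a quick back-of-the-envelope check in Python confirms it:

```python
# Reported federal information security incidents (endpoints cited above)
incidents_2006 = 5_503
incidents_2013 = 61_214

# Percentage increase from 2006 to 2013
pct_increase = (incidents_2013 - incidents_2006) / incidents_2006 * 100
print(f"{pct_increase:.0f}%")  # → 1012%
```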

[Chart: Share of reported federal information security incidents involving personally identifiable information, 2009-2013]

What’s worse, a growing number of these federal cybersecurity failures involve the potential exposure of personally identifiable information—private data about individuals’ contact information, addresses, and even Social Security numbers and financial accounts.

The second chart displays the proportion of all reported federal information security incidents that involved the exposure of personally identifiable information from 2009 to 2013. By 2013, over 40 percent of all reported cybersecurity failures involved the potential exposure of private data to outside groups.

It is hard to argue that these failures stem from a lack of adequate security investment. This is as much a problem of scale as it is of an inability to follow one’s own directions. In fact, the government’s own Government Accountability Office has been sounding the alarm about poor information security practices since 1997. After FISMA was implemented to address the problem, government employees promptly proceeded to ignore or undermine the provisions that would improve security—rendering the “solution” merely another checkbox on the bureaucrat’s list of meaningless tasks.

The GAO reported in April of 2014 that federal agencies systematically fail to meet federal security standards due to poor implementation of key FISMA practices outlined by the OMB, NIST, and DHS. After more than a decade of billion dollar investments and government-wide information sharing, in 2013 “inspectors general at 21 of the 24 agencies cited information security as a major management challenge for their agency, and 18 agencies reported that information security control deficiencies were either a material weakness or significant deficiency in internal controls over financial reporting.”

This weekend’s POLITICO report on lax federal security practices makes it easy to see how ISIS could hack into the CENTCOM Twitter account:

Most of the staffers interviewed had emailed security passwords to a colleague or to themselves for convenience. Plenty of offices stored a list of passwords for communal accounts like social media in a shared drive or Google doc. Most said they individually didn’t think about cybersecurity on a regular basis, despite each one working in an office that dealt with cyber or technology issues. Most kept their personal email open throughout the day. Some were able to download software from the Internet onto their computers. Few could remember any kind of IT security training, and if they did, it wasn’t taken seriously.

“It’s amazing we weren’t terribly hacked, now that I’m thinking back on it,” said one staffer who departed the Senate late this fall. “It’s amazing that we have the same password for everything [like social media.]”

Amazing, indeed.

What’s also amazing is the gall that the federal government has in attempting to butt its way into assuming more power over cybersecurity policy when it can’t even get its own house in order.

While cybersecurity vulnerabilities and data breaches remain a considerable problem in the private sector as well as the public sector, policies that failed to protect the federal government’s own information security are unlikely to magically work when applied to private industry. The federal government’s own poor track record of increasing data breaches and exposures of personally identifiable information renders its systems a dubious safehouse for the huge amounts of sensitive data affected by the proposed legislation.

President Obama is expected to make cybersecurity policy a key platform issue in tonight’s State of the Union address. Given his own shop’s pathetic track record in protecting its network security, one has to question both the efficacy of and the reasoning behind his proposals. The federal government should focus on properly securing its own IT systems before trying to expand its control over private systems.

Striking a Sensible Balance on the Internet of Things and Privacy http://techliberation.com/2015/01/16/striking-a-sensible-balance-on-the-internet-of-things-and-privacy/ http://techliberation.com/2015/01/16/striking-a-sensible-balance-on-the-internet-of-things-and-privacy/#comments Fri, 16 Jan 2015 21:08:39 +0000 http://techliberation.com/?p=75274

This week, the Future of Privacy Forum (FPF) released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” which I believe can help us find policy consensus regarding the privacy and security concerns associated with the Internet of Things (IoT) and wearable technologies. I’ve been monitoring IoT policy developments closely and I recently published a big working paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will appear shortly in the Richmond Journal of Law & Technology. I have also penned several other essays on IoT issues. So, I will be relating the FPF report to some of my own work.

The new FPF report, which was penned by Christopher Wolf, Jules Polonetsky, and Kelsey Finch, aims to accomplish the same goal I had in my own recent paper: sketching out constructive and practical solutions to the privacy and security issues associated with the IoT and wearable tech so as not to discourage the amazing, life-enriching innovations that could flow from this space. Flexibility is the key, they argue. “Premature regulation at an early stage in wearable technological development may freeze or warp the technology before it achieves its potential, and may not be able to account for technologies still to come,” the authors note. “Given that some uses are inherently more sensitive than others, and that there may be many new uses still to come, flexibility will be critical going forward.” (p. 3)

That flexible approach is at the heart of how the FPF authors want to see Fair Information Practice Principles (FIPPs) applied in this space. The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. The FPF authors correctly note that,

The FIPPs do not establish specific rules prescribing how organizations should provide privacy protections in all contexts, but rather provide high-level guidelines. Over time, as technologies and the global privacy context have changed, the FIPPs have been presented in different ways with different emphases. Accordingly, we urge policymakers to enable the adaptation of these fundamental principles in ways that reflect technological and market developments. (p. 4)

They go on to explain how each of the FIPPs can provide a certain degree of general guidance for the IoT and wearable tech, but also caution that: “A rigid application of the FIPPs could inhibit these technologies from even functioning, and while privacy protections remain essential, a degree of flexibility will be key to ensuring the Internet of Things can develop in ways that best help consumer needs and desires.” (p. 4) And throughout the report, the FPF authors stress the need for the FIPPs to be “practically applied” and nicely explain how the appropriate application of any particular one of the FIPPs “will depend on the circumstances.” For those reasons, they conclude by saying, “we urge policymakers to adopt a forward-thinking, flexible application of the FIPPs.” (p. 11)

The approach that Wolf, Polonetsky, and Finch set forth in this new FPF report is very much consistent with the policy framework I sketched out in my forthcoming law review article. “The need for flexibility and adaptability will be paramount if innovation is to continue in this space,” I argued. In essence, best practices need to remain just that: best practices, not fixed, static, top-down regulatory edicts. As I noted:

Regardless of whether they will be enforced internally by firms or by ex post FTC enforcement actions, best practices must not become a heavy-handed, quasi-regulatory straitjacket. A focus on security and privacy by design does not mean those are the only values and design principles that developers should focus on when innovating. Cost, convenience, choice, and usability are all important values too. In fact, many consumers will prioritize those values over privacy and security — even as activists, academics, and policymakers simultaneously suggest that more should be done to address privacy and security concerns.

Finally, best practices for privacy and security issues will need to evolve as social acceptance of various technologies and business practices evolve. For example, had “privacy by design” been interpreted strictly when wireless geolocation capabilities were first being developed, these technologies might have been shunned because of the privacy concerns they raised. With time, however, geolocation technologies have become a better understood and more widely accepted capability that consumers have come to expect will be embedded in many of their digital devices.  Those geolocation capabilities enable services that consumers now take for granted, such as instantaneous mapping services and real-time traffic updates.

This is why flexibility is crucial when interpreting the privacy and security best practices.

The only thing missing from the FPF report, I think, was a broader discussion of constructive privacy and security approaches that involve education, etiquette, and empowerment-based solutions. I would also have liked to see some discussion of how other existing legal mechanisms — privacy torts, contractual enforcement mechanisms, property rights, state “peeping Tom” laws, and existing privacy statutes — might cover some of the hard cases that could develop on this front. I discuss those and other “bottom-up” solutions in Section IV of my law review article and note that they can contribute to the sort of “layered” approach we need to address privacy and security concerns for the IoT and wearable tech.

In any event, I encourage everyone to check out the new Future of Privacy Forum report as well as the many excellent best practice guidelines they have put together to help innovators adopt sensible privacy and security best practices. FPF has done some great work on this front.

Additional Reading

Again, We Humans Are Pretty Good at Adapting to Technological Change http://techliberation.com/2015/01/16/again-we-humans-are-pretty-good-at-adapting-to-technological-change/ http://techliberation.com/2015/01/16/again-we-humans-are-pretty-good-at-adapting-to-technological-change/#comments Fri, 16 Jan 2015 16:58:19 +0000 http://techliberation.com/?p=75292

Claire Cain Miller of The New York Times posted an interesting story yesterday noting how, “Technology Has Made Life Different, but Not Necessarily More Stressful.” Her essay builds on a new study by researchers at the Pew Research Center and Rutgers University on “Social Media and the Cost of Caring.” Miller’s essay and this new Pew/Rutgers study indirectly make a point that I am always discussing in my own work, but that is often ignored or downplayed by many technological critics, namely: We humans have repeatedly proven quite good at adapting to technological change, even when it entails some heartburn along the way.

The major takeaway of the Pew/Rutgers study was that, “social media users are not any more likely to feel stress than others, but there is a subgroup of social media users who are more aware of stressful events in their friends’ lives and this subgroup of social media users does feel more stress.” Commenting on the study, Miller of the Times notes:

Fear of technology is nothing new. Telephones, watches and televisions were similarly believed to interrupt people’s lives and pressure them to be more productive. In some ways they did, but the benefits offset the stressors. New technology is making our lives different, but not necessarily more stressful than they would have been otherwise. “It’s yet another example of how we overestimate the effect these technologies are having in our lives,” said Keith Hampton, a sociologist at Rutgers and an author of the study.  . . .  Just as the telephone made it easier to maintain in-person relationships but neither replaced nor ruined them, this recent research suggests that digital technology can become a tool to augment the relationships humans already have.

I found this of great interest because I have written about how humans assimilate new technologies into their lives and become more resilient in the process as they learn various coping techniques. I elaborated on these issues in a lengthy essay last summer entitled,  “Muddling Through: How We Learn to Cope with Technological Change.” I borrowed the term “muddling through” from Joel Garreau’s terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human.  Garreau argued that history can be viewed “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”

Garreau associated this with what he called the “Prevail” scenario and he contrasted it with the “Heaven” scenario, which believes that technology drives history relentlessly, and in almost every way for the better, and the “Hell” scenario, which always worries that “technology is used for extreme evil, threatening humanity with extinction.” Under the “Prevail” scenario, Garreau argued, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he concluded. (p. 154) Or, as John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:

technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.

In my essay last summer, I sketched out the reasons why I think this “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process. Again, it comes down to the fact that people and institutions learned to cope with technological change and become more resilient over time. It’s a learning process, and we humans are good at rolling with the punches and finding new baselines along the way. While “muddling through” can sometimes be quite difficult and messy, we adjust to most of the new technological realities we face and, over time, find constructive solutions to the really hard problems.

So, while it’s always good to reflect on the challenges of life in an age of never-ending, rapid-fire technological change, there’s almost never cause for panic. Read my old essay for more discussion on why I remain so optimistic about the human condition.

Regulatory Capture: FAA and Commercial Drones Edition http://techliberation.com/2015/01/16/regulatory-capture-faa-and-commercial-drones-edition/ http://techliberation.com/2015/01/16/regulatory-capture-faa-and-commercial-drones-edition/#comments Fri, 16 Jan 2015 14:02:54 +0000 http://techliberation.com/?p=75279

Regular readers know that I can get a little feisty when it comes to the topic of “regulatory capture,” which occurs when special interests co-opt policymakers or political bodies (regulatory agencies, in particular) to further their own ends. As I noted in my big compendium, “Regulatory Capture: What the Experts Have Found”:

While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity.  Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism.

Indeed, the more I highlight the problem of regulatory capture and offer concrete examples of it in practice, the more push-back I get from true believers in the idea of “independent” agencies. Even if I can get them to admit that history offers countless examples of capture in action, and that a huge number of scholars of all persuasions have documented this problem, they will continue to insist that WE CAN DO BETTER! and that it is just a matter of having THE RIGHT PEOPLE! who will TRY HARDER!

Well, maybe. But I am a realist and a believer in historical evidence. And the evidence shows, again and again, that when Congress (a) delegates broad, ambiguous authority to regulatory agencies, (b) exercises very limited oversight over that agency, and then, worse yet, (c) allows that agency’s budget to grow without any meaningful constraint, then the situation is ripe for abuse. Specifically, where unchecked power exists, interests will look to exploit it for their own ends.

In any event, all I can do is to continue to document the problem of regulatory capture in action and try to bring it to the attention of pundits and policymakers in the hope that we can start the push for real agency oversight and reform. Today’s case in point comes from a field I have been covering here a lot over the past year: commercial drone innovation.

Yesterday, via his Twitter account, Wall Street Journal reporter Christopher Mims brought this doozy of an example of regulatory capture to my attention, which involves Federal Aviation Administration officials going to bat for the pilots who frequently lobby the agency and want commercial drone innovations constrained. Here’s how Jack Nicas begins the WSJ piece that Mims brought to my attention:

In an unfolding battle over U.S. skies, it’s man versus drone. Aerial surveyors, photographers and moviemaking pilots are increasingly losing business to robots that often can do their jobs faster, cheaper and better. That competition, paired with concerns about midair collisions with drones, has made commercial pilots some of the fiercest opponents to unmanned aircraft. And now these aviators are fighting back, lobbying regulators for strict rules for the devices and reporting unauthorized drone users to authorities. Jim Williams, head of the Federal Aviation Administration’s unmanned-aircraft office, said many FAA investigations into commercial-drone flights begin with tips from manned-aircraft pilots who compete with those drones. “They’ll let us know that, ’Hey, I’m losing all my business to these guys. They’re not approved. Go investigate,’” Mr. Williams said at a drone conference last year. “We will investigate those.”

Well, that pretty much says it all. If you’re losing business because an innovative new technology or pesky new entrant has the audacity to come onto your turf and compete, well then, just come on down to your friendly neighborhood regulator and get yourself a double serving of tasty industry protectionism!

And so the myth of “agency independence” continues, and perhaps it will never die. It reminds me of a line from those rock-and-roll sages in Guns N’ Roses: “I’ve worked too hard for my illusions just to throw them all away!”

Dispatches from CES 2015 on Privacy Implications of New Technologies http://techliberation.com/2015/01/15/dispatches-from-ces-2015-on-privacy-implications-of-new-technologies/ http://techliberation.com/2015/01/15/dispatches-from-ces-2015-on-privacy-implications-of-new-technologies/#comments Thu, 15 Jan 2015 19:22:30 +0000 http://techliberation.com/?p=75266

Over at the International Association of Privacy Professionals (IAPP) Privacy Perspectives blog, I have two “Dispatches from CES 2015” up. (#1 & #2) While I was out in Vegas for the big show, I had a chance to speak on a panel entitled, “Privacy and the IoT: Navigating Policy Issues.” (Video can be found here. It’s the second one on the video playlist.) Federal Trade Commission (FTC) Chairwoman Edith Ramirez kicked off that session and stressed some of the concerns she and others share about the Internet of Things and wearable technologies in terms of the privacy and security issues they raise.

Before and after our panel discussion, I had a chance to walk the show floor and take a look at the amazing array of new gadgets and services that will soon be hitting the market. A huge percentage of the show floor space was dedicated to IoT technologies, and wearable tech in particular. But the show also featured many other amazing technologies that promise to bring consumers a wealth of new benefits in coming years. Of course, many of those technologies will also raise privacy and security concerns, as I noted in my two essays for IAPP. The first of my dispatches focuses primarily on the Internet of Things and wearable technologies that I saw at CES. In my second dispatch, I discuss the privacy and security implications of the increasing miniaturization of cameras, drone technologies, and various robotic technologies (especially personal care robots).

I open the first column by noting that “as I was walking the floor at this year’s massive CES 2015 tech extravaganza, I couldn’t help but think of the heartburn that privacy professionals and advocates will face in coming years.” And I close the second dispatch by concluding that, “The world of technology is changing rapidly and so, too, must the role of the privacy professional. The technologies on display at this year’s CES 2015 make it clear that a whole new class of concerns are emerging that will require IAPP members to broaden their issue set and find constructive solutions to the many challenges ahead.” Jump over to the Privacy Perspectives blog to read more.

Trouble Ahead for Municipal Broadband http://techliberation.com/2015/01/14/trouble-ahead-for-municipal-broadband/ http://techliberation.com/2015/01/14/trouble-ahead-for-municipal-broadband/#comments Wed, 14 Jan 2015 21:02:34 +0000 http://techliberation.com/?p=75254

President Obama recently announced his wish for the FCC to preempt state laws that make building public broadband networks harder. Per the White House, nineteen states “have held back broadband access . . . and economic opportunity” by having onerous restrictions on municipal broadband projects.

Much of what the White House claims is PR nonsense. Most of these so-called state restrictions on public broadband are reasonable considering the substantial financial risk public networks pose to taxpayers. Minnesota and Colorado, for instance, require approval from local voters before spending money on a public network. Nevada’s “restriction” is essentially that public broadband is only permitted in the neediest, most rural parts of the state. Some states don’t allow utilities to provide broadband because utilities have a nasty habit of raising, say, everyone’s electricity bills because the money-losing utility broadband network fails to live up to revenue expectations. And so on.

It is an abuse of the English language for political activists to say municipal broadband is just a competitor to existing providers. If the federal government dropped over $100 million in a small city to build publicly-owned grocery stores with subsidized food, local grocery stores would, of course, strenuously object that this is patently unfair and harms private grocers. This is what the US government did in Chattanooga, using $100 million to build a public network. The US government has spent billions on broadband, and much of it goes to public broadband networks. The activists’ response to the carriers, who obviously complain about this “competition,” is essentially, “maybe now you’ll upgrade and compete harder.” It’s absurd on its face.

Public networks are unwise and costly. Every dollar diverted to some money-losing public network is one less to use on worthy societal needs. There are serious problems with publicly-funded retail broadband networks. A few come to mind:

1. The economic benefits of municipal broadband are dubious. A recent Mercatus economics paper by researcher Brian Deignan showed disappointing results for municipal broadband. The paper uses 23 years of BLS data from 80 cities that have deployed broadband and analyzes municipal broadband’s effect on 1) quantity of businesses; 2) employee wages; and 3) employment. Ultimately, the data suggest municipal broadband has almost zero effect on the private sector.

On the plus side, municipal broadband is associated with a 3 percent increase in the number of business establishments in a city. However, there is a small, negative effect on employee wages. There is no effect on private employment but the existence of a public broadband network is associated with a 6 percent increase in local government employment. The substantial taxpayer risk for such modest economic benefits leads many states to reasonably conclude these projects aren’t worth the trouble.

2. There are serious federalism problems with the FCC preempting state laws. Matthew Berry, FCC Commissioner Pai’s chief of staff, explains the legal risks. Cities are creatures of state law and states have substantial powers to regulate what cities do. In some circumstances, Congress can preempt state laws, but as the Supreme Court has held, for an agency to preempt state laws, Congress must provide a clear statement that the FCC is authorized to preempt. Absent a clear statement from Congress, it’s unlikely the FCC could constitutionally preempt state laws regulating municipal broadband.

3. Broadband networks are hard work. Tearing up streets, attaching to poles, and wiring homes, condos, and apartments is expensive and time-consuming. It costs thousands of dollars per home passed and the take-up rates are uncertain. Truck-rolls for routine maintenance and customer service cost hundreds of dollars per visit. Additionally, broadband network design is growing increasingly complex as several services converge onto IP networks. Interconnection requires complex commercial agreements. Further, carriers are starting to offer additional services using software-defined networks and network function virtualization. I’m skeptical that city managers can stay cutting-edge years into the future. The costs for failed networks will fall to taxpayers.

4. City governments are just not very good at supplying triple play services, as the Phoenix Center and others have pointed out. People want phone, Internet, and television in one bill (and don’t forget video-on-demand service). Cities will often find that there is a lack of interest in a broadband connection that doesn’t also provide traditional television as well. Google Fiber (not a public network, obviously) initially intended to offer only broadband service. However, they quickly found out that potential subscribers wanted their broadband and video bundled together into one contract. The Google Fiber team had to scramble to put together TV packages consumers are accustomed to. If the very competent planners at Google Fiber weren’t aware of this consumer habit, the city planners in Moose Lake and Peoria budgeting for municipal broadband may miss it, too. Further, city administrators are not particularly good at negotiating competitive video bundles (municipal cable revealed this) because of their small size and lack of expertise.

5. A municipal network can chase away commercial network expansion and investment. This, of course, is the main complaint of the cable and telco players. If there is a marginal town an ISP is considering serving or upgrading, the presence of a “public competitor” makes the decision easy. Competing against a network with ready access to taxpayer money is senseless.

6. When cities build networks where ISPs are already serving the public, ISPs do not take it lying down, either. ISPs use their considerable size and industry expertise to their advantage, like adding must-have channels to basic cable packages. The economics are particularly difficult for a city entering the market. Broadband networks have high up-front costs but fairly low marginal costs. This makes price reductions by incumbents very attractive in order to limit customer defections to the entrant. Dropping the price or raising the speeds in neighborhoods where the city builds, thereby frustrating city customer acquisition, is a common practice. Apparently some cities didn’t learn their lesson in the late 1990s when municipal cable was a (short-lived) popular idea. Cities often hemorrhaged tax dollars when faced with hardball tactics, and their penetration rates never reached the optimistic projections.

There are other complications that turn public broadband into an expensive boondoggle. People often say in surveys that they would pay more for ultra-fast broadband, but when actually offered it, many refuse to pay higher prices for higher speeds, particularly when the TV channels offered in the bundle are paltry compared to those of the “slower” existing providers. When cities do lose money, and they often do, a utility-run network will often cross-subsidize the failing broadband service, diverting electric utility customers’ dollars to maintaining broadband. Further, private carriers can drag out lawsuits to delay or prevent city networks. And your run-of-the-mill city contractor corruption and embezzlement are also possibilities.

I can imagine circumstances where municipal broadband makes sense. However, the President and the FCC are doing the public a disservice by promoting widespread publicly-funded broadband in violation of state laws. This political priority, combined with the probable Title II order next month, signals an inauspicious start to 2015.

]]>
http://techliberation.com/2015/01/14/trouble-ahead-for-municipal-broadband/feed/ 2
Making Sure the “Trolley Problem” Doesn’t Derail Life-Saving Innovation http://techliberation.com/2015/01/13/making-sure-the-trolley-problem-doesnt-derail-life-saving-innovation/ http://techliberation.com/2015/01/13/making-sure-the-trolley-problem-doesnt-derail-life-saving-innovation/#comments Tue, 13 Jan 2015 18:07:16 +0000 http://techliberation.com/?p=75238

I want to highlight an important new blog post (“Slow Down That Runaway Ethical Trolley”) on the ethical trade-offs at work with autonomous vehicle systems by Bryant Walker Smith, a leading expert on these issues. Writing over at Stanford University’s Center for Internet and Society blog, Smith notes that, while serious ethical dilemmas will always be present with such technologies, “we should not allow the perfect to be the enemy of the good.” He notes that many ethical philosophers, legal theorists, and media pundits have recently been actively debating variations of the classic “Trolley Problem” and its ramifications for the development of autonomous or semi-autonomous systems. (Here’s some quick background on the Trolley Problem, a thought experiment involving the choices made during various no-win accident scenarios.) Commenting on the increased prevalence of the Trolley Problem in these debates, Smith observes that:

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

That’s a great question and one that Ryan Hagemann and I put some thought into as part of our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential cost of delaying their widespread adoption. When it comes to “Trolley Problem”-like ethical questions, Hagemann and I argue that “these ethical considerations need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue later in the paper:

Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)

None of this should be read to suggest that the ethical issues being raised by some philosophers or other pundits are unimportant. To the contrary, they are raising legitimate concerns about how ethics are “baked-in” to the algorithms that control autonomous or semi-autonomous systems. It is vital we continue to debate the wisdom of the choices made by the companies and programmers behind those technologies and consider better ways to inform and improve their judgments about how to ‘optimize the sub-optimal,’ so to speak. After all, when you are making decisions about how to minimize the potential for harm — including the loss of life — there are many thorny issues that must be considered and all of them will have downsides. Smith considers a few when he notes:

Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.

Again, these are all valid questions deserving serious exploration, but we’re not having this discussion in a vacuum. Ivory Tower debates cannot be divorced from real-world realities. Although road safety has been improving for many years, people are still dying at a staggering rate due to vehicle-related accidents. Specifically, in 2012, there were 33,561 total traffic fatalities (92 per day) and 2,362,000 people injured (6,454 per day) in over 5,615,000 reported crashes. And, to reiterate, the bulk of those accidents were due to human error.
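Those per-day figures check out against the annual totals (2012 was a leap year, so the division is over 366 days):

```python
# Quick arithmetic check of the 2012 NHTSA per-day figures quoted above.
fatalities, injuries = 33_561, 2_362_000
days = 366  # 2012 was a leap year
print(round(fatalities / days))  # 92 deaths per day
print(round(injuries / days))    # 6454 injuries per day
```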

That is a staggering toll and anything we can do to reduce it significantly is something we need to be pursuing with great vigor, even while we continue to sort through some of those challenging ethical issues associated with automated systems and algorithms. Smith argues, correctly in my opinion, that “a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. … [T]his simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.”
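For illustration only, Smith's "weight general rules of behavior" idea might be sketched as a simple scoring function over candidate maneuvers. Every rule, weight, maneuver, and number below is invented; none comes from Smith's post or any real vehicle system.

```python
# A toy sketch of the "weighted general rules" approach: instead of
# solving trolley-style dilemmas case by case, score each candidate
# maneuver against simple weighted heuristics and pick the best one.

RULES = [
    # (name, weight, scoring function: maneuver -> score in [0, 1])
    ("avoid humans",    10.0, lambda m: 0.0 if m["hits_human"] else 1.0),
    ("avoid obstacles",  5.0, lambda m: 0.0 if m["hits_obstacle"] else 1.0),
    ("decelerate",       2.0, lambda m: 1.0 - m["speed_after"] / m["speed_before"]),
    ("stay in lane",     1.0, lambda m: 1.0 if m["stays_in_lane"] else 0.0),
]

def choose_maneuver(candidates):
    """Pick the candidate maneuver with the highest weighted score."""
    return max(candidates, key=lambda m: sum(w * f(m) for _, w, f in RULES))

# Two hypothetical emergency options: brake hard in-lane toward debris,
# or swerve into an empty adjacent lane.
brake = {"hits_human": False, "hits_obstacle": True,
         "speed_before": 20.0, "speed_after": 5.0, "stays_in_lane": True}
swerve = {"hits_human": False, "hits_obstacle": False,
          "speed_before": 20.0, "speed_after": 18.0, "stays_in_lane": False}
best = choose_maneuver([brake, swerve])  # swerve wins: 15.2 vs. 12.5
```

The point of the sketch is Smith's: simple weighted rules will sometimes fail, but they are implementable today, whereas exact dilemma-solving is not.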

Quite right. Indeed, the next time someone poses an ethical thought experiment along the lines of the Trolley Problem, do what I do and reverse the equation. Ask them about the ethics of slowing down the introduction of a technology that could (potentially significantly) lower the nearly 100 deaths and over 6,000 injuries that vehicle-related accidents cause each day in the United States. Because that’s no hypothetical thought experiment; that’s the world we live in right now.

______________

(P.S. The late, great political scientist Aaron Wildavsky crafted a framework for considering these complex issues in his brilliant 1988 book, Searching for Safety. No book has had a more significant influence on my thinking about these and other “risk trade-off” issues since I first read it 25 years ago. I cannot recommend it highly enough. I discussed Wildavsky’s framework and vision in my recent little book on “Permissionless Innovation.” Readers might also be interested in my August 2013 essay, “On the Line between Technology Ethics vs. Technology Policy,” which featured an exchange with ethical philosopher Patrick Lin, co-editor of an excellent collection of essays on Robot Ethics: The Ethical and Social Implications of Robotics. You should add that book to your shelf if you are interested in these issues.)

 

]]>
http://techliberation.com/2015/01/13/making-sure-the-trolley-problem-doesnt-derail-life-saving-innovation/feed/ 0
How the FCC Killed a Nationwide Wireless Broadband Network http://techliberation.com/2015/01/09/how-the-fcc-killed-a-nationwide-wireless-broadband-network/ http://techliberation.com/2015/01/09/how-the-fcc-killed-a-nationwide-wireless-broadband-network/#comments Fri, 09 Jan 2015 19:52:27 +0000 http://techliberation.com/?p=75222

Many readers will recall the telecom soap opera featuring the GPS industry and LightSquared and the subsequent bankruptcy of LightSquared. Economist Thomas W. Hazlett (who is now at Clemson, after a long tenure at the GMU School of Law) and I wrote an article published in the Duke Law & Technology Review titled Tragedy of the Regulatory Commons: Lightsquared and the Missing Spectrum Rights. The piece documents LightSquared’s ambitions and dramatic collapse. Contrary to popular reporting on this story, this was not a failure of technology. We make the case that, instead, the FCC’s method of rights assignment led to the demise of LightSquared and deprived American consumers of a new nationwide wireless network. Our analysis has important implications as the FCC and Congress seek to make wide swaths of spectrum available for unlicensed devices. Namely, our paper suggests that the top-down administrative planning model is increasingly harming consumers and delaying new technologies.

Read commentary from the GPS community about LightSquared and you’ll get the impression LightSquared is run by rapacious financiers (namely CEO Phil Falcone) who were willing to flout FCC rules and endanger thousands of American lives with their proposed LTE network. LightSquared filings, on the other hand, paint the GPS community as defense-backed dinosaurs who abused the political process to protect their deficient devices from an innovative entrant. As is often the case, it’s more complicated than these morality plays. We don’t find villains in this tale–simply destructive rent-seeking triggered by poor FCC spectrum policy.

We avoid assigning fault to either LightSquared or GPS, but we stipulate that there were serious interference problems between LightSquared’s network and GPS devices. Interference is not an intractable problem, however. Interference is resolved every day in other circumstances. The problem here was intractable because GPS users are dispersed and unlicensed (including government users) and could not coordinate and bargain with LightSquared when problems arose. There is no feasible way for GPS companies to track down users and compel them to adopt more efficient devices, for instance, even if LightSquared compensated them for the hassle. Knowing that GPS mitigation was infeasible, LightSquared’s only recourse after GPS users objected to the new LTE network was the political and regulatory process, a fight LightSquared lost badly. The biggest losers, however, were consumers, who were deprived of another wireless broadband network because FCC spectrum assignment prevented win-win bargaining between licensees.

Our paper provides critical background to this dispute. Around 2004, because satellite phone spectrum was underused, the FCC permitted satellite phone licensees flexibility to repurpose some of their spectrum for use in traditional cellular phone networks. (Many people are appalled to learn that spectrum policy still largely resembles Soviet-style command-and-control. The FCC tells the wireless industry, essentially: “You can operate satellite phones only in band X. You can operate satellite TV in band Y. You can operate broadcast TV in band Z.” and assigns spectrum to industry players accordingly.) Seeing this underused satellite phone spectrum, LightSquared acquired some of this flexible satellite spectrum so that LightSquared could deploy a nationwide cellular phone network in competition with Verizon Wireless and AT&T Mobility. LightSquared had spent $4 billion in developing its network and reportedly had plans to spend $10 billion more when things ground to a halt.

In early 2012, the Department of Commerce objected to LightSquared’s network on the grounds that the network would interfere with GPS units (including, reportedly, DOD and FAA instruments). Immediately, the FCC suspended LightSquared’s authorization to deploy a cellular network and backtracked on the 2004 rules permitting cellular phones in that band. Three months later, LightSquared declared bankruptcy. This was a non-market failure, not a market failure. This regulatory failure obtains because virtually any interference to existing wireless operations is prohibited even if the social benefits of a new wireless network are vast.

This analysis is not simply scholarly theory about the nature of regulation and property rights. We provide real-world evidence supporting our contention that, had the FCC assigned flexible, de facto property rights to GPS licensees, as it does in some other bands, rather than fragmenting rights among unlicensed users, LightSquared might be in operation today, serving millions with wireless broadband. Our evidence comes, in fact, from LightSquared’s deals with non-GPS parties. Namely, LightSquared had interference problems with another satellite licensee on adjacent spectrum–Inmarsat.

Inmarsat provides public safety, aviation, and national security applications and hundreds of thousands of devices to government and commercial users. The LightSquared-Inmarsat interference problems were unavoidable, but because Inmarsat had de facto property rights to its spectrum, it could internalize financial gains and coordinate with LightSquared. The result was classic Coasian bargaining. The two companies swapped spectrum and struck an agreement in 2010 in which LightSquared would pay Inmarsat over $300 million. Flush with cash and spectrum, Inmarsat could rationalize its spectrum holdings and replace devices that wouldn’t play nicely with LightSquared LTE operations.

These trades avoided the non-market failure the FCC produced by giving GPS users fragmented, non-exclusive property rights. When de facto property rights are assigned to licensees, contentious spectrum border disputes typically give way to private ordering. The result is regular spectrum swaps and sales between competitors. Wireless licensees like Verizon, AT&T, Sprint, and T-Mobile deal with local interference and unauthorized operations daily because they have enforceable, exclusive rights to their spectrum. The FCC, unfortunately, never assigned these kinds of spectrum rights to the GPS industry.

The evaporation of billions of dollars of LightSquared funds was a non-market failure, not a market failure and not a technology failure. The economic loss to consumers was even greater than LightSquared’s. Different FCC rules could have permitted welfare-enhancing coordination between LightSquared and GPS. The FCC’s error was the nature of rights the agency assigned for GPS use. By authorizing the use of millions of unlicensed devices adjacent to LightSquared’s spectrum, the FCC virtually ensured that future attempts to reallocate spectrum in these bands would prove contentious. Going forward, the FCC should think far less about which technologies they want to promote and more about the nature of spectrum rights assigned. For tech entrepreneurs and policy entrepreneurs to create innovative new wireless products, they need well-functioning spectrum markets. The GPS experience shows vividly what to avoid.

]]>
http://techliberation.com/2015/01/09/how-the-fcc-killed-a-nationwide-wireless-broadband-network/feed/ 8
My Writing on Internet of Things (Thus Far) http://techliberation.com/2015/01/05/my-writing-on-internet-of-things-thus-far/ http://techliberation.com/2015/01/05/my-writing-on-internet-of-things-thus-far/#comments Mon, 05 Jan 2015 16:55:41 +0000 http://techliberation.com/?p=75210

I’ve spent much of the past year studying the potential public policy ramifications associated with the rise of the Internet of Things (IoT). As I was preparing some notes for my Jan. 6th panel discussion on “Privacy and the IoT: Navigating Policy Issues” at the 2015 CES show, I went back and collected all my writing on IoT issues so that I would have everything in one place. Thus, down below I have listed most of what I’ve done over the past year or so. Most of this writing is focused on the privacy and security implications of the Internet of Things, and wearable technologies in particular.

I plan to stay on top of these issues in 2015 and beyond because, as I noted when I spoke on a previous CES panel on these issues, the Internet of Things finds itself at the center of what we might think of as a perfect storm of public policy concerns: privacy, safety, security, intellectual property, economic / labor disruptions, automation concerns, wireless spectrum issues, technical standards, and more. When a new technology raises one or two of these policy concerns, innovators in those sectors can expect some interest and inquiries from lawmakers or regulators. But when a new technology potentially touches all of these issues, innovators in that space can expect an avalanche of attention and a potential world of regulatory trouble. Moreover, it sets the stage for a grand “clash of visions” about the future of IoT technologies that will continue to intensify in coming months and years.

That’s why I’ll be monitoring developments closely in this field going forward. For now, here’s what I’ve done on this issue as I prepare to head out to Las Vegas for another CES extravaganza that promises to showcase so many exciting IoT technologies.

]]>
http://techliberation.com/2015/01/05/my-writing-on-internet-of-things-thus-far/feed/ 0
Hack Hell http://techliberation.com/2014/12/31/hack-hell/ http://techliberation.com/2014/12/31/hack-hell/#comments Wed, 31 Dec 2014 19:24:58 +0000 http://techliberation.com/?p=75160

2014 was quite the year for high-profile hackings and puffed-up politicians trying to out-ham each other on who is tougher on cybercrime. I thought I’d assemble some of the year’s worst hits to ring in 2015.

In no particular order:

Home Depot: The 2013 Target breach that leaked around 40 million customer financial records was unceremoniously topped by Home Depot’s breach of over 56 million payment cards and 53 million email addresses in July. Both companies fell prey to similar infiltration tactics: the hackers obtained passwords from a vendor of each retail giant and exploited a vulnerability in the Windows OS to install malware in the firms’ self-checkout lanes that collected customers’ credit card data. Millions of customers became vulnerable to phishing scams and credit card fraud—with the added headache of changing payment card accounts and updating linked services. (Your intrepid blogger was mysteriously locked out of Uber for a harrowing 2 months before realizing that my linked bank account had changed thanks to the Home Depot hack and I had no way to log back in without a tedious customer service call. Yes, I’m still miffed.)

The Fappening: 2014 was a pretty good year for creeps, too. Without warning, the prime celebrity booties of popular starlets like Scarlett Johansson, Kim Kardashian, Kate Upton, and Ariana Grande mysteriously flooded the Internet in the September event crudely immortalized as “The Fappening.” Apple quickly jumped to investigate its iCloud system that hosted the victims’ stolen photographs, announcing shortly thereafter that the “celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions” rather than any flaw in its system. The sheer volume of material and the caliber of the celebrities targeted suggest this was not the work of a lone wolf but a chain reaction of leaks, collected over time and triggered by one larger dump. For what it’s worth, some dude on 4chan claimed the Fappening was the product of an “underground celeb n00d-trading ring that’s existed for years.” While the event prompted a flurry of discussion about online misogyny, content host ethics, and legalistic tugs-of-war over DMCA takedown requests, it unfortunately did not generate the productive conversation about good privacy and security practices that I had initially hoped for.

The Snappening: The celebrity-targeted Fappening was followed by the layperson’s “Snappening” in October, when almost 100,000 photos and 10,000 personal videos sent through the popular Snapchat messaging service, some of them depicting underage nudity, were leaked online. The hackers did not target Snapchat itself, but instead exploited a third-party client called SnapSave that allowed users to save images and videos that would normally disappear after a certain amount of time on the Snapchat app. (Although Snapchat doesn’t exactly have the best security record anyway: in 2013, contact information for 4.6 million of its users was leaked online before the service landed in hot water with the FTC earlier this year for “deceiving” users about their privacy practices.) The hackers received access to a 13GB library of old Snapchat messages and dumped the images on a searchable online directory. As with the Fappening, discussion surrounding the Snappening tended to prioritize scolding service providers over promoting good personal privacy and security practices to consumers.

Las Vegas Sands Corp.: Not all of this year’s most infamous hacks sought sordid photos or privateering profit. 2014 also saw the rise of the revenge hack. In February, Iranian hackers infiltrated politically-active billionaire Sheldon Adelson’s Sands Casino not for profit or data, but for pure punishment. Adelson, a staunchly pro-Israel figure and partial owner of many Israeli media companies, drew intense Iranian ire after fantasizing about detonating an American nuclear warhead in the Iranian desert as a threat during his speech at Yeshiva University. Hackers released crippling malware into the Sands IT infrastructure early in the year, which proceeded to shut down email services, wipe hard drives clean, and destroy thousands of company computers, laptops, and expensive servers. The Sands website was also hacked to display “a photograph of Adelson chumming around with [Israeli Prime Minister] Netanyahu,” along with the message “Encouraging the use of Weapons of Mass Destruction, UNDER ANY CONDITION, is a Crime,” and a data dump of Sands employees’ names, titles, email addresses, and Social Security numbers. Interestingly, Sands was able to contain the damage internally so that guests and gamblers had no idea of the chaos that was ravaging casino IT infrastructure. Public knowledge of the hack did not surface until early December, around the time of the Sony hack. It is possible that other large corporations have suffered similar cyberattacks this year in silence.

JP Morgan: You might think that one of the world’s largest banks would have security systems that are near impossible to crack. This was not the case at JP Morgan. From June to August, hackers infiltrated JP Morgan’s sophisticated security system and siphoned off massive amounts of sensitive financial data. The New York Times reports that “the hackers appeared to have obtained a list of the applications and programs that run on JPMorgan’s computers — a road map of sorts — which they could crosscheck with known vulnerabilities in each program and web application, in search of an entry point back into the bank’s systems, according to several people with knowledge of the results of the bank’s forensics investigation, all of whom spoke on the condition of anonymity.” Some security experts suspect that a nation-state was ultimately behind the infiltration due to the sophistication of the attack and the fact that the hackers neglected to immediately sell or exploit the data or attempt to steal funds from consumer accounts. The JP Morgan hack set off alarm bells among influential financial and governmental circles since banking systems were largely considered to be safe and impervious to these kinds of attacks.

Sony: What a tangled web this was! On November 24, Sony employees were greeted by the mocking grin of a spooky screen skeleton and informed they had been “Hacked by the #GOP” and that there was more to come. It was soon revealed that Sony’s email and computer systems had been infiltrated and shut down while some 100 terabytes of data had been stolen. The hackers proceeded to leak embarrassing company information, including emails in which executives made racial jokes, compensation data revealing a considerable gender wage disparity, and unreleased studio films like Annie and Mr. Turner. We also learned about “Project Goliath,” a conspiracy among the MPAA and six major studios (Universal, Sony, Fox, Paramount, Warner Bros., and Disney) to revive the spirit of SOPA and attack piracy on the web “by working with state attorneys general and major ISPs like Comcast to expand court power over the way data is served.” (Goliath was their not-exactly-subtle codeword for Google.) Somewhere along the way, a few folks got wild notions that North Korea was behind this attack because of the nation’s outrage at the latest Rogen romp, The Interview. Most cybersecurity experts doubt that the hermit nation was behind the attack, although the official KCNA statement enthusiastically “supports the righteous deed.” The absurdity of the official narrative did not prevent most of our world-class journalistic and political establishment from running with the story and beating the drums of cyberwar. Even the White House and FBI goofed. The FBI and State Department still maintain North Korean culpability, even as research compiled by independent security analysts points more and more to a collection of disgruntled former Sony employees and independent lulz-seekers. Troublingly, the Obama administration publicly entertained cyberwar countermeasures against the troubled communist nation on such slim evidence. A few days later, the Internet in North Korea was mysteriously shut down. I wonder what might have caused that? Truly a mess all around.

LizardSquad: Speaking of Sony hacks, the spirit of LulzSec is alive in LizardSquad. On Christmas Day, the black hat collective knocked out Sony’s PlayStation network and Microsoft’s Xbox servers with a massive distributed denial of service (DDoS) attack, to the great vengeance and furious anger of gamers avoiding family gatherings across the country. These guys are not your average script-kiddies. NexusGuard chief scientist Terrence Gareau warns that the unholy lizards boast an artillery that far exceeds normal DDoS attacks. This seems right, given the apparent difficulty that giants Sony and Microsoft had in responding to the attacks. For their part, LizardSquad claims the strength of their attack exceeded the previous record against Cloudflare this February. Megaupload Internet lord Kim Dotcom swooped in to save gamers’ Christmas festivities with a little bit of information-age, uh, “justice.” The attacks were allegedly called off after Dotcom offered the hacking collective 3,000 Mega vouchers (normally worth $99 each) for his content hosting empire if they agreed to cease. The FBI is investigating the lizards for the attacks. LizardSquad then turned their attention to the Tor network, creating thousands of new relays, comprising a worrying portion of the network’s roughly 8,000 relays, in an effort to unmask users. Perhaps they mean to publicize the network’s vulnerabilities? The group’s official Twitter bio reads, “I cry when Tor deserves to die.” Could this be related to the recent Pando-Tor drama that reinvigorated skepticism of Tor? As with any online brouhaha involving clashing numbers of privacy-obsessed computer whizzes with strong opinions, this incident has many hard-to-read layers (sorry!). While the Tor campaign is still developing, LizardSquad has been keeping busy with its newly launched Lizard Stresser, a DDoS-for-hire tool that anyone can use for a small fee. These lizards appear very intent on making life as difficult as possible for the powerful parties they’ve identified as enemies and will provide some nice justifications for why governments need more power to crack down on cybercrime.

What a year! I wonder what the next one will bring.

One sure bet for 2015 is increasing calls for enhanced regulatory powers. Earlier this year, Eli and I wrote a Mercatus Research paper explaining why top-down solutions to cybersecurity problems can backfire and make us less secure. We specifically analyzed President Obama’s developing Cybersecurity Framework, but the issues we discuss apply to other rigid regulatory solutions as well. On December 11, in the midst of North Korea’s red herring debut in the Sony debacle, the Senate passed the Cybersecurity Act of 2014, which contains many of the same principles outlined in the Framework. The Act, which still needs House approval, strengthens the Department of Homeland Security’s role in controlling cybersecurity policy by directing DHS to create industry cybersecurity standards and begin routine information-sharing with private entities.

Ranking Member of the Senate Homeland Security Committee Tom Coburn had this to say: “Every day, adversaries are working to penetrate our networks and steal the American people’s information at a great cost to our nation. One of the best ways that we can defend against cyber attacks is to encourage the government and private sector to work together and share information about the threats we face.”

While the problems of poor cybersecurity and increasing digital attacks are undeniable, the solutions proposed by politicians like Coburn are dubious. The federal government should probably try to get its own house in order before it undertakes to save the cyberproperties of the nation. The Government Accountability Office reports that the federal government suffered almost 61,000 cyber attacks and data breaches last year. The DHS itself was hacked in 2012, while a 2013 GAO report criticized DHS for poor security practices, finding that “systems are being operated without authority to operate; plans of action and milestones are not being created for all known information security weaknesses or mitigated in a timely manner; and baseline security configuration settings are not being implemented for all systems.” GAO also reports that when federal agencies develop cybersecurity practices like those encouraged in the Cybersecurity Framework or the Cybersecurity Act of 2014, they are inconsistently and insufficiently implemented.

Given the federal government’s poor track record managing its own system security, we shouldn’t expect miracles when it takes a leadership role for the nation.

Another trend to watch will be the development of a more robust cybersecurity insurance market. The Wall Street Journal reports that 2014’s rash of hacking attacks stimulated sales of formerly obscure cyberinsurance packages.

The industry had struggled in the past due to its novelty and the lack of historical data with which to accurately price insurance packages. This year, demand has been sufficiently stimulated, and actuaries have become familiar enough with the relevant risks, that the practice has finally gone mainstream. Policies can cover “the costs of [data breach] investigations, customer notifications and credit-monitoring services, as well as legal expenses and damages from consumer lawsuits” and “reimbursement for loss of income and extra expenses resulting from suspension of computer systems, and provide payments to cover recreation of databases, software and other assets that were corrupted or destroyed by a computer attack.” As the market matures, cybersecurity insurers may start more actively assessing firms’ digital vulnerabilities and recommend improvements to their systems in exchange for a lower premium payment, as is common in other insurance markets.
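The pricing dynamic described above — premiums grounded in expected loss, with discounts for verified security improvements — can be sketched in a few lines. Every number here (breach probability, average loss, loading factor, mitigation discount) is a hypothetical assumption for illustration, not actual actuarial data:

```python
# Back-of-the-envelope cyberinsurance pricing sketch.
# All figures below are illustrative assumptions, not industry data.

def expected_annual_loss(breach_prob, avg_loss):
    """Expected loss = probability of a breach in a given year
    times the average cost of a breach."""
    return breach_prob * avg_loss

def premium(breach_prob, avg_loss, loading=1.3, mitigation_discount=0.0):
    """Price a policy as expected loss times a loading factor
    (covering the insurer's overhead and profit), reduced by any
    discount granted for verified security improvements."""
    base = expected_annual_loss(breach_prob, avg_loss) * loading
    return base * (1.0 - mitigation_discount)

# A firm with a 5% annual breach probability and a $2M average loss:
p_no_audit = premium(0.05, 2_000_000)  # roughly $130,000/year

# The same firm, after a security assessment earns it a 15% discount:
p_audited = premium(0.05, 2_000_000, mitigation_discount=0.15)  # roughly $110,500/year
```

The point of the sketch is the incentive in the last two lines: once the insurer can observe a firm’s security posture, better practices translate directly into a lower premium — the same feedback loop fire insurers use with sprinkler systems.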

Still, nothing ever beats good old-fashioned personal responsibility. One of the easiest ways to ensure your own privacy and security online is to take the time to learn how best to protect yourself or your business by developing good habits, using the right services, and remaining conscientious about your digital activities. That’s my New Year’s resolution. I think it should be yours, too! :)

Happy New Year, all!

The 10 Most-Read Posts of 2014 http://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/ Tue, 30 Dec 2014 16:36:34 +0000

As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy.

10. New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts.

In July, Jerry Brito wrote about New York’s proposed framework for regulating digital currencies like Bitcoin.

My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.

9. Google Fiber: The Uber of Broadband

In February, I noted some of the parallels between Google Fiber and ride-sharing, in that new entrants are upending the competitive and regulatory status quo to the benefit of consumers.

The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly-situated competitors and resist the temptation to remedy past errors with more distortions.

8. The Debate over the Sharing Economy: Talking Points & Recommended Reading

In September, Adam Thierer appeared on Fox Business Network’s Stossel show to talk about the sharing economy. In a TLF post, he expands upon his televised commentary and highlights five main points.

7. CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?

After attending the 2014 Consumer Electronics Show in January, Adam wrote a prescient post about the promise of the Internet of Things and the regulatory risks ahead.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers…. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.

6. Defining “Technology”

Earlier this year, Adam compiled examples of how technologists and experts define “technology,” with entries ranging from the Oxford Dictionary to Peter Thiel. It’s a slippery exercise, but

if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”

5. The Problem with “Pessimism Porn”

Adam highlights the tendency of tech press, academics, and activists to mislead the public about technology policy by sensationalizing technology risks.

The problem with all this, of course, is that it perpetuates societal fears and distrust. It also sometimes leads to misguided policies based on hypothetical worst-case thinking…. [I]f we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon them—it means that best-case scenarios will never come about.

4. Mark T. Williams predicted Bitcoin’s price would be under $10 by now; it’s over $600

Professor Mark T. Williams predicted in December 2013 that by mid-2014, Bitcoin’s price would fall to below $10. In mid-2014, Jerry commends Prof. Williams for providing, unlike most Bitcoin watchers, a bold and falsifiable prediction about Bitcoin’s value. However, as Jerry points out, that prediction was erroneous: Bitcoin’s 2014 collapse never happened and the digital currency’s value exceeded $600.

3. What Vox Doesn’t Get About the “Battle for the Future of the Internet”

In May, Tim Lee wrote a Vox piece about net neutrality and the Netflix-Comcast interconnection fight. Eli Dourado posted a widely-read and useful corrective to some of the handwringing in the Vox piece about interconnection, ISP market power, and the future of the Internet.

I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless…. There is nothing unseemly about Netflix making … payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).

2. Muddling Through: How We Learn to Cope with Technological Change

The second most-read TLF post of 2014 is also the longest and most philosophical in this top-10 list. Adam wrote a popular and in-depth post about the social effects of technological change and notes that technology advances are largely for consumers’ benefit, yet “[m]odern thinking and scholarship on the impact of technological change on societies has been largely dominated by skeptics and critics.” The nature of human resilience, Adam explains, should encourage a cautiously optimistic view of technological change.

1. Help me answer Senate committee’s questions about Bitcoin

Two days into 2014, Jerry wrote the most-read TLF piece of the past year. Jerry had testified before the Senate Homeland Security and Governmental Affairs Committee in 2013 as an expert on Bitcoin. The Committee requested more information about Bitcoin post-hearing and Jerry solicited comment from our readers.

Thank you to our loyal readers for continuing to visit The Technology Liberation Front. It was a busy year for tech and telecom policy, and 2015 promises to be similarly exciting. Have a happy and safe New Year!
