Internet of Things & Wearable Tech – Technology Liberation Front (https://techliberation.com) – Keeping politicians’ hands off the Net & everything else related to technology

Liberty and Security in the Proposed Internet of Things Cybersecurity Improvement Act of 2017 (Wed, 23 Aug 2017)

On August 1, Sens. Mark Warner and Cory Gardner introduced the “Internet of Things Cybersecurity Improvement Act of 2017.” The goal of the legislation, according to its sponsors, is to establish “minimum security requirements for federal procurements of connected devices.” Pointing to the growing number of connected devices and their use in prior cyber-attacks, the sponsors aim to provide flexible requirements that limit the vulnerabilities of such networks. Specifically, the bill requires all new Internet of Things (IoT) devices to be patchable, free of known vulnerabilities, and reliant on standard protocols. Overall, the legislation attempts to increase and standardize the baseline security of connected devices while still allowing innovation in the field to remain relatively permissionless. As Ryan Hagemann[1] at the Niskanen Center states, the bill is generally perceived as a step in the right direction: it promotes security while limiting the potential harms of regulation to overall innovation in the Internet of Things.

The proposed legislation creates security requirements only for Internet of Things products purchased by the federal government. It therefore does not directly address the perceived market failure in securing the Internet of Things for state and local governments or for consumers. It is thus possible either that further state or federal legislation will develop different security norms in these areas, or that the market will be left to sort out what level of security such products need. Similarly, innovators might create different versions of products for consumers than for the government if they found the security requirements of the federal procurement rules unnecessary. At the same time, consumers and other levels of government might reject such products if they perceive them as less secure. For example, state and federal governments have independently developed their own protocols and requirements for security in IT and telecommunications services; while all require some level of security, the exact requirements vary. And while most consumers still expect or opt in to some level of security for their personal computers, there are different expectations for security protocols on government and medical computer networks. A similar phenomenon could emerge in the Internet of Things, with the devices procured by the government being more secure than those available to the average consumer.

Defining and quantifying the Internet of Things can be difficult as new connected devices, from toasters to teddy bears, continue to arrive seemingly daily. As Ariel Rabkin discusses, the bill defines the scope of covered devices with the broad, ambiguous term “Internet-connected device,” which could cover not only new connected devices but also much more mundane, general-purpose items such as laptops and smartphones. This ambiguity presents a serious concern. Given that the security guidelines are to be issued by the Office of Management and Budget in conjunction with each executive agency, we could see agencies use soft law in an attempt to push Internet of Things entrepreneurs to adopt such standards beyond the items the government actually procures. Because the scope of covered items is ambiguous, the bill also raises questions about emerging technologies such as connected cars, where agencies are already discussing security standards, and about devices such as laptops and cell phones, where government and agency standards already exist. If not clarified, such a broad definition has the potential to create uncertainty in the agency-based security standards for procurement. And while the initial standards are aimed at federal procurement, delegating them to agencies could lead to broader agency threats in the Internet of Things generally and to the use of government procurement standards as a type of soft law to influence the pace and course of innovation.

The proposed legislation provides a basic start on limiting liability for Internet of Things researchers and systems security architects, especially when coupled with existing intermediary protections. Unlike the FTC’s strict liability data security rules, the proposed legislation carves out safe harbors for good-faith security research and testing, updating the Digital Millennium Copyright Act (DMCA) and Computer Fraud and Abuse Act (CFAA) to provide safe harbors so long as the device complies with the guidelines issued under the new legislation. This, however, creates questions of liability for non-federal purchasers. First, if a device fails to comply with the proposed standards in the consumer market, could the presence of a more secure government alternative be used to support a design-defect argument as the availability of a reasonable alternative design? And if not for an individual consumer, then what about a state or local government? Under the proposed legislation, merely failing to comply with the standards in a consumer-grade product does not seem likely to give rise to a case against an Internet of Things producer. The proposed legislation also does not appear to adequately address a safe harbor for an insufficient fix or a latent defect. While these situations should not immediately render a company negligent, there are concerns that an insufficient patch might exacerbate rather than solve a problem. Nor does it address the situation where a third party fails to update the security measures, or where the government in some way modifies existing protocols on the device and inadvertently changes existing security features.

In general, the Internet of Things Cybersecurity Improvement Act of 2017 provides a base level of security that could lead to greater adoption by government entities without disrupting innovation in the consumer market. At the same time, its broad definition of the Internet of Things risks potential soft-law abuse, and its specificity to government procurement limits its broader impact on IoT security. If passed, the act might promote security across devices and broader innovation in security protocols without holding such technology captive.

[1] Ryan provided feedback on an earlier draft of this post.

FDA, Biohacking & the “Right to Try” for Families (Mon, 09 May 2016)

In theory, the Food & Drug Administration (FDA) exists to save lives and improve health outcomes. All too often, however, that goal is hindered by the agency’s highly bureaucratic, top-down, command-and-control orientation toward drug and medical device approval.

Today’s case in point involves families of children with diabetes, many of whom are increasingly frustrated with the FDA’s foot-dragging when it comes to approval of medical devices that could help their kids. Writing today in The Wall Street Journal, Kate Linebaugh discusses how “Tech-Savvy Families Use Home-Built Diabetes Device” to help their kids when FDA regulations limit the availability of commercial options. She documents how families of diabetic children are taking matters into their own hands and creating their own home-crafted insulin pumps, which can automatically dose the proper amount of the hormone in response to their child’s blood-sugar levels. Families are building, calibrating, and troubleshooting these devices on their own. And the movement is growing. Linebaugh reports that:

More than 50 people have soldered, tinkered and written software to make such devices for themselves or their children. The systems—known in the industry as artificial pancreases or closed loop systems—have been studied for decades, but improvements to sensor technology for real-time glucose monitoring have made them possible. The Food and Drug Administration has made approving such devices a priority and several companies are working on them. But the yearslong process of commercial development and regulatory approval is longer than many patients want, and some are technologically savvy enough to do it on their own.
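The “closed loop” these families are building is, at its core, a feedback controller: a sensor reading drives a dosing decision, which is sent to the pump, and the cycle repeats. As a purely illustrative sketch (this is not the OpenAPS algorithm, it is not medical advice, and every number and parameter name here is hypothetical), a minimal proportional controller might look like:

```python
# Illustrative only -- NOT the OpenAPS algorithm and not medical guidance.
# The target, sensitivity, and cap values below are hypothetical placeholders.

def basal_adjustment(glucose_mg_dl, target_mg_dl=100.0, sensitivity=50.0,
                     max_units_per_hr=2.0):
    """Return a temporary basal rate (units/hr) from one sensor reading.

    sensitivity: hypothetical mg/dL drop per unit of insulin per hour.
    """
    error = glucose_mg_dl - target_mg_dl   # how far above target the reading is
    rate = max(0.0, error / sensitivity)   # dose proportionally, never negative
    return min(rate, max_units_per_hr)     # clamp to a safety ceiling

# The loop itself: read the sensor, compute a dose, command the pump, repeat.
for reading in [180, 140, 100, 80]:
    print(reading, round(basal_adjustment(reading), 2))
```

Real systems layer far more onto this skeleton (insulin-on-board tracking, trend prediction, fail-safes when the sensor drops out), which is precisely the calibration and troubleshooting work the article describes families doing themselves.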

Linebaugh notes that this particular home-built medical project (known as OpenAPS) was created by Dana Lewis, a 27-year-old with Type 1 diabetes in Seattle. Linebaugh says that:

Ms. Lewis began using the system in December 2014 as a sort of self-experiment. After months of tweeting about it, she attracted others who wanted what she had. The only restriction of the project is users have to put the system together on their own. Ms. Lewis and other users offer advice, but it is each one’s responsibility to know how to troubleshoot. A Bay Area cardiologist is teaching himself software programming to build one for his 1-year-old daughter who was diagnosed in March.

In essence, these individuals and families are engaging in a variant of the sort of decentralized “biohacking” that is becoming increasingly prevalent in society today. As I discussed in a recent law review article, biohacking refers to the efforts of average citizens (often working together in a decentralized fashion) to enhance various human capabilities. This can include implanting things inside one’s body or using external devices to supplement one’s abilities or to address health-related issues.

I documented other examples of this trend in my essays on average citizens making 3D-printed prosthetics (“The Right to Try, 3D Printing, the Costs of Technological Control & the Future of the FDA”) as well as retainers (“In a World Where Kids Can 3D-Print Their Own Retainers, What Should Regulators Do?”). As “software eats the world” and allows for this sort of democratized medical self-experimentation, more and more citizens are likely going to engage in biohacking. In the process, they will often be doing an end-run around the FDA and its complex maze of regulatory restrictions on health innovation.

Stated more provocatively, thanks to new technological capabilities and networking platforms, the public may increasingly enjoy a de facto “right to try” for many new medical devices and treatments. Technological innovation will decentralize and democratize medical decisions even when the legal status of such actions is unclear or even flatly illegal.

But is a world of increasingly decentralized, democratized, and highly personalized medicine actually safe? Well, all risk is relative, and as I discussed extensively in my recent book and other work on innovation policy, sometimes the greatest risk of all is the refusal to take any risk to begin with. If you disallow or limit efforts to engage in certain risky endeavors, you could ultimately end up doing more harm, because there can be no reward without a corresponding amount of risk-taking. It is only through constant trial-and-error experimentation that we find new and better ways of doing things. That is particularly true as it pertains to life-enriching or even life-saving medical treatments. While the FDA likes to think that its hyper-cautious approach to medical drug and device approval ultimately saves lives, in the aggregate we have no idea how many lives are actually being lost (or how much pain and suffering is occurring) due to FDA prohibitions on our freedom to experiment with new products and services.

One of the parents Linebaugh interviewed for her story made the following remark: “Diabetes is dangerous anyway. Insulin is dangerous. I think what we are doing is actually improving that and lowering the risk.” That is exactly right. This father understands the reality of risk trade-offs. There are certainly risks associated with what these families are doing for their children. But these families also have a very palpable sense of the opposite problem: There is a profound and immediate risk of doing nothing and waiting for the FDA to finally get around to approving the devices that their children need right now.

All this raises another interesting policy question: Why is it legal for these parents to engage in this sort of medical self-experimentation–experimentation on their children, no less!–while it remains flatly illegal for any commercial operator to offer similar products that could help these families? Many modern regulatory regimes accord differential treatment to commercial activities. Non-commercial versions of some activities are left alone, but as soon as commercial opportunities arise, policymakers seek to apply regulation.

Does this sort of commercial vs. non-commercial regulatory asymmetry make any sense? As far as I can tell, the distinction is mostly rooted in the fact that deep-pocketed commercial operators make easier targets for regulators than average citizens do. Going after average citizens would be bad PR and a serious legal hassle as well, because issues pertaining to personal autonomy or parental rights would likely be raised both in the court of public opinion and in courts of law.

Regardless, let’s not kid ourselves into thinking that this regulatory distinction is rooted in safety considerations. After all, commercial medical innovators would almost certainly build safer products, made by medical professionals with years of experience. Moreover, commercial operators are more likely to carry insurance to address any problems that may develop, and they possess strong reputational incentives to be good market actors. Commercial operators have to maintain brand loyalty to earn new or repeat business, or perhaps just to avoid the stiff legal liability that non-commercial operators might not face.

In any event, one thing should be abundantly clear: If the FDA doesn’t change its ways, we can expect an increasing number of citizens to begin pursuing medical treatments outside the boundaries of the law (and potentially outside the realm of common sense). Many people want a right to try new devices and therapies, and in our modern networked world, they are increasingly going to get it whether regulators like it or not.

Lawmakers in Congress need to exercise better oversight of rogue agencies like the FDA, which face no serious penalties for the sort of endless regulatory foot-dragging that threatens public welfare. If the agency were required by Congress to improve its drug and device approval process, then perhaps fewer Americans would be forced to take matters into their own hands to begin with. Down below, I’ve included a few reports suggesting how we might get this much-needed reform process started.


Additional reading from Mercatus Center scholars:

Permissionless Innovation & Cybersecurity: Are They Compatible? (Wed, 09 Mar 2016)

[This is an excerpt from Chapter 6 of the forthcoming 2nd edition of my book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” due out later this month. I was presenting on these issues at today’s New America Foundation “Cybersecurity for a New America” event, so I thought I would post this now.  To learn more about the contrast between “permissionless innovation” and “precautionary principle” thinking, please consult the earlier edition of my book or see this blog post.]


 

Viruses, malware, spam, data breaches, and critical system intrusions are just some of the security-related concerns that often motivate precautionary thinking and policy proposals.[1] But as with privacy- and safety-related worries, the panicky rhetoric surrounding these issues is usually unfocused and counterproductive.

In today’s cybersecurity debates, for example, it is not uncommon to hear frequent allusions to the potential for a “digital Pearl Harbor,”[2] a “cyber cold war,”[3] or even a “cyber 9/11.”[4] These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” or technological “time bombs,” even though no one can be “bombed” with binary code.[5] Michael McConnell, a former director of national intelligence, went so far as to say that this “threat is so intrusive, it’s so serious, it could literally suck the life’s blood out of this country.”[6]

Such outrageous statements reflect the frequent use of “threat inflation” rhetoric in debates about online security.[7] Threat inflation has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.”[8] Unfortunately, such bombastic rhetoric often conflates minor cybersecurity risks with major ones. For example, dramatic doomsday stories about hackers pushing planes out of the sky misdirect policymakers’ attention from the more immediate, but less gripping, risks of data extraction and foreign surveillance. Well-meaning skeptics might then conclude that our real cybersecurity risks are not a problem either. In the meantime, outdated legislation and inappropriate legal norms continue to impede beneficial defensive measures that could truly improve security.

Meanwhile, similar concerns have already been raised about security vulnerabilities associated with the Internet of Things[9] and driverless cars.[10] Legislation has already been floated to address the latter concern through federal certification standards.[11] More broad-based cybersecurity legislative proposals have also been proposed, most notably the Cybersecurity Information Sharing Act, which would extend legal immunity to corporations that share customer data with intelligence agencies.[12]

Ironically, these efforts to expand federal cybersecurity authority come before the federal government has even gotten its own house in order. According to a recent report, federal information security failures increased by an astounding 1,169 percent, from 5,503 in fiscal year 2006 to 69,851 in fiscal year 2014.[13] Of course, many of these same agencies would be tasked with securing the massive new datasets containing personally identifiable details about US citizens’ online activities that legislation like the Cybersecurity Information Sharing Act would authorize. In the worst-case scenario, such federal data storage could counterintuitively encourage more attacks on government systems.

It’s important to put all these security issues in some context and to realize that proposed legal remedies are often inappropriate to address online security concerns and sometimes end up backfiring. In his research on the digital security marketplace, my Mercatus Center colleague Eli Dourado has illustrated how we are already able to achieve “Internet Security without Law.”[14] Dourado documented the many informal institutions that enforce network security norms on the Internet to show how cooperation among a remarkably varied set of actors improves online security without extensive regulation or punishing legal liability. “These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms,” Dourado says.[15]

For example, a diverse array of computer security incident response teams (CSIRTs) operate around the globe, sharing their research on and coordinating responses to viruses and other online attacks. Individual Internet service providers (ISPs), domain name registrars, and hosting companies work with these CSIRTs and other individuals and organizations to address security vulnerabilities.

Encouraging the development of robust and lawful software vulnerability markets would provide even more effective cybersecurity reporting. Some private companies and nonprofit security research firms have offered financial incentives for hackers to find and report software vulnerabilities to the proper parties for years now.[16] Such “bug bounty” and “vulnerability auction” programs better align hackers’ monetary incentives with the public interest. By allowing a space for security researchers to responsibly report and profit from discovered bugs, these markets dissuade hackers from selling vulnerabilities to criminal or state-backed organizations.[17]

A growing market for private security consultants and software providers also competes to offer increasingly sophisticated suites of security products for businesses, households, and governments. “Corporations, including software vendors, antimalware makers, ISPs, and major websites such as Facebook and Twitter, are aggressively pursuing cyber criminals,” notes Roger Grimes of Infoworld.[18] “These companies have entire legal teams dedicated to national and international cyber crime. They are also taking down malicious websites and bot-spitting command-and-control servers, along with helping to identify, prosecute, and sue bad guys,” he says.[19] Meanwhile, more organizations are employing “active defense” strategies, which are “countermeasures that entail more than merely hardening one’s own network against threats and instead seek to unmask one’s attacker or disable the attacker’s system.”[20]

A great deal of security knowledge is also “crowd-sourced” today via online discussion forums and security blogs that feature contributions from experts and average users alike. University-based computer science and cyber law centers and experts have also helped by creating projects like Stop Badware, which originated at Harvard University but then grew into a broader nonprofit organization with diverse financial support.[21] Meanwhile, informal grassroots security groups like The Cavalry have formed to build awareness about digital security threats among developers and the general public and then devise solutions to protect public safety.[22]

The recent debacle over the Commerce Department’s proposed new export rules for so-called cyberweapons provides a good example of how poorly considered policies can inadvertently undermine such beneficial emergent ecosystems. The agency’s new draft of US “Wassenaar Arrangement” arms control policies would have unintentionally criminalized the normal communication of basic software bug-testing techniques that hundreds of companies employ each day.[23] The regulators who were drafting the new rules had good intentions. They wanted to crack down on cyber criminals’ abilities to sell malware to hostile state-backed initiatives. However, their lack of technical sophistication led them to unknowingly write a proposal that would have compelled software engineers to seek Commerce Department permission before communicating information about minor software quirks. Fortunately, regulators wisely heeded the many concerned industry comments and rescinded the initial proposal.[24]

Dourado notes that informal, bottom-up efforts to coordinate security responses offer several advantages over top-down government solutions such as administrative regulatory regimes or punishing liability regimes. First, the informal cooperative approach “gives network operators flexibility to determine what constitutes due care in a dynamic environment.” “Formal legal standards,” by contrast, “may not be able to adapt as quickly as needed to rapidly changing circumstances,” he says.[25] Simply put, markets are more nimble than mandates when it comes to promptly patching security vulnerabilities.

Second, Dourado notes that “formal legal proceedings are adversarial and could reduce ISPs’ incentives to share information and cooperate.”[26] Heavy-handed regulation or threatening legal liability schemes could have the unintended consequence of discouraging the sort of cooperation that today alleviates security problems swiftly.

Indeed, there is evidence that existing cybersecurity law prevents defensive strategies that could help organizations to more quickly respond to system infiltrations. For example, some argue that private individuals and organizations should be allowed to defend themselves using special measures to expel or track system infiltrators, often called “hacking back” or “active defense.” Anthony Glosson’s analysis for the Mercatus Center discusses how the Computer Fraud and Abuse Act currently prevents computer security specialists from utilizing defensive hacking techniques that could improve system defenses or decrease the number of attempted attacks.[27]

Third, legal solutions are less effective because “the direct costs of going to court can be substantial, as can be the time associated with a trial,” Dourado argues.[28] By contrast, private actors working cooperatively “do not need to go to court to enforce security norms,” meaning that “security concerns are addressed quickly or punishment . . . is imposed rapidly.”[29] For example, if security warnings don’t work, ISPs can “punish” negligent or willfully insecure networks by “de-peering,” or terminating network interconnection agreements. The very threat of de-peering helps keep network operators on their toes.

Finally, and perhaps most importantly, Dourado notes that international cooperation between state-based legal systems is limited, complicated, and costly. By contrast, under today’s informal, voluntary approach to online security, international coordination and cooperation are quite strong. The CSIRTs and other security institutions and researchers mentioned above all interact and coordinate today as if national borders did not exist. Territorial legal systems and liability regimes don’t have the same advantage; their enforcement ends at the border.

Dourado’s model has ramifications for other fields of tech policy. Indeed, as noted above, these collaborative efforts and approaches are already at work in the realms of online safety and digital privacy. Countless organizations and individuals collaborate on educational initiatives to improve online safety and privacy. And many industry and nonprofit groups have established industry best practices and codes of conduct to ensure a safer and more secure online experience for all users. The efforts of the Family Online Safety Institute were discussed above. Another example comes from the Future of Privacy Forum, a privacy think tank that seeks to advance responsible data practices. The think tank helps create codes of conduct to ensure privacy best practices by online operators and also helps highlight programs run by other organizations.[30] Likewise, the National Cyber Security Alliance helps promote Internet safety and security efforts among a variety of companies and coordinates National Cyber Security Awareness Month (every October) and Data Privacy Day (held annually on January 28).[31]

What these efforts prove is that not every complex social problem requires a convoluted legal regime or heavy-handed regulatory response. We can achieve reasonably effective safety and security without layering on more and more law and regulation.[32] Indeed, the Internet and digital systems could arguably be made more secure by reforming outdated legislation that prevents potential security-increasing collaborations. “Dynamic systems are not merely turbulent,” Postrel notes. “They respond to the desire for security; they just don’t do it by stopping experimentation.”[33] She adds, “Left free to innovate and to learn, people find ways to create security for themselves. Those creations, too, are part of dynamic systems. They provide personal and social resilience.”[34]

Education is a crucial part of building resiliency in the security context as well. People and organizations can prepare for potential security problems rationally if given even more information and better tools to secure their digital systems and to understand how to cope when problems arise. Again, many corporations and organizations already take steps to guard against malware and other types of cyberattacks by offering customers free (or cheap) security software. For example, major broadband operators offer free antivirus software to customers and various parental control tools to parents. In the context of “connected car” technology, automakers have banded together to come up with privacy and security best practices to address worries about remote hacking of cars as well as concerns about how much data they collect about our driving habits.[35]

Thus, although it is certainly true that “more could be done” to secure networks and critical systems, panic is unwarranted because much is already being done to harden systems and educate the public about risks.[36] Various digital attacks will continue, but consumers, companies, and other organizations are learning to cope and become more resilient in the face of those threats through creative “bottom-up” solutions instead of innovation-limiting “top-down” regulatory approaches.


 

[1]    This section partially adapted from Adam Thierer, “Achieving Internet Order without Law,” Forbes, June 24, 2012, http://www.forbes.com/sites/adamthierer/2012/06/24/achieving-internet-order-without-law. The author wishes to thank Andrea Castillo for major contributions to this section.

[2]    See Richard A. Serrano, “Cyber Attacks Seen as a Growing Threat,” Los Angeles Times, February 11, 2011, A18. (“[T]he potential for the next Pearl Harbor could very well be a cyber attack.”)

[3]    Harry Raduege, “Deterring Attackers in Cyberspace,” The Hill, September 23, 2011, 11, http://thehill.com/opinion/op-ed/183429-deterring-attackers-in-cyberspace.

[4]    Kurt Nimmo, “Former CIA Official Predicts Cyber 9/11,” InfoWars.com, August 4, 2011, http://www.infowars.com/former-cia-official-predicts-cyber-911.

[5]    Rodney Brown, “Cyber Bombs: Data-Security Sector Hopes Adoption Won’t Require a ‘Pearl Harbor’ Moment,” Innovation Report, October 26, 2011, 10, http://digital.masshightech.com/launch.aspx?referral=other&pnum=&refresh=6t0M1Sr380Rf&EID=1c256165-396b-454f-bc92-a7780169a876&skip=; Craig Spiezle, “Defusing the Internet of Things Time Bomb,” TechCrunch, August 11, 2015, http://techcrunch.com/2015/08/10/defusing-the-internet-of-things-time-bomb.

[6]    “Morning Edition: Cybersecurity Bill: Vital Need or Just More Rules?” NPR, March 22, 2012, http://www.npr.org/templates/transcript/transcript.php?storyId=149099866.

[7]    Jerry Brito and Tate Watkins, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy” (Mercatus Working Paper No. 11-24, Mercatus Center at George Mason University, Arlington, VA, 2011).

[8]    Jane K. Cramer and A. Trevor Thrall, “Introduction: Understanding Threat Inflation,” in American Foreign Policy and the Politics of Fear: Threat Inflation Since 9/11, ed. A. Trevor Thrall and Jane K. Cramer (London: Routledge, 2009), 1.

[9]    Tufekci, “Dumb Idea”; Byron Acohido, “Hackers Take Control of Internet Appliances,” USA Today, October 15, 2013, http://www.usatoday.com/story/cybertruth/2013/10/15/hackers-taking-control-of-internet-appliances/2986395.

[10]   Ed Markey, Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, US Senate, February 2015, http://www.markey.senate.gov/imo/media/doc/2015-02-06_MarkeyReport-Tracking_Hacking_CarSecurity%202.pdf.

[11]   Ed Markey, “Markey, Blumenthal to Introduce Legislation to Protect Drivers from Auto Security and Privacy Vulnerabilities with Standards and ‘Cyber Dashboard,’” press release, February 11, 2015, http://www.markey.senate.gov/news/press-releases/markey-blumenthal-to-introduce-legislation-to-protect-drivers-from-auto-security-and-privacy-vulnerabilities-with-standards-and-cyber-dashboard.

[12]   Andrea Castillo, “How CISA Threatens Both Privacy and Cybersecurity,” Reason, May 10, 2015, https://reason.com/archives/2015/05/10/why-cisa-wont-improve-cybersecurity.

[13]   Eli Dourado and Andrea Castillo, “Poor Federal Cybersecurity Reveals Weakness of Technocratic Approach” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, June 22, 2015), http://mercatus.org/publication/poor-federal-cybersecurity-reveals-weakness-technocratic-approach.

[14]   Eli Dourado, “Internet Security without Law: How Security Providers Create Online Order” (Mercatus Working Paper No. 12-19, Mercatus Center at George Mason University, Arlington, VA, June 19, 2012), http://mercatus.org/publication/internet-security-without-law-how-service-providers-create-order-online.

[15]   Ibid.

[16]   Charlie Miller, “The Legitimate Vulnerability Market: Inside the Secretive World of 0-day Exploit Sales,” Independent Security Evaluators, May 6, 2007, http://www.econinfosec.org/archive/weis2007/papers/29.pdf.

[17]   Andrea Castillo, “The Economics of Software-Vulnerability Sales: Can the Feds Encourage ‘Pro-social’ Hacking?” Reason, August 11, 2015, https://reason.com/archives/2015/08/11/economics-of-the-zero-day-sales-market.

[18]   Roger Grimes, “The Cyber Crime Tide Is Turning,” Infoworld, August 9, 2011, http://www.pcworld.com/article/237647/the_cyber_crime_tide_is_turning.html.

[19]   Dourado, “Internet Security.”

[20]   Anthony D. Glosson, “Active Defense: An Overview of the Debate and a Way Forward,” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, August 10, 2015), http://mercatus.org/publication/active-defense-overview-debate-and-way-forward-guardians-of-peace-hackers-cybersecurity.

[21]   StopBadware, http://stopbadware.org.

[22]   I Am The Cavalry, https://www.iamthecavalry.org.

[23]   Andrea Castillo, “The Government’s Latest Attempt to Stop Hackers Will Only Make Cybersecurity Worse,” Reason, July 28, 2015, https://reason.com/archives/2015/07/28/gov-ploy-to-stop-hackers-will-backfire.

[24]   Russell Brandom, “The US is Rewriting its Controversial Zero-Day Export Policy,” The Verge, July 29, 2015, http://www.theverge.com/2015/7/29/9068665/wassenaar-export-zero-day-revisions-department-of-commerce.

[25]   Dourado, “Internet Security.”

[26]   Ibid.

[27]   Glosson, “Active Defense.”

[28]   Dourado, “Internet Security.”

[29]   Dourado, “Internet Security.”

[30]   Future of Privacy Forum, “Best Practices,” http://www.futureofprivacy.org/resources/best-practices/.

[31]   See http://www.staysafeonline.org/ncsam and http://www.staysafeonline.org/data-privacy-day.

[32]   Glosson, “Active Defense,” 22. (“The precautionary principle is especially inadvisable in the dynamic realm of tech policy, and until the ostensible harms of active defense materialize, the law should facilitate maximum innovation in the network security field.”)

[33]   Postrel, Future and Its Enemies, 199.

[34]   Ibid., 202.

[35]   See Future of Privacy Forum, “Connected Cars Project,” accessed October 16, 2015, http://www.futureofprivacy.org/connectedcars; Auto Alliance, “Automakers Believe That Strong Consumer Data Privacy Protections Are Essential to Maintaining the Trust of Our Customers,” accessed October 16, 2015, http://www.autoalliance.org/automotiveprivacy. See also Future of Privacy Forum, “Comments of the Future of Privacy Forum on Connected Smart Technologies in Advance of the FTC ‘Internet of Things’ Workshop,” May 31, 2013, http://www.futureofprivacy.org/wp-content/uploads/FPF-Comments-Regarding-Internet-of-Things.pdf.

[36]   Adam Thierer, “Don’t Panic over Looming Cybersecurity Threats,” Forbes, August 7, 2011, http://www.forbes.com/sites/adamthierer/2011/08/07/dont-panic-over-looming-cybersecurity-threats.

 

Smart Device Paranoia
https://techliberation.com/2015/10/05/smart-device-paranoia/
Mon, 05 Oct 2015 21:16:04 +0000

The idea that the world needs further dumbing down was really the last thing on my mind. Yet this is exactly what Jay Stanley argues for in a recent post on Free Future, the ACLU’s tech blog.

Specifically, Stanley is concerned by the proliferation of “smart devices,” from smart homes to smart watches, and the enigmatic algorithms that power them. Exhibit A: the Volkswagen “smart control devices” designed to deliberately mis-measure diesel emissions. Far from treating it as an isolated case, Stanley extrapolates the Volkswagen scandal into a parable about the dangers of smart devices more generally, and calls for the recognition of “the virtue of dumbness”:

When we flip a coin, its dumbness is crucial. It doesn’t know that the visiting team is the massive underdog, that the captain’s sister just died of cancer, and that the coach is at risk of losing his job. It’s the coin’s very dumbness that makes everyone turn to it as a decider. … But imagine the referee has replaced it with a computer programmed to perform a virtual coin flip. There’s a reason we recoil at that idea. If we were ever to trust a computer with such a task, it would only be after a thorough examination of the computer’s code, mainly to find out whether the computer’s decision is based on “knowledge” of some kind, or whether it is blind as it should be.

While recoiling is a bit melodramatic, it’s clear from this that “dumbness” is not even the key issue at stake. What Stanley is really concerned about is bias or partiality (what he dubs “neutrality anxiety”), which is hardly unique to “smart” devices, and neither is opacity. A physical coin can be biased, a programmed coin can be fair, and at first glance the fairness of a physical coin is not really any more obvious.

Yet this is the argument Stanley uses to justify his proposed requirement that all smart device code be open to public scrutiny going forward. Based on a knee-jerk commitment to transparency, he gives zero weight to the social benefit of allowing software creators a level of trade secrecy, especially as a potential substitute for patent and copyright protections. This is all the more ironic given that Volkswagen used existing copyright law to hide its own malfeasance.

More importantly, the idea that the only way to check a virtual coin is to look at the source code is a serious non sequitur. After all, in-use testing was how Volkswagen was actually caught in the end. What matters, in other words, is how the coin behaves in large and varied samples. In either the virtual or the physical case, the best and least intrusive way to check a coin is simply to do thousands of flips. But what takes hours with a dumb coin takes a fraction of a second with a virtual coin. So I know which I prefer.
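This behavioral approach is easy to make concrete: a virtual coin can be audited empirically, with no access to its source code, by sampling it many times and checking that the observed heads frequency stays within sampling error of one half. A minimal sketch in Python (the `flip_fairness_test` helper, the 4-sigma tolerance, and the example coins are my own illustrative choices, not anything from Stanley's post):

```python
import random

def flip_fairness_test(flip, n=100_000):
    """Sample the coin n times; return (heads frequency, verdict).

    The verdict is True when the observed frequency falls within
    four standard deviations of a fair coin's expected 0.5 (the
    standard deviation of the sample mean for p=0.5 is 0.5/sqrt(n)).
    """
    heads = sum(flip() for _ in range(n))
    freq = heads / n
    tolerance = 4 * 0.5 / n ** 0.5  # ~4-sigma band around 0.5
    return freq, abs(freq - 0.5) <= tolerance

# Two hypothetical "virtual coins," seeded so the sketch is reproducible.
rng = random.Random(42)
fair_coin = lambda: rng.random() < 0.5
loaded_coin = lambda: rng.random() < 0.6  # secretly favors heads

fair_freq, fair_ok = flip_fairness_test(fair_coin)      # passes
loaded_freq, loaded_ok = flip_fairness_test(loaded_coin)  # fails
```

A hundred thousand virtual flips complete in a fraction of a second, which is exactly the speed advantage over a physical coin described above, and the loaded coin is flagged without anyone reading a line of its internals.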

An hour versus a second may seem like a trivial advantage, but as an object or problem becomes more complex, the opacity and limitations of “dumb” things only grow. Tom Brady’s “dumb” football is a case in point. After deflategate, I have much more confidence in the unbiasedness of the virtual ball in Madden. And to eliminate any doubt, I can once again run simulations, a standard practice among video game designers. This is what allows balance to be achieved in complex, asymmetrical video game maps, for example, while American football is stuck with a rectangle and switching ends at half-time.

In other words, despite Stanley’s repeated assertion that smart devices inevitably sacrifice equity for ruthless efficiency (like a hypothetical traffic light that turns green when it detects surgeons and corporate VPs), embedding algorithms in our devices is a demonstrably useful tool for achieving equity in the face of real-world complexity. Think, for instance, of the algorithms that draw congressional districts to eliminate gerrymandering.

Yet even if smart devices and algorithms can improve both efficiency and equity, they nonetheless require a dose of human intention, and therein lies the danger. Or does it?

Imagine a person, running late for something crucial, sitting at a seemingly interminable red light getting tense and angry. Today he may rail at his bad luck and at the universe, but in the future he will feel he’s the victim of a mind—and of whatever political entities are responsible for the shape of that signal’s logic.

In this future world of omnipresent agency, Stanley essentially imagines a pandemic of paranoid schizophrenia, where conspiracies lurk in every corner and strings of bad luck are interpreted as punishment by the puppet masters. But this seems to get things exactly backwards. Smart devices are useful precisely because they remove agency, both in terms of our personal cognitive effort (as when the lights turn on as you enter a room) and in terms of discretionary influence over our lives.

In this respect, one of Stanley’s own examples directly contradicts his thesis. He points to

an award-winning image of a Gaza City funeral procession, which was challenged due to manual adjustments the photographer made to its tone. I suspect that if the adjustments had been made automatically by his camera (being today little more than a specialized computer), the photo would not have been questioned.

Exactly! The smart focus and light balance of a modern point-and-shoot camera not only make us all better photographers, but they remove the worry of unfair and manipulative human input. After all, before the automated traffic light there was the traffic guard, who let drivers through at his or her discretion. The move to automated lights condensed that human agency to the point of initial creation, thus dramatically reducing the potential for abuse. If smart devices mean we can automatically detect an ambulance or adjust camera aperture, that is precisely the same sort of improvement.

The fact is that a benign rationality already pervades the world around us, embedded not just in our technology but also in our laws and institutions. Externalizing intelligence into rules and structures is the stuff of civilization, what’s called “extended cognition.” In the words of philosopher Andy Clark:

Advanced cognition depends crucially on our ability to dissipate reasoning: to diffuse achieved knowledge and practical wisdom through complex social structures, and to reduce the loads on individual brains by locating those brains in complex webs of linguistic, social, political and institutional constraints.

And yet we go through life without constantly looking over our shoulders. This is because we have adapted to the point where we are happily ignorant of the intelligence surrounding us. The hiddenness is a feature, not a bug, as it allows our attention to move on to more pressing things.

Critics of new technology always fail to appreciate this adaptability of human beings, implicitly answering 21st-century thought experiments with 20th-century prejudices. The enduring lesson of extended cognition is that smart devices promise to make not just our stuff but us, as living creatures, in a very real way more intelligent, expanding our own capabilities rather than subordinating us to the whim of invisible others.

To that end, I can’t help but be reminded of the tagline at TechLiberation.com: “The problem is not whether machines think, but whether men do.”

FTC’s Ohlhausen on Innovation, Prosperity, “Rational Optimism” & Wise Tech Policy
https://techliberation.com/2015/09/25/ftcs-ohlhausen-on-innovation-prosperity-rational-optimism-wise-tech-policy/
Fri, 25 Sep 2015 14:29:59 +0000

I wanted to draw your attention to yet another spectacular speech by Maureen K. Ohlhausen, a Commissioner with the Federal Trade Commission (FTC). I have written here before about Commissioner Ohlhausen’s outstanding speeches, but this latest one might be her best yet.

On Tuesday, Ohlhausen spoke at a U.S. Chamber of Commerce Foundation day-long event on “The Internet of Everything: Data, Networks and Opportunities.” The conference featured various keynote speakers and panels discussing “the many ways that data and Internet connectivity are changing the face of business and society.” (It was my honor to also be invited to deliver an address to the crowd that day.)

As with many of her other recent addresses, Commissioner Ohlhausen stressed why it is so important that policymakers “approach new technologies and new business models with regulatory humility.” Building on the work of the great Austrian economist F.A. Hayek, who won a Nobel prize in part for his work explaining the limits of our knowledge to plan societies and economies, Ohlhausen argues that:

regulators face a fundamental knowledge problem that limits the effective reach of regulation. A regulator must acquire knowledge about the present state and future trends of the industry being regulated. The more prescriptive the regulation, and the more complex the industry, the more detailed knowledge the regulator must collect. But, regulators simply cannot gather all the information relevant to every problem. Such information is widely distributed and therefore very expensive to collect. Even when a regulator manages to collect information, it quickly becomes out of date as a regulated industry continues to evolve. Obsolete data is a particular concern for regulators of fast-changing technological fields like the Internet of Things. This knowledge problem means that centralized problem solving cannot make full use of the available knowledge about a problem. Therefore, centralized regulation generally offers worse solutions when compared to distributed or emergent constraints such as social norms.

She continued on to explain the dangers of “precautionary principle” thinking as applied to new technologies, noting that, far too often, policymakers seek to impose preemptive, top-down controls on new sectors and technologies based on “concern over largely hypothetical future harms.” That’s a point I have stressed repeatedly in my own work on the importance of “permissionless innovation.” As I note in my book of the same title, living in constant fear of worst-case scenarios—and premising public policy upon them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.

What’s the better alternative to precautionary controls to address potential risks? As Commissioner Ohlhausen noted in her speech to the Chamber of Commerce, regulators should “focus on identifying and addressing real, not speculative, consumer harm.” She explains how the FTC already has the tools to do so:

At the FTC, this focus is part of our statute. Congress charged us in Section 5 of the FTC Act with preventing deceptive or unfair acts and practices. Deceptive acts violate Section 5 only if they are material – that is, if they actually harm consumers. And practices are only unfair if there is a substantial harm that consumers cannot avoid and that outweighs any benefits to consumers or competition. In both cases, the law concerns itself with addressing actual consumer harms. Likewise the FTC carefully evaluates consumer welfare (or, its corollary, consumer harm) when it exercises its antitrust authority.

Importantly, she noted, the focus in this regard is on  ex post enforcement, not highly prescriptive ex ante regulation. “This incremental approach, which we have been using for nearly 100 years, has significant benefits,” Ohlhausen argued, and it is “consistent with Hayek’s thesis about the knowledge problem.” Namely, regulators should not be acting based on limited knowledge to address hypothetical future threats. Doing so derails opportunities for innovation and leads to myriad unintended consequences.

But the best part of Commissioner Ohlhausen’s speech was her embrace of what author Matt Ridley calls “rational optimism”:

Over the past two centuries, humankind has proven its ability to transform innovation into widespread prosperity. Fueled by supportive social attitudes and free market institutions, businesses have been the engines of this prosperity. Regulators who don’t want to stall these engines of innovation should remember the long history of beneficial innovation, remain humble about what they can know and accomplish, focus on addressing real consumer harm, and apply tools appropriate to the harms that do arise.

Critics will protest that innovation can just be too darn disruptive and that we have to preemptively legislate or regulate to counter those effects. But Ohlhausen has a powerful response to those critics:

innovation can, and will, be unnerving or unsettling. By its very nature, innovation changes things. Change is uncomfortable. That is why, as long as there has been innovation, there have been detractors and doomsayers. William Petty, the economist and doctor, said, “When a new invention is first propounded, in the beginning every man objects and the poor inventor runs the gauntloop of all petulant wits.” And he was talking in 1679! Pessimism about innovation sells newspapers and books. It also has a surprising intellectual cachet. “The man who despairs when others hope is admired by a large class of persons as a sage,” said John Stuart Mill. But if the past 200 years of innovation have any lesson, it is this: society has repeatedly and quickly integrated and greatly benefited from innovation. The somber doomsday “sages” – from the Luddites in 19th century England to critics of credit card technology in the 1970s – have been wrong about the general effects of innovation. The many benefits have far outweighed the few costs. I am quite optimistic that the disruption of the Internet of Everything will continue the trend and greatly promote our prosperity.

Preach it, sister! That is exactly right.

Anyway, make sure to read Commissioner Ohlhausen’s entire speech. It is absolutely spectacular. I wish every regulator approached their job with the same degree of humility, patience, and “rational optimism” that Commissioner Ohlhausen does.

New Paper Surveying Growth Projections for the Internet of Things
https://techliberation.com/2015/06/15/new-paper-surveying-growth-projections-for-the-internet-of-things/
Mon, 15 Jun 2015 19:16:15 +0000

The “Internet of Things” (IoT) is already growing at a breakneck pace and is expected to continue to accelerate rapidly. In a short new paper (“Projecting the Growth and Economic Impact of the Internet of Things“) that I’ve just released with my Mercatus Center colleague Andrea Castillo, we provide a brief explanation of IoT technologies before describing the current projections of the economic and technological impacts that IoT could have on society. In addition to creating massive gains for consumers, IoT is projected to provide dramatic improvements in manufacturing, health care, energy, transportation, retail services, government, and general economic growth. Take a look at our paper if you’re interested, and you might also want to check out my 118-page law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” as well as my recent congressional testimony on the policy issues surrounding the IoT.


 

Bipartisan Internet of Things Resolution Introduced in Senate
https://techliberation.com/2015/03/04/bipartisan-internet-of-things-resolution-introduced-in-senate/
Wed, 04 Mar 2015 21:08:24 +0000

A new bipartisan “sense of the Senate” resolution was introduced today calling for “a national strategy for the Internet of Things to promote economic growth and consumer empowerment.” [PDF is here.] The resolution was cosponsored by U.S. Senators Deb Fischer (R-Neb.), Cory A. Booker (D-N.J.), Kelly Ayotte (R-N.H.), and Brian Schatz (D-Hawaii), who are all members of the Senate Commerce Committee, which oversees these issues. Just last month, on February 11th, the full Commerce Committee held a hearing titled “The Connected World: Examining the Internet of Things,” which examined the policy issues surrounding this exciting new space.

[ Update:  The U.S. Senate unanimously approved the resolution on the evening of March 24th, 2015.]

The new Senate resolution begins by stressing the many current or potential benefits associated with the Internet of Things (IoT), which, it notes, “currently connects tens of billions of devices worldwide and has the potential to generate trillions of dollars in economic opportunity.” It continues on to note how average consumers will benefit because “increased connectivity can empower consumers in nearly every aspect of [our] daily lives, including in the fields of agriculture, education, energy, healthcare, public safety, security, and transportation, to name just a few.” The resolution then also discusses the commercial benefits, noting, “businesses across our economy can simplify logistics, cut costs in supply chains, and pass savings on to consumers because of the Internet of Things and innovations derived from it.” More generally, the Senators argue “the United States should strive to be a world leader in smart cities and smart infrastructure to ensure its citizens and businesses, in both rural and urban parts of the country, have access to the safest and most resilient communities in the world.”

In light of those amazing potential benefits, the resolution continues on to argue that while “the United States is the world leader in developing the Internet of Things technology,” an even more focused and dedicated policy vision is needed to promote continued success. “[W]ith a national strategy guiding both public and private entities,” it argues, “the United States will continue to produce breakthrough technologies and lead the world in innovation.” 

Toward that end, the resolution says that it is the sense of the Senate that:

(1) the United States should develop a national strategy to incentivize the development of the Internet of Things in a way that maximizes the promise connected technologies hold to empower consumers, foster future economic growth, and improve our collective social well-being; (2) the United States should prioritize accelerating the development and deployment of the Internet of Things in a way that recognizes its benefits, allows for future innovation, and responsibly protects against misuse; (3) the United States should recognize the importance of consensus-based best practices and communication among stakeholders, with the understanding that businesses can play an important role in the future development of the Internet of Things; (4) the United States Government should commit itself to using the Internet of Things to improve its efficiency and effectiveness and cut waste, fraud, and abuse whenever possible; and, (5) using the Internet of Things, innovators in the United States should commit to improving the quality of life for future generations by developing safe, new technologies aimed at tackling the most challenging societal issues facing the world.

This is a pretty solid statement from this group of Senators, who appear committed to advancing a pro-innovation, pro-growth approach to the emerging Internet of Things universe of technologies. This is exciting because this reflects the strong bipartisan approach American policymakers adopted two decades ago for the Internet more generally. America’s unified, “light-touch” Internet policy vision worked wonders for consumers and our economy before, and it can happen again thanks to a vision like the one these four Senators floated today.

As I explained in more detail when I testified at the February 11th Senate Commerce hearing on IoT issue:

America took a commanding lead in the digital economy because, in the mid-1990s, Congress and the Clinton administration crafted a nonpartisan vision for the Internet that protected “permissionless innovation” — the idea that experimentation with new technologies and business models should generally be permitted without prior approval. Congress embraced permissionless innovation by passing the Telecommunications Act of 1996 and rejecting archaic Analog Era command-and-control regulations for this exciting new medium. The Clinton administration embraced permissionless innovation with its 1997 “Framework for Global Electronic Commerce,” which outlined a clear vision for Internet governance that relied on civil society, voluntary agreements, and ongoing marketplace experimentation. This nonpartisan blueprint sketched out almost two decades ago for the Internet is every bit as sensible today as we begin crafting a policy paradigm for the Internet of Things.

I view this new Senate resolution on the Internet of Things as an effort to freshen up and extend that original vision that lawmakers crafted for the Internet back in the mid-1990s.  As I documented in my recent essay, “Why Permissionless Innovation Matters,” that vision has worked wonders for American consumers and our modern economy. Meanwhile, our international rivals languished on this front because they strapped their tech sectors with layers of regulatory red tape that thwarted digital innovation.

We got policy right once before in the United States, and we can get it right again with a policy vision like that found in this new Senate resolution for the Internet of Things.



What Cory Booker Gets about Innovation Policy
https://techliberation.com/2015/02/16/what-cory-booker-gets-about-innovation-policy/
Mon, 16 Feb 2015 15:32:43 +0000

Last Wednesday, it was my great pleasure to testify at a Senate Commerce Committee hearing entitled “The Connected World: Examining the Internet of Things.” The hearing focused “on how devices… will be made smarter and more dynamic through Internet technologies. Government agencies like the Federal Trade Commission, however, are already considering possible changes to the law that could have the unintended consequence of slowing innovation.”

But the session went well beyond the Internet of Things and became a much more wide-ranging discussion about how America can maintain its global leadership for the next-generation of Internet-enabled, data-driven innovation. On both sides of the aisle at last week’s hearing, one Senator after another made impassioned remarks about the enormous innovation opportunities that were out there. While doing so, they highlighted not just the opportunities emanating out of the IoT and wearable device space, but also many other areas, such as connected cars, commercial drones, and next-generation spectrum.

I was impressed by the energy and nonpartisan vision that the Senators brought to these issues, but I wanted to single out the passionate statement that Sen. Cory Booker (D-NJ) delivered when his turn came to speak, because he very eloquently articulated what’s at stake in the battle for global innovation supremacy in the modern economy. (Sen. Booker’s remarks were not published, but you can watch them starting at the 1:34:00 mark of the hearing video.)

Embrace the Opportunity

First, Sen. Booker stressed the enormous opportunity of the Internet of Things. “This is a phenomenal opportunity for a bipartisan, profoundly patriotic approach to an issue that can explode our economy. I think that there are trillions of dollars, creating countless jobs, improving quality of life, [and] democratizing our society,” he said. “We can’t even imagine the future that this portends of, and we should be embracing that.”

Sen. Booker has it exactly right. And for more details about the enormous innovation opportunities associated with the Internet of Things, see Section 2 of my new law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” which provides concrete evidence.

Protect America’s Competitive Advantage in the Innovation Age

Second, Sen. Booker highlighted the importance of getting our policy vision right to achieve those opportunities. He noted that “a lot of my concerns are what my Republican colleagues also echoed, which is we should be doing everything possible to encourage this and nothing to restrict it.”

“America right now is the net exporter of technology and innovation in the globe, and we can’t lose that advantage,” he said, and “we should continue to be the global innovators on these areas.” He continued on to say:

And so, from copyright issues, security issues, privacy issues… all of these things are worthy of us wrestling and grappling with, but to me we cannot stop human innovation and we can’t give advantages in human innovation to other nations that we don’t have. America should continue to lead.

This is something I have been writing actively about now for many years and I agree with Sen. Booker that America needs to get our policy vision right to ensure we don’t lose ground in the international competition to see who will lead the next wave of Internet-enabled innovation. As I noted in my testimony, “If America hopes to be a global leader in the Internet of Things, as it has been for the Internet more generally over the past two decades, then we first have to get public policy right. America took a commanding lead in the digital economy because, in the mid-1990s, Congress and the Clinton administration crafted a nonpartisan vision for the Internet that protected “permissionless innovation”—the idea that experimentation with new technologies and business models should generally be permitted without prior approval.”

Meanwhile, as I documented in my longer essay, “Why Permissionless Innovation Matters: Why does economic growth occur in some societies & not in others?” our international rivals languished on this front because they strapped their tech sectors with layers of regulatory red tape that thwarted digital innovation.

Reject Fear-Based Policymaking

Third, and perhaps most importantly, Sen. Booker stressed how essential it was that we reject a fear-based approach to public policymaking. Speaking of these new information technologies at the hearing, he noted, “there’s a lot of legitimate fears, but in the same way of every technological era, there must have been incredible fears.”

He cited, for example, the rise of air travel and the onset of human flight. Sen. Booker correctly noted that while that must have been quite jarring at first, we quickly came to realize the benefits of that new innovation. The same will be true for new technologies such as the Internet of Things, connected cars, and private drones, Booker argued. In each case, some early fears about these technologies could lead to an overly precautionary approach to policy. “But for us to do anything to inhibit that leap in humanity to me seems unfortunate,” he said.

Once again, the Senator has it exactly right. As I noted in my law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as my recent essay, “Muddling Through: How We Learn to Cope with Technological Change,” humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. More often than not, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Booker gets that and understands why we need to be patient to allow that process to unfold once again so that we can enjoy the abundance of riches that will accompany a more innovative economy.

Avoiding Global Innovation Arbitrage

Sen. Booker also highlighted how some existing government legal and regulatory barriers could hold back progress. On the wireless spectrum front he noted that “the government hoards too much spectrum and there is a need for more spectrum out there. Everything we are talking about,” he argued, “is going to necessitate more spectrum.” Again, 100% correct. Although some spectrum reform proposals (licensed vs. unlicensed, for example) will still prove contentious, we can at least all agree that we have to work together to find ways to open up more spectrum since the coming Internet of Things universe of technologies is going to demand lots of it.

Booker also noted that another area where fear undermines American leadership is the issue of private drone use. He noted that, “the potential possibilities for drone technology to alleviate burdens on our infrastructure, to empower commerce, innovation, jobs… to really open up unlimited opportunities in this country is pretty incredible to me.”

The problem is that existing government policies, enforced by the Federal Aviation Administration (FAA), have been holding back progress. And that has had consequences in terms of global competitiveness. “As I watch our government go slow in promulgating rules holding back American innovation,” Booker said, “what happened as a result of that is that innovation has spread to other countries that don’t have these rules (or have) put in place sensible regulations. But now we’re seeing technology exported from America and going other places.”

Correct again! I wrote about this problem in a recent essay on “global innovation arbitrage,” in which I noted how “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.”

That’s already happening with drone innovation, as I documented in that piece. Evidence suggests that the FAA’s heavy-handed and overly-precautionary approach to drones has encouraged some innovators to flock overseas in search of a more hospitable regulatory environment.

Luckily, just this weekend, the FAA finally announced its (much-delayed) rules for private drone operations. (Here’s a summary of those rules.) Unfortunately, the rules are a bit of a mixed bag, with some greater leeway being provided for very small drones, but the rules will still be too restrictive to allow for other innovative applications, such as widespread drone delivery (which has Amazon, among others, angry).

Bottom line: if our government doesn’t take a more flexible, light-touch approach to these and other cutting-edge technologies, then some of our most creative minds and companies are going to bolt.

I dealt with all of these innovation policy issues in far more detail in my latest little book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, which I condensed further still into this essay on “Embracing a Culture of Permissionless Innovation.” But Sen. Booker has offered us an even more concise explanation of just what’s at stake in the battle for innovation leadership in the modern economy. His remarks point the way forward and illustrate, as I have noted before, that innovation policy can and should be a nonpartisan issue.

 



My Testimony for Senate Internet of Things Hearing (February 11, 2015)

This morning at 9:45, the Senate Committee on Commerce, Science, and Transportation is holding a full committee hearing entitled, “The Connected World: Examining the Internet of Things.” According to the Committee press release, the hearing “will focus on how devices — from home heating systems controlled by users online, to wearable devices that track health and activity with the help of Internet-based analytics — will be made smarter and more dynamic through Internet technologies. Government agencies like the Federal Trade Commission, however, are already considering possible changes to the law that could have the unintended consequence of slowing innovation.”

It is my pleasure to have been invited to testify at this hearing. I’ve long had an interest in the policy issues surrounding the Internet of Things. All my relevant research products can be found online here, including my latest law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation.”

My testimony, which can be found on the Mercatus Center website here, begins by highlighting the three general conclusions of my work:

  1. First, the Internet of Things offers compelling benefits to consumers, companies, and our country’s national competitiveness that will only be achieved by adopting a flexible policy regime for this fast-moving space.
  2. Second, while there are formidable privacy and security challenges associated with the Internet of Things, top-down or one-size-fits-all regulation will limit innovative opportunities.
  3. Third, with those first two points in mind, we should seek alternative and less costly approaches to protecting privacy and security that rely on education, empowerment, and targeted enforcement of existing legal mechanisms. Long-term privacy and security protection requires a multifaceted approach incorporating many flexible solutions.

I continue on to elaborate on each point and then conclude my testimony on a note of optimism:

we should also never forget that, no matter how disruptive these new technologies may be in the short term, we humans have an extraordinary ability to adapt to technological change and bounce back from adversity. That same resilience will be true for the Internet of Things. We should remain patient and continue to embrace permissionless innovation to ensure that the Internet of Things thrives and American consumers and companies continue to be global leaders in the digital economy.

My testimony also includes seven appendices offering more detail for those interested. Two of those appendices focus on defining the parameters of the Internet of Things and documenting the projected economic impact associated with this rapidly-growing market. The other appendices reproduce essays I have published here before, including articles about the Federal Trade Commission’s recent Internet of Things report as well as my thoughts on how to craft a nonpartisan policy vision for the Internet of Things.

Finally, here’s a list of most of my recent work on Internet of Things and wearable technology policy issues for those interested in reading even more about the topic:

Some Initial Thoughts on the FTC Internet of Things Report (January 28, 2015)

Yesterday, the Federal Trade Commission (FTC) released its long-awaited report on “The Internet of Things: Privacy and Security in a Connected World.” The 55-page report is the result of a lengthy staff exploration of the issue, which kicked off with an FTC workshop on the issue that was held on November 19, 2013.

I’m still digesting all the details in the report, but I thought I’d offer a few quick thoughts on some of the major findings and recommendations from it. As I’ve noted here before, I’ve made the Internet of Things my top priority over the past year and have penned several essays about it here, as well as in a big new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology shortly. (Also, here’s a compendium of most of what I’ve done on the issue thus far.)

I’ll begin with a few general thoughts on the FTC’s report and its overall approach to the Internet of Things and then discuss a few specific issues that I believe deserve attention.

Big Picture, Part 1: Should Best Practices Be Voluntary or Mandatory?

Generally speaking, the FTC’s report contains a variety of “best practice” recommendations to get Internet of Things innovators to take steps to ensure greater privacy and security “by design” in their products. Most of those recommended best practices are sensible as general guidelines for innovators, but the really sticky question here continues to be this: When, if ever, should “best practices” become binding regulatory requirements?

The FTC does a bit of a dance when answering that question. Consider how, in the executive summary of the report, the Commission answers the question regarding the need for additional privacy and security regulation: “Commission staff agrees with those commenters who stated that there is great potential for innovation in this area, and that IoT-specific legislation at this stage would be premature.” But, just a few lines later, the agency (1) “reiterates the Commission’s previous recommendation for Congress to enact strong, flexible, and technology-neutral federal legislation to strengthen its existing data security enforcement tools and to provide notification to consumers when there is a security breach;” and (2) “recommends that Congress enact broad-based (as opposed to IoT-specific) privacy legislation.”

Here and elsewhere, the agency repeatedly stresses that it is not seeking IoT-specific regulation; merely “broad-based” digital privacy and security legislation. The problem is that once you understand what the IoT is all about you come to realize that this largely represents a distinction without a difference. The Internet of Things is simply the extension of the Net into everything we own or come into contact with. Thus, this idea that the agency is not seeking IoT-specific rules sounds terrific until you realize that it is actually seeking something far more sweeping: greater regulation of all online / digital interactions. And because “the Internet” and “the Internet of Things” will eventually (if they are not already) be considered synonymous, this notion that the agency is not proposing technology-specific regulation is really quite silly.

Now, it remains unclear whether there exists any appetite on Capitol Hill for “comprehensive” legislation of any variety – although perhaps we’ll learn more about that possibility when the Senate Commerce Committee hosts a hearing on these issues on February 11. But at least thus far, “comprehensive” or “baseline” digital privacy and security bills have been non-starters.

And that’s for good reason in my opinion: Such regulatory proposals could take us down the path that Europe charted in the late 1990s with onerous “data directives” and suffocating regulatory mandates for the IT / computing sector. The results of this experiment have been unambiguous, as I documented in congressional testimony in 2013. I noted there how America’s Internet sector came to be the envy of the world while it was hard to name any major Internet company from Europe. Whereas America embraced “permissionless innovation” and let creative minds develop one of the greatest success stories in modern history, the Europeans adopted a “Mother, May I” regulatory approach for the digital economy. America’s more flexible, light-touch regulatory regime leaves more room for competition and innovation compared to Europe’s top-down regime. Digital innovation suffered over there while it blossomed here.

That’s why we need to be careful about adopting the sort of “broad-based” regulatory regime that the FTC recommends in this and previous reports.

Big Picture, Part 2: Does the FTC Really Need More Authority?

Something else is going on in this report that has also been happening in all the FTC’s recent activity on digital privacy and security matters: The agency has been busy laying the groundwork for its own expansion.

In this latest report, for example, the FTC argues that

Although the Commission currently has authority to take action against some IoT-related practices, it cannot mandate certain basic privacy protections… The Commission has continued to recommend that Congress enact strong, flexible, and technology-neutral legislation to strengthen the Commission’s existing data security enforcement tools and require companies to notify consumers when there is a security breach.

In other words, this agency wants more authority. And we are talking about sweeping authority here that would transcend its already sweeping authority to police “unfair and deceptive practices” under Section 5 of the FTC Act. Let’s be clear: It would be hard to craft a law that grants an agency more comprehensive and open-ended consumer protection authority than Section 5. The meaning of those terms — “unfairness” and “deception” — has always been a contentious matter, and at times the agency has abused its discretion by exploiting that ambiguity.

Nonetheless, Sec. 5 remains a powerful enforcement tool for the agency and one that has been wielded aggressively in recent years to police digital economy giants and small operators alike. Generally speaking, I’m alright with most Sec. 5 enforcement, especially since that sort of retrospective policing of unfair and deceptive practices is far less likely to disrupt permissionless innovation in the digital economy. That’s because it does not subject digital innovators to the sort of “Mother, May I” regulatory system that European entrepreneurs face. But an expansion of the FTC’s authority via more “comprehensive, baseline” privacy and security regulatory policies threatens to convert America’s more sensible bottom-up and responsive regulatory system into the sort of innovation-killing regime we see on the other side of the Atlantic.

Here’s the other thing we can’t forget when it comes to the question of what additional authority to give the FTC over privacy and security matters: The FTC is not the end of the enforcement story in America. Other enforcement mechanisms exist, including privacy torts, class action litigation, property and contract law, state enforcement agencies, and other targeted privacy statutes. I’ve summarized all these additional enforcement mechanisms in my recent law review article referenced above. (See section VI of the paper.)

FIPPS, Part 1: Notice & Choice vs. Use-Based Restrictions

Next, let’s drill down a bit and examine some of the specific privacy and security best practices that the agency discusses in its new IoT report.

The FTC report highlights how the IoT creates serious tensions for many traditional Fair Information Practice Principles (FIPPs). The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. But the report is mostly focused on notice and choice as well as data minimization.

When it comes to notice and choice, the agency wants to keep hope alive that it will still be applicable in an IoT world. I’m sympathetic to this effort because it is quite sensible for all digital innovators to do their best to provide consumers with adequate notice about data collection practices and then give them sensible choices about it. Yet, like the agency, I agree that “offering notice and choice is challenging in the IoT because of the ubiquity of data collection and the practical obstacles to providing information without a user interface.”

The agency has a nuanced discussion of how context matters in providing notice and choice for IoT, but one can’t help but think that even they must realize that the game is over, to some extent. The increasing miniaturization of IoT devices and the ease with which they suck up data means that traditional approaches to notice and choice just aren’t going to work all that well going forward. It is almost impossible to envision how a rigid application of traditional notice and choice procedures would work in practice for the IoT.

Relatedly, as I wrote here last week, the Future of Privacy Forum (FPF) recently released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” that notes how FIPPs “are a valuable set of high-level guidelines for promoting privacy, [but] given the nature of the technologies involved, traditional implementations of the FIPPs may not always be practical as the Internet of Things matures.” That’s particularly true of the notice and choice FIPPS.

But the FTC isn’t quite ready to throw in the towel and make the complete move toward “use-based restrictions,” as many academics have. (Note: I have a lengthy discussion of this migration toward use-based restrictions in section IV.D of my law review article.) Use-based restrictions would focus on specific uses of data that are particularly sensitive and for which there is widespread agreement they should be limited or disallowed altogether. But use-based restrictions are, ironically, controversial from both the perspective of industry and privacy advocates (albeit for different reasons, obviously).

The FTC doesn’t really know where to go next with use-based restrictions. The agency says that, on one hand, it “has incorporated certain elements of the use-based model into its approach” to enforcement in the past. On the other hand, the agency says it has concerns “about adopting a pure use-based model for the Internet of Things,” since it may not go far enough in addressing the growth of more widespread data collection, especially of more sensitive information.

In sum, the agency appears to be keeping the door open on this front and hoping that a best-of-all-worlds solution miraculously emerges that extends both notice and choice and use-based limitations as the IoT expands. But the agency’s new report doesn’t give us any sort of blueprint for how that might work, and that’s likely for good reason: because it probably won’t work all that well in practice and there will be serious costs in terms of lost innovation if they try to force unworkable solutions on this rapidly evolving marketplace.

FIPPS, Part 2: Data Minimization

The biggest policy fight that is likely to come out of this report involves the agency’s push for data minimization. The report recommends that, to minimize the risks associated with excessive data collection:

companies should examine their data practices and business needs and develop policies and practices that impose reasonable limits on the collection and retention of consumer data. However, recognizing the need to balance future, beneficial uses of data with privacy protection, staff’s recommendation on data minimization is a flexible one that gives companies many options. They can decide not to collect data at all; collect only the fields of data necessary to the product or service being offered; collect data that is less sensitive; or deidentify the data they collect. If a company determines that none of these options will fulfill its business goals, it can seek consumers’ consent for collecting additional, unexpected categories of data…
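To make the middle two of those options concrete, here is a minimal, purely illustrative sketch of collecting only the fields necessary to a service and deidentifying the remaining identifier. Everything in it — the field names, the salt, and the `minimize` helper — is my own hypothetical example, not anything prescribed by the FTC report or used by any actual product:

```python
import hashlib

# Hypothetical example values -- not drawn from the FTC report.
ALLOWED_FIELDS = {"device_id", "temperature", "timestamp"}  # collect only what the service needs
SALT = "example-salt"  # a real deployment would use a secret, rotating salt

def minimize(reading: dict) -> dict:
    """Drop fields outside the declared purpose; pseudonymize the device identifier."""
    kept = {k: v for k, v in reading.items() if k in ALLOWED_FIELDS}
    if "device_id" in kept:
        # Replace the raw identifier with a salted hash so records can still be
        # linked over time without retaining the original identifier.
        kept["device_id"] = hashlib.sha256(
            (SALT + str(kept["device_id"])).encode()
        ).hexdigest()[:12]
    return kept

raw = {
    "device_id": "thermo-42",
    "temperature": 21.5,
    "timestamp": "2015-01-28T14:00:00Z",
    "owner_name": "Jane Doe",       # sensitive field: never stored
    "home_address": "123 Main St",  # sensitive field: never stored
}
print(minimize(raw))
```

The point is simply that these recommendations describe ordinary engineering choices at the data-ingestion layer, not exotic ones, which is partly why the debate centers on whether they should be voluntary guidance or binding rules.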

This is an unsurprising recommendation in light of the fact that, in previous major speeches on the issue, FTC Chairwoman Edith Ramirez argued that, “information that is not collected in the first place can’t be misused,” and that:

The indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices. And remember, not all data is created equally. Just as there is low quality iron ore and coal, there is low quality, unreliable data. And old data is of little value.

In my forthcoming law review article, I discuss the problem with such reasoning at length and note:

if Chairwoman Ramirez’s approach to a preemptive data use “commandment” were enshrined into a law that said, “Thou shall not collect and hold onto personal information unnecessary to an identified purpose.” Such a precautionary limitation would certainly satisfy her desire to avoid hypothetical worst-case outcomes because, as she noted, “information that is not collected in the first place can’t be misused,” but it is equally true that information that is never collected may never lead to serendipitous data discoveries or new products and services that could offer consumers concrete benefits. “The socially beneficial uses of data made possible by data analytics are often not immediately evident to data subjects at the time of data collection,” notes Ken Wasch, president of the Software & Information Industry Association. If academics and lawmakers succeed in imposing such precautionary rules on the development of IoT and wearable technologies, many important innovations may never see the light of day.

FTC Commissioner Josh Wright issued a dissenting statement to the report that lambasted the staff for not conducting more robust cost-benefit analysis of the new proposed restrictions, and specifically cited how problematic the agency’s approach to data minimization was. “[S]taff merely acknowledges it would potentially curtail innovative uses of data. . . [w]ithout providing any sense of the magnitude of the costs to consumers of foregoing this innovation or of the benefits to consumers of data minimization,” he says. Similarly, in her separate statement, FTC Commissioner Maureen K. Ohlhausen worried about the report’s overly precautionary approach on data minimization when noting that, “without examining costs or benefits, [the staff report] encourages companies to delete valuable data — primarily to avoid hypothetical future harms. Even though the report recognizes the need for flexibility for companies weighing whether and what data to retain, the recommendation remains overly prescriptive,” she concludes.

Regardless, the battle lines have been drawn by the FTC staff report as the agency has made it clear that it will be stepping up its efforts to get IoT innovators to significantly slow or scale back their data collection efforts. It will be very interesting to see how the agency enforces that vision going forward and how it impacts innovation in this space. All I know is that the agency has not conducted a serious evaluation here of the trade-offs associated with such restrictions. I penned another law review article last year offering “A Framework for Benefit-Cost Analysis in Digital Privacy Debates” that they could use to begin that process if they wanted to get serious about it.

The Problem with the “Regulation Builds Trust” Argument

One of the interesting things about this and previous FTC reports on privacy and security matters is how often the agency premises the case for expanded regulation on “building trust.” The argument goes something like this (as found on page 51 of the new IoT report): “Staff believes such legislation will help build trust in new technologies that rely on consumer data, such as the IoT. Consumers are more likely to buy connected devices if they feel that their information is adequately protected.”

This is one of those commonly-heard claims that sounds so straightforward and intuitive that few dare question it. But there are problems with the logic of the “we-need-regulation-to-build-trust-and-boost-adoption” arguments we often hear in debates over digital privacy.

First, the agency bases its argument mostly on polling data. “Surveys also show that consumers are more likely to trust companies that provide them with transparency and choices,” the report says. Well, of course surveys say that! It’s only logical that consumers will say this, just as they will always say they value privacy and security more generally when asked. You might as well ask people if they love their mothers!

But what consumers claim to care about and what they actually do in the real world are often two very different things. In the real world, people balance privacy and security alongside many other values, including choice, convenience, cost, and more. This leads to the so-called “privacy paradox,” or the problem of many people saying one thing and doing quite another when it comes to privacy matters. Put simply, people take some risks — including some privacy and security risks — in order to reap other rewards or benefits. (See this essay for more on the problem with most privacy polls.)

Second, online activity and the Internet of Things are both growing like gangbusters despite the privacy and security concerns that the FTC raises. Virtually every metric I’ve looked at that tracks IoT activity shows astonishing growth and product adoption, and projections by all the major consultancies that have studied this consistently predict the continued rapid growth of IoT activity. Now, how can this be the case if, as the FTC claims, we’ll only see the IoT really take off after we get more regulation aimed at bolstering consumer trust? Of course, the agency might argue that the IoT will grow at an even faster clip than it is right now, but there is no way to prove that one way or the other. In any event, the agency cannot possibly claim that the IoT isn’t already growing at a very healthy clip — indeed, a lot of the hand-wringing the staff engages in throughout the report is premised precisely on the fact that the IoT is exploding faster than our ability to keep up with it! In reality, it seems far more likely that cost and complexity are the bigger impediments to faster IoT adoption, just as cost and complexity have always been the factors weighing most heavily on the adoption of other digital technologies.

Third, let’s say that the FTC is correct – and it is – when it says that a certain amount of trust is needed in terms of IoT privacy and security before consumers are willing to use more of these devices and services in their everyday lives. Does the agency imagine that IoT innovators don’t know that? Are markets and consumers completely irrational? The FTC says on page 44 of the report that, “If a company decides that a particular data use is beneficial and consumers disagree with that decision, this may erode consumer trust.” Well, if such a mismatch does exist, then the assumption should be that consumers can and will push back, or seek out new and better options. And other companies should be able to sense the market opportunity here to offer a more privacy-centric offering for those consumers who demand it in order to win their trust and business.

Finally, and perhaps most obviously, the problem with the argument that increased regulation will help IoT adoption is that it ignores how the regulations put in place to achieve greater “trust” might become so onerous or costly in practice that there won’t be as many innovations for us to adopt to begin with! Again, regulation — even very well-intentioned regulation — has costs and trade-offs.

In any event, if the agency is going to premise the case for expanded privacy regulation on this notion, it is going to have to do far more to make its case than simply assert it.

Once Again, No Appreciation of the Potential for Societal Adaptation

Let’s briefly shift to a subject that isn’t discussed in the FTC’s new IoT report at all.

Regular readers may get tired of me making this point, but I feel it is worth stressing again: Major reports and statements by public policymakers about rapidly-evolving emerging technologies are always initially prone to stress panic over patience. Rarely are public officials willing to step back, take a deep breath, and consider how a resilient citizenry might adapt to new technologies as they gradually assimilate new tools into their lives.

That is really sad, when you think about it, since humans have again and again proven capable of responding to technological change in creative ways by adopting new personal and social norms. I won’t belabor the point because I’ve already written volumes on this issue elsewhere. I tried to condense all my work into a single essay entitled, “Muddling Through: How We Learn to Cope with Technological Change.” Here’s the key takeaway:

humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Again, you almost never hear regulators or lawmakers discuss this process of individual and social adaptation even though they must know there is something to it. One explanation is that every generation has its own techno-boogeymen and loses faith in humanity’s ability to adapt to them.

To believe that we humans are resilient, adaptable creatures should not be read as indifference to the significant privacy and security challenges associated with any of the new technologies in our lives today, including IoT technologies. Overly-exuberant techno-optimists are often too quick to adopt a “Just-Get-Over-It!” attitude in response to the privacy and security concerns raised by others. But it is equally unforgivable for those who are worried about those same concerns to utterly ignore the reality of human adaptation to new technological realities.

Why are Educational Approaches Merely an Afterthought?

One final thing that troubled me about the FTC report was the way consumer and business education is mostly an afterthought. This is one of the most important roles that the FTC can and should play in terms of explaining potential privacy and security vulnerabilities to the general public and product developers alike.

Alas, the agency devotes so much ink to the more legalistic questions about how to address these issues, that all we end up with in the report is this one paragraph on consumer and business education:

Consumers should understand how to get more information about the privacy of their IoT devices, how to secure their home networks that connect to IoT devices, and how to use any available privacy settings. Businesses, and in particular small businesses, would benefit from additional information about how to reasonably secure IoT devices. The Commission staff will develop new consumer and business education materials in this area.

I applaud that language, and I very much hope that the agency is serious about plowing more effort and resources into developing new consumer and business education materials in this area. But I'm a bit shocked that the FTC report didn't even bother mentioning the excellent material already available on the "On Guard Online" website it helped create with a dozen other federal agencies. Worse yet, the agency failed to highlight the many other privacy education and "digital citizenship" efforts that are underway today to help on this front. I discuss those efforts in more detail in the closing section of my recent law review article.

I hope that the agency spends a little more time working on the development of new consumer and business education materials in this area instead of trying to figure out how to craft a quasi-regulatory regime for the Internet of Things. As I noted last year in this Maine Law Review article, that would be a far more productive use of the agency's expertise and resources. I argued there that "policymakers can draw important lessons from the debate over how best to protect children from objectionable online content" and apply them to debates about digital privacy. Specifically, after a decade of searching for legalistic solutions to online safety concerns — and convening a half-dozen blue ribbon task forces to study the issue — we finally saw a rough consensus emerge that no single "silver-bullet" technological solution or legal quick-fix would work and that, ultimately, education and empowerment represented the better use of our time and resources. What was true for child safety is equally true for privacy and security for the Internet of Things.

It’s a shame the FTC staff squandered the opportunity it had with this new report to highlight all the good that could be done by getting more serious about focusing first on those alternative, bottom-up, less costly, and less controversial solutions to these challenging problems. One day we’ll all wake up and realize that we spent a lost decade debating legalistic solutions that were either technically unworkable or politically impossible. Just imagine if all the smart people who are spending their time and energy on those approaches right now were instead busy devising and pushing educational and empowerment-based solutions!

One day we’ll get there. Sadly, if the FTC report is any indication, that day is still a ways off.

Striking a Sensible Balance on the Internet of Things and Privacy https://techliberation.com/2015/01/16/striking-a-sensible-balance-on-the-internet-of-things-and-privacy/ https://techliberation.com/2015/01/16/striking-a-sensible-balance-on-the-internet-of-things-and-privacy/#comments Fri, 16 Jan 2015 21:08:39 +0000 http://techliberation.com/?p=75274

This week, the Future of Privacy Forum (FPF) released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” which I believe can help us find policy consensus regarding the privacy and security concerns associated with the Internet of Things (IoT) and wearable technologies. I’ve been monitoring IoT policy developments closely and I recently published a big working paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will appear shortly in the Richmond Journal of Law & Technology. I have also penned several other essays on IoT issues. So, I will be relating the FPF report to some of my own work.

The new FPF report, which was penned by Christopher Wolf, Jules Polonetsky, and Kelsey Finch, aims to accomplish the same goal I had in my own recent paper: sketching out constructive and practical solutions to the privacy and security issues associated with the IoT and wearable tech so as not to discourage the amazing, life-enriching innovations that could flow from this space. Flexibility is the key, they argue. “Premature regulation at an early stage in wearable technological development may freeze or warp the technology before it achieves its potential, and may not be able to account for technologies still to come,” the authors note. “Given that some uses are inherently more sensitive than others, and that there may be many new uses still to come, flexibility will be critical going forward.” (p. 3)

That flexible approach is at the heart of how the FPF authors want to see Fair Information Practice Principles (FIPPs) applied in this space. The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. The FPF authors correctly note that,

The FIPPs do not establish specific rules prescribing how organizations should provide privacy protections in all contexts, but rather provide high-level guidelines. Over time, as technologies and the global privacy context have changed, the FIPPs have been presented in different ways with different emphases. Accordingly, we urge policymakers to enable the adaptation of these fundamental principles in ways that reflect technological and market developments. (p. 4)

They continue on to explain how each of the FIPPs can provide a certain degree of general guidance for the IoT and wearable tech, but also caution that: “A rigid application of the FIPPs could inhibit these technologies from even functioning, and while privacy protections remain essential, a degree of flexibility will be key to ensuring the Internet of Things can develop in ways that best help consumer needs and desires.” (p. 4) And throughout the report, the FPF authors stress the need for the FIPPs to be “practically applied” and they nicely explain how the appropriate application of any particular one of the FIPPs “will depend on the circumstances.” For those reasons, they conclude by saying, “we urge policymakers to adopt a forward-thinking, flexible application of the FIPPs.” (p. 11)

The approach that Wolf, Polonetsky, and Finch set forth in this new FPF report is very much consistent with the policy framework I sketched out in my forthcoming law review article. “The need for flexibility and adaptability will be paramount if innovation is to continue in this space,” I argued. In essence, best practices need to remain just that: best practices, not fixed, static, top-down regulatory edicts. As I noted:

Regardless of whether they will be enforced internally by firms or by ex post FTC enforcement actions, best practices must not become a heavy-handed, quasi-regulatory straitjacket. A focus on security and privacy by design does not mean those are the only values and design principles that developers should focus on when innovating. Cost, convenience, choice, and usability are all important values too. In fact, many consumers will prioritize those values over privacy and security — even as activists, academics, and policymakers simultaneously suggest that more should be done to address privacy and security concerns. Finally, best practices for privacy and security issues will need to evolve as social acceptance of various technologies and business practices evolve. For example, had “privacy by design” been interpreted strictly when wireless geolocation capabilities were first being developed, these technologies might have been shunned because of the privacy concerns they raised. With time, however, geolocation technologies have become a better understood and more widely accepted capability that consumers have come to expect will be embedded in many of their digital devices.  Those geolocation capabilities enable services that consumers now take for granted, such as instantaneous mapping services and real-time traffic updates. This is why flexibility is crucial when interpreting the privacy and security best practices.

The only thing I think was missing from the FPF report was a broader discussion of other constructive privacy and security solutions involving education, etiquette, and empowerment. I would also have liked to see some discussion of how other existing legal mechanisms — privacy torts, contractual enforcement mechanisms, property rights, state “peeping Tom” laws, and existing privacy statutes — might cover some of the hard cases that could develop on this front. I discuss those and other “bottom-up” solutions in Section IV of my law review article and note that they can contribute to the sort of “layered” approach we need to address privacy and security concerns for the IoT and wearable tech.

In any event, I encourage everyone to check out the new Future of Privacy Forum report as well as the many excellent best practice guidelines they have put together to help innovators adopt sensible privacy and security best practices. FPF has done some great work on this front.

Additional Reading

Making Sure the “Trolley Problem” Doesn’t Derail Life-Saving Innovation https://techliberation.com/2015/01/13/making-sure-the-trolley-problem-doesnt-derail-life-saving-innovation/ https://techliberation.com/2015/01/13/making-sure-the-trolley-problem-doesnt-derail-life-saving-innovation/#respond Tue, 13 Jan 2015 18:07:16 +0000 http://techliberation.com/?p=75238

I want to highlight an important new blog post (“Slow Down That Runaway Ethical Trolley“) on the ethical trade-offs at work with autonomous vehicle systems by Bryant Walker Smith, a leading expert on these issues. Writing over at Stanford University’s Center for Internet and Society blog, Smith notes that, while serious ethical dilemmas will always be present with such technologies, “we should not allow the perfect to be the enemy of the good.” He notes that many ethical philosophers, legal theorists, and media pundits have recently been actively debating variations of the classic “Trolley Problem,” and its ramifications for the development of autonomous or semi-autonomous systems. (Here’s some quick background on the Trolley Problem, a thought experiment involving the choices made during various no-win accident scenarios.) Commenting on the increased prevalence of the Trolley Problem in these debates, Smith observes that:

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles. Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

That’s a great question, and one that Ryan Hagemann and I put some thought into as part of our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential cost of delaying their widespread adoption. When it comes to “Trolley Problem”-like ethical questions, Hagemann and I argue that, “these ethical considerations need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue on later in the paper:

Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)

None of this should be read to suggest that the ethical issues being raised by some philosophers or other pundits are unimportant. To the contrary, they are raising legitimate concerns about how ethics are “baked-in” to the algorithms that control autonomous or semi-autonomous systems. It is vital we continue to debate the wisdom of the choices made by the companies and programmers behind those technologies and consider better ways to inform and improve their judgments about how to ‘optimize the sub-optimal,’ so to speak. After all, when you are making decisions about how to minimize the potential for harm — including the loss of life — there are many thorny issues that must be considered and all of them will have downsides. Smith considers a few when he notes:

Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.

Again, these are all valid questions deserving serious exploration, but we’re not having this discussion in a vacuum. Ivory Tower debates cannot be divorced from real-world realities. Although road safety has been improving for many years, people are still dying at a staggering rate due to vehicle-related accidents. Specifically, in 2012, there were 33,561 total traffic fatalities (92 per day) and 2,362,000 people injured (6,454 per day) in over 5,615,000 reported crashes. And, to reiterate, the bulk of those accidents were due to human error.

That is a staggering toll and anything we can do to reduce it significantly is something we need to be pursuing with great vigor, even while we continue to sort through some of those challenging ethical issues associated with automated systems and algorithms. Smith argues, correctly in my opinion, that “a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. … [T]his simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.”

Quite right. Indeed, the next time someone poses an ethical thought experiment along the lines of the Trolley Problem, do what I do and reverse the equation. Ask them about the ethics of slowing down the introduction of a technology into our society that would result in a (potentially significant) reduction of the nearly 100 deaths and over 6,000 injuries caused by vehicle-related accidents each day in the United States. Because that’s no hypothetical thought experiment; that’s the world we live in right now.


(P.S. The late, great political scientist Aaron Wildavsky crafted a framework for considering these complex issues in his brilliant 1988 book, Searching for Safety. No book has had a more significant influence on my thinking about these and other “risk trade-off” issues since I first read it 25 years ago. I cannot recommend it highly enough. I discussed Wildavsky’s framework and vision in my recent little book on “Permissionless Innovation.” Readers might also be interested in my August 2013 essay, “On the Line between Technology Ethics vs. Technology Policy,” which featured an exchange with ethical philosopher Patrick Lin, co-editor of an excellent collection of essays on Robot Ethics: The Ethical and Social Implications of Robotics. You should add that book to your shelf if you are interested in these issues.)


My Writing on Internet of Things (Thus Far) https://techliberation.com/2015/01/05/my-writing-on-internet-of-things-thus-far/ https://techliberation.com/2015/01/05/my-writing-on-internet-of-things-thus-far/#comments Mon, 05 Jan 2015 16:55:41 +0000 http://techliberation.com/?p=75210

I’ve spent much of the past year studying the potential public policy ramifications associated with the rise of the Internet of Things (IoT). As I was preparing some notes for my Jan. 6 panel discussion on “Privacy and the IoT: Navigating Policy Issues” at the 2015 CES show, I went back and collected all my writing on IoT issues so that I would have everything in one place. Thus, down below I have listed most of what I’ve done over the past year or so. Most of this writing is focused on the privacy and security implications of the Internet of Things, and wearable technologies in particular.

I plan to stay on top of these issues in 2015 and beyond because, as I noted when I spoke on a previous CES panel on these issues, the Internet of Things finds itself at the center of what we might think of as a perfect storm of public policy concerns: privacy, safety, security, intellectual property, economic / labor disruptions, automation concerns, wireless spectrum issues, technical standards, and more. When a new technology raises one or two of these policy concerns, innovators in those sectors can expect some interest and inquiries from lawmakers or regulators. But when a new technology potentially touches all of these issues, then it means innovators in that space can expect an avalanche of attention and a potential world of regulatory trouble. Moreover, it sets the stage for a grand “clash of visions” about the future of IoT technologies that will continue to intensify in coming months and years.

That’s why I’ll be monitoring developments closely in this field going forward. For now, here’s what I’ve done on this issue as I prepare to head out to Las Vegas for another CES extravaganza that promises to showcase so many exciting IoT technologies.

The 10 Most-Read Posts of 2014 https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/ https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/#comments Tue, 30 Dec 2014 16:36:34 +0000 http://techliberation.com/?p=75156

As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy.

  10. New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts.

In July, Jerry Brito wrote about New York’s proposed framework for regulating digital currencies like Bitcoin.

My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.
  9. Google Fiber: The Uber of Broadband

In February, I noted some of the parallels between Google Fiber and ride-sharing, in that new entrants are upending the competitive and regulatory status quo to the benefit of consumers.

The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly-situated competitors and resist the temptation to remedy past errors with more distortions.
  8. The Debate over the Sharing Economy: Talking Points & Recommended Reading

In September, Adam Thierer appeared on Fox Business Network’s Stossel show to talk about the sharing economy. In a TLF post, he expands upon his televised commentary and highlights five main points.

  7. CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?

After attending the 2014 Consumer Electronics Show in January, Adam wrote a prescient post about the promise of the Internet of Things and the regulatory risks ahead.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers…. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.
  6. Defining “Technology”

Earlier this year, Adam compiled examples of how technologists and experts define “technology,” with entries ranging from the Oxford Dictionary to Peter Thiel. It’s a slippery exercise, but

if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”
  5. The Problem with “Pessimism Porn”

Adam highlights the tendency of tech press, academics, and activists to mislead the public about technology policy by sensationalizing technology risks.

The problem with all this, of course, is that it perpetuates societal fears and distrust. It also sometimes leads to misguided policies based on hypothetical worst-case thinking…. [I]f we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon them—it means that best-case scenarios will never come about.
  4. Mark T. Williams predicted Bitcoin’s price would be under $10 by now; it’s over $600

Professor Mark T. Williams predicted in December 2013 that by mid-2014, Bitcoin’s price would fall to below $10. In mid-2014, Jerry commends Prof. Williams for providing, unlike most Bitcoin watchers, a bold and falsifiable prediction about Bitcoin’s value. However, as Jerry points out, that prediction was erroneous: Bitcoin’s 2014 collapse never happened and the digital currency’s value exceeded $600.

  3. What Vox Doesn’t Get About the “Battle for the Future of the Internet”

In May, Tim Lee wrote a Vox piece about net neutrality and the Netflix-Comcast interconnection fight. Eli Dourado posted a widely-read and useful corrective to some of the handwringing in the Vox piece about interconnection, ISP market power, and the future of the Internet.

I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless…. There is nothing unseemly about Netflix making … payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).
  2. Muddling Through: How We Learn to Cope with Technological Change

The second most-read TLF post of 2014 is also the longest and most philosophical in this top-10 list. Adam wrote a popular and in-depth post about the social effects of technological change and notes that technology advances are largely for consumers’ benefit, yet “[m]odern thinking and scholarship on the impact of technological change on societies has been largely dominated by skeptics and critics.” The nature of human resilience, Adam explains, should encourage a cautiously optimistic view of technological change.

  1. Help me answer Senate committee’s questions about Bitcoin

Two days into 2014, Jerry wrote the most-read TLF piece of the past year. Jerry had testified before the Senate Homeland Security and Governmental Affairs Committee in 2013 as an expert on Bitcoin. The Committee requested more information about Bitcoin post-hearing and Jerry solicited comment from our readers.

Thank you to our loyal readers for continuing to visit The Technology Liberation Front. It was a busy year for tech and telecom policy, and 2015 promises to be similarly exciting. Have a happy and safe New Year!

A Nonpartisan Policy Vision for the Internet of Things https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/ https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/#comments Thu, 11 Dec 2014 20:07:11 +0000 http://techliberation.com/?p=75076

What sort of public policy vision should govern the Internet of Things? I’ve spent a lot of time thinking about that question in essays here over the past year, as well as in a new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology early next year.

But I recently heard three policymakers articulate their recommended vision for the Internet of Things (IoT) and I found their approach so inspiring that I wanted to discuss it here in the hopes that it will become the foundation for future policy in this arena.

Last Thursday, it was my pleasure to attend a Center for Data Innovation (CDI) event on “How Can Policymakers Help Build the Internet of Things?” As the title implied, the goal of the event was to discuss how to achieve the vision of a more fully-connected world and, more specifically, how public policymakers can help facilitate that objective. It was a terrific event with many excellent panel discussions and keynote addresses.

Two of those keynotes were delivered by Senators Deb Fischer (R-Neb.) and Kelly Ayotte (R-N.H.). Below I will offer some highlights from their remarks and then relate them to the vision set forth by Federal Trade Commission (FTC) Commissioner Maureen K. Ohlhausen in some of her recent speeches. I will conclude by discussing how the Ayotte-Fischer-Ohlhausen vision can be seen as the logical extension of the Clinton Administration’s excellent 1997 Framework for Global Electronic Commerce, which proposed a similar policy paradigm for the Internet more generally. This shows how crafting policy for the IoT can and should be a nonpartisan affair.

Sen. Deb Fischer

In her opening remarks at the CDI event last week, Sen. Deb Fischer explained how “the Internet of Things can be a game changer for the U.S. economy and for the American consumer.” “It gives people more information and better tools to analyze data to make more informed choices,” she noted.

After outlining some of the potential benefits associated with the Internet of Things, Sen. Fischer continued on to explain why it is essential we get public policy incentives right first if we hope to unlock the full potential of these new technologies. Specifically, she argued that:

In order for Americans to receive the maximum benefits from increased connectivity, there are two things the government must avoid. First, policymakers can’t bury their heads in the sand and pretend this technological revolution isn’t happening only to wake up years down the road and try to micromanage a fast-changing, dynamic industry. Second, the federal government must also avoid regulation just for the sake of regulation. We need thoughtful, pragmatic responses and narrow solutions to any policy issues that arise. For too long, the only “strategy” in Washington policy-making has been to react to crisis after crisis. We should dive into what this means for U.S. global competitiveness, consumer welfare, and economic opportunity before the public policy challenges overwhelm us, before legislative and executive branches of government – or foreign governments – react without all the facts.

Fischer concluded by noting that, “it’s entirely appropriate for the U.S. government to think about how to modernize its regulatory frameworks, consolidate, renovate, and overhaul obsolete rules. We’re destined to lose to the Chinese or others if the Internet of Things is governed in the United States by rules that pre-date the VCR.”

Sen. Kelly Ayotte

Like Sen. Fischer, Ayotte similarly stressed the many economic opportunities associated with IoT technologies for both consumers and producers alike. [Note: Sen. Ayotte did not publish her remarks on her website, but you can watch her speech from the CDI event beginning around the 17-minute mark of the event video.]

Ayotte also noted that IoT is going to be a major topic for the Senate Commerce Committee and that there will be an upcoming hearing on the issue. She said that the role of the Committee will be to ensure that the various agencies looking into IoT issues are not issuing “conflicting regulatory directives” and “that what is being done makes sense and allows for future innovation that we can’t even anticipate right now.” Among the agencies she cited that are currently looking into IoT issues: FTC (privacy & security), FDA (medical device apps), FCC (wireless issues), FAA (commercial drones), NHTSA (intelligent vehicle technology), NTIA (multistakeholder privacy reviews), as well as state lawmakers and regulatory agencies.

Sen. Ayotte then explained what sort of policy framework America needed to adopt to ensure that the full potential of the Internet of Things could be realized. She framed the choice lawmakers are confronted with as follows:

we as policymakers can either create an environment that allows that to continue to grow, or one that thwarts it. To stay on the cutting edge, we need to make sure that our regulatory environment is conducive to fostering innovation.” […] “we’re living in the Dark Ages in the way some of these regulations have been framed. Companies must be properly incentivized to invest in the future, and government shouldn’t be a deterrent to innovation and job-creation.

Ayotte also stressed that “technology continues to evolve so rapidly there is no one-size-fits-all regulatory approach” that can work for a dynamic environment like this. “If legislation drives technology, the technology will be outdated almost instantly,” and “that is why humility is so important,” she concluded.

The better approach, she argued, was to let technology evolve freely in a “permissionless” fashion, see what problems developed, and then address them accordingly. “[A] top-down, preemptive approach is never the best policy” and will only serve to stifle innovation, she argued. “If all regulators looked with some humility at how technology is used and whether we need to regulate or not to regulate, I think innovation would stand to benefit.”

FTC Commissioner Maureen K. Ohlhausen

Fischer and Ayotte’s remarks reflect a vision for the Internet of Things that FTC Commissioner Maureen K. Ohlhausen has articulated in recent months. In fact, Sen. Ayotte specifically cited Ohlhausen in her remarks.

Ohlhausen has actually delivered several excellent speeches on these issues and has become one of the leading public policy thought leaders on the Internet of Things in the United States today. One of her first major speeches on these issues was her October 2013 address entitled, “The Internet of Things and the FTC: Does Innovation Require Intervention?” In that speech, Ohlhausen noted that, “The success of the Internet has in large part been driven by the freedom to experiment with different business models, the best of which have survived and thrived, even in the face of initial unfamiliarity and unease about the impact on consumers and competitors.”

She also issued a wise word of caution to her fellow regulators:

It is . . . vital that government officials, like myself, approach new technologies with a dose of regulatory humility, by working hard to educate ourselves and others about the innovation, understand its effects on consumers and the marketplace, identify benefits and likely harms, and, if harms do arise, consider whether existing laws and regulations are sufficient to address them, before assuming that new rules are required.

In this and other speeches, Ohlhausen has highlighted the various other remedies that already exist when things do go wrong, including FTC enforcement of “unfair and deceptive practices,” common law solutions (torts and class actions), private self-regulation and best practices, social pressure, and so on. (Note: Inspired by Ohlhausen’s approach, I devoted the final section of my big law review article on IoT issues to a deeper exploration of all those “bottom-up” solutions to privacy and security concerns surrounding the IoT and wearable tech.)

The Clinton Administration Vision

These three women have articulated what I regard as the ideal vision for fostering the growth of the Internet of Things. It should be noted, however, that their framework is really just an extension of the Clinton Administration’s outstanding vision for the Internet more generally.

In the 1997 Framework for Global Electronic Commerce, the Clinton Administration outlined its approach toward the Internet and the emerging digital economy. As I’ve noted many times before, the Framework was a succinct and bold market-oriented vision for cyberspace governance that recommended reliance upon civil society, contractual negotiations, voluntary agreements, and ongoing marketplace experiments to solve information age problems. Specifically, it stated that “the private sector should lead [and] the Internet should develop as a market driven arena not a regulated industry.” “[G]overnments should encourage industry self-regulation and private sector leadership where possible” and “avoid undue restrictions on electronic commerce.”

Sen. Ayotte specifically cited those Clinton principles in her speech and said, “I think those words, given twenty years ago at the infancy of the Internet, are today even more relevant as we look at the challenges and the issues that we continue to face as regulators and policymakers.”

I completely agree. This is exactly the sort of vision that we need to keep innovation moving forward to benefit consumers and the economy, and this also illustrates how IoT policy can be a nonpartisan effort.

Why does this matter so much? As I noted in this recent essay, thanks to the Clinton Administration’s bold vision for the Internet:

This policy disposition resulted in an unambiguous green light for a rising generation of creative minds who were eager to explore this new frontier for commerce and communications. . . . The result of this freedom to experiment was an outpouring of innovation. America’s info-tech sectors thrived thanks to permissionless innovation, and they still do today. An annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing, software, and digital technology.

In other words, America got policy right before, and we can get it right again to ensure we remain global innovation leaders. Patience, flexibility, and forbearance are the key policy virtues that nurture an environment conducive to entrepreneurial creativity, economic progress, and greater consumer choice.

Other policymakers should endorse the vision originally sketched out by the Clinton Administration and now so eloquently embraced and extended by Sen. Fischer, Sen. Ayotte, and Commissioner Ohlhausen. This is the path forward if we hope to realize the full potential of the Internet of Things.

New Paper on Privacy & Security Implications of the Internet of Things & Wearable Technology https://techliberation.com/2014/11/21/new-paper-on-privacy-security-implications-of-the-internet-of-things-wearable-technology/ https://techliberation.com/2014/11/21/new-paper-on-privacy-security-implications-of-the-internet-of-things-wearable-technology/#comments Fri, 21 Nov 2014 15:23:31 +0000 http://techliberation.com/?p=74973

The Mercatus Center at George Mason University has just released my latest working paper, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation.” The “Internet of Things” (IoT) generally refers to “smart” devices that are connected to both the Internet and other devices. Wearable technologies are IoT devices that are worn somewhere on the body and which gather data about us for various purposes. These technologies promise to usher in the next wave of Internet-enabled services and data-driven innovation. Basically, the Internet will be “baked in” to almost everything that consumers own and come into contact with.

Some critics are worried about the privacy and security implications of the Internet of Things and wearable technology, however, and are proposing regulation to address these concerns. In my new 93-page article, I explain why preemptive, top-down regulation would derail the many life-enriching innovations that could come from these new IoT technologies. Building on a recent book of mine, I argue that “permissionless innovation,” which allows new technology to flourish and develop in a relatively unabated fashion, is the superior approach to the Internet of Things.

As I note in the paper and my earlier book, if we spend all our time living in fear of the worst-case scenarios — and basing public policies on them — then best-case scenarios can never come about. As the old saying goes: nothing ventured, nothing gained. Precautionary principle-based regulation paralyzes progress and must be avoided.  We instead need to find constructive, “bottom-up” solutions to the privacy and security risks accompanying these new IoT technologies instead of top-down controls that would limit the development of life-enriching IoT innovations.

The better alternative is to deal with concerns creatively as they develop, using a balanced, layered approach  involving many different solutions, including: educational efforts, technological empowerment tools, social norms, public and watchdog pressure, industry best practices and self-regulation, transparency, torts and products liability law, and targeted enforcement of existing legal standards as needed.

Generally speaking, patience, humility, and forbearance by policymakers are crucial to allowing greater innovation and consumer choice in this arena. Importantly, policymakers should not forget that societal and individual adaptation will play a role here, just as it has during so many other turbulent technological transformations.

This article can be downloaded on my Mercatus Center page, on SSRN, or at ResearchGate. I am hoping to find a law or policy journal interested in publishing this paper soon. If you are with a journal and are interested, please contact me. [UPDATE 12/3/14: This paper has been accepted for publication in the Richmond Journal of Law & Technology, Vol. 21, Issue 6 (2015).]

Finally, if you are interested in this topic, you might want to flip through these slides I prepared for a presentation on this topic that I made at the Federal Communications Commission in September:

New Paper: “Removing Roadblocks to Intelligent Vehicles and Driverless Cars” https://techliberation.com/2014/09/17/new-paper-removing-roadblocks-to-intelligent-vehicles-and-driverless-cars/ https://techliberation.com/2014/09/17/new-paper-removing-roadblocks-to-intelligent-vehicles-and-driverless-cars/#comments Wed, 17 Sep 2014 15:03:42 +0000 http://techliberation.com/?p=74730

I’m pleased to announce that the Mercatus Center at George Mason University has just released my latest working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” This paper, which was co-authored with Ryan Hagemann, has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy.

In the paper, Hagemann and I explore the growing market for both “connected car” technologies as well as autonomous (or “driverless”) vehicle technology. We argue that intelligent-vehicle technology will produce significant benefits. Most notably, these technologies could save many lives. In 2012, 33,561 people were killed and 2,362,000 injured in traffic crashes, largely as a result of human error. Reducing the number of accidents by allowing intelligent vehicle technology to flourish would constitute a major public policy success. As Philip E. Ross noted recently at IEEE Spectrum, thanks to these technologies, “eventually it will be positively hard to use a car to hurt yourself or others.” The sooner that day arrives, the better.

These technologies could also have positive environmental impacts in the form of improved fuel economy, reduced traffic congestion, and reduced parking needs. They might also open up new mobility options for those who are unable to drive, for whatever reason. Any way you cut it, these are exciting technologies that promise to substantially improve human welfare.

Of course, as with any new disruptive technology, connected cars and driverless vehicles raise a variety of economic, social, and ethical concerns. Hagemann and I address some of the early policy concerns about these technologies (safety, security, privacy, liability, etc.) and we outline a variety of “bottom-up” solutions to ensure that innovation continues to flourish in this space. Importantly, we also argue that policymakers should keep in mind that individuals have gradually adapted to similar disruptions in the past and, therefore, patience and humility are needed when considering policy for intelligent-vehicle systems.

More generally, we note that the debate over intelligent vehicle technologies foreshadows many other tech policy debates to come in that it raises the larger question of what principle will guide the future of technological progress. Will “permissionless innovation” be our lodestar, allowing individuals to pursue a world of which they can, as of now, only dream? Or will “precautionary principle”-based reasoning prevail instead, driven by a desire to preserve the status quo?

To the maximum extent possible, we argue, policymakers should embrace permissionless innovation for intelligent vehicles. Creative minds–especially those most vociferously opposed to technological change–will always be able to concoct horrific-sounding scenarios about the future. Best-case scenarios will never develop if we are gripped by fear of the worst-case scenarios and try to preemptively plan for all of them with policy interventions.

This 55-page (double-spaced) working paper is available on the Mercatus Center website as well as SSRN, ResearchGate, and Scribd. In coming weeks and months, we’ll be writing more about the themes addressed in this paper. Stay tuned; things are unfolding rapidly in this highly innovative arena.

 

Slide Presentation: Policy Issues Surrounding the Internet of Things & Wearable Technology https://techliberation.com/2014/09/12/slide-presentation-policy-issues-surrounding-the-internet-of-things-wearable-technology/ https://techliberation.com/2014/09/12/slide-presentation-policy-issues-surrounding-the-internet-of-things-wearable-technology/#comments Fri, 12 Sep 2014 16:04:09 +0000 http://techliberation.com/?p=74721

On Thursday, it was my great pleasure to present a draft of my forthcoming paper, “The Internet of Things & Wearable Technology: Addressing Privacy & Security Concerns without Derailing Innovation,” at a conference that took place at the Federal Communications Commission on “Regulating the Evolving Broadband Ecosystem.” The 3-day event was co-sponsored by the American Enterprise Institute and the University of Nebraska College of Law.

The 65-page working paper I presented is still going through final peer review and copyediting, but I posted a very rough first draft on SSRN for conference participants. I expect the paper to be released as a Mercatus Center working paper in October, and then I hope to find a home for it in a law review. I will post the final version once it is released. [UPDATE: The final version of this working paper was released on November 19, 2014.]

In the meantime, however, I thought I would post the 46 slides I presented at the conference, which offer an overview of the nature of the Internet of Things and wearable technology, the potential economic opportunities that exist in this space, and the various privacy and security challenges that could hold this technological revolution back. I also outlined some constructive solutions to those concerns. I plan to be very active on these issues in coming months.

The Growing Conflict of Visions over the Internet of Things & Privacy https://techliberation.com/2014/01/14/the-growing-conflict-of-visions-over-the-internet-of-things-privacy/ https://techliberation.com/2014/01/14/the-growing-conflict-of-visions-over-the-internet-of-things-privacy/#comments Tue, 14 Jan 2014 20:32:44 +0000 http://techliberation.com/?p=74086

When Google announced it was acquiring digital thermostat company Nest yesterday, it set off another round of privacy and security-related technopanic talk on Twitter and elsewhere. Fear and loathing seemed to be the order of the day. It seems that each new product launch or business announcement in the “Internet of Things” space is destined to set off another round of Chicken Little hand-wringing. We are typically told that the digital sky will soon fall on our collective heads unless we act preemptively to somehow head-off some sort of pending privacy or security apocalypse.

Meanwhile, however, a whole heck of a lot of people are demanding more and more of these technologies, and American entrepreneurs are already engaged in heated competition with European and Asian rivals to be at the forefront of the next round of Internet innovation to satisfy those consumer demands. So, how is this going to play out?

This gets to what is becoming the defining policy issue of our time, not just for the Internet but for technology policy more generally: To what extent should the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? We can think of this as “the permission question,” and it is creating a massive rift between those who desire more preemptive, precautionary safeguards for a variety of reasons (safety, security, privacy, copyright, etc.) and those of us who continue to believe that permissionless innovation should be the guiding ethos of our age. The chasm between these two worldviews is only going to deepen in coming years as the pace of innovation around new technologies (the Internet of Things, wearable tech, driverless cars, 3D printing, commercial drones, etc.) continues to accelerate.

Sarah Kessler of Fast Company was kind enough to call me last night and ask for some general comments about Google buying Nest and she also sought out the comments of Marc Rotenberg of EPIC about privacy in the Internet of Things era more generally. Our comments provide a useful example of the divide between these two worldviews and foreshadow debates to come:

With an estimated 50 billion connected objects coming online by 2050, some see good reason to put policies in place that regulate the new categories of data they will collect about the people who use those products. “The basic problem with the Internet of Things, unless privacy safeguards are established up front, is that users will lose control over the data they generate,” Marc Rotenberg, the president of the Electronic Privacy Information Center, told Fast Company in an email. Others see the emerging category as a perfect reason to block omnibus attempts to regulate user data. “If we spend all of our time living in fear of hypothetical worst-case scenarios, then the best-case scenarios will never come about,” says Adam Thierer, a Senior Research Fellow at George Mason University’s Mercatus Center. “That’s the nature of how innovation works. You have to allow for risks and experimentation, and even accidents and failures, if you want to get progress.”

Last week, I wrote about this conflict of visions in my dispatch from the CES show and this topic is also the focus of my forthcoming eBook, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” To reiterate what I already said, my book will describe the future of the Internet of Things and all technology policy as a grand battle between the “precautionary principle” and “permissionless innovation.” The “precautionary principle” refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions. The other worldview, “permissionless innovation,” refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

While those adhering to the precautionary principle mindset tend to favor “top-down” legalistic approaches to solving those potential problems that might creep up, those of us who favor the permissionless innovation approach favor “bottom-up” solutions that evolve over time but do not interrupt the ongoing experimentation and innovation that consumers demand. What does a “bottom-up” approach mean in practice? Education and empowerment, social pressure, societal norms, voluntary self-regulation, and targeted enforcement of existing legal norms (especially through the common law) are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I” (i.e., permissioned) nature.

We really should not underestimate the power of norms and public pressure to “regulate” in this regard, perhaps even better than law, which tends to be too slow-moving to make much of a difference. In my book I spend a great deal of time talking about how other technological innovations have been shaped by social norms, public pressure, and press attention. The same will be true for the Internet of Things and the various new technologies I discuss in my book. People will gradually adapt to the new technological realities and integrate these new devices and services into their lives over time.

Perhaps, then, it will be the case that if Google does something particularly bone-headed with Nest, a public backlash will ensue. Or maybe some consumers will just reject Nest and look for other options, which is apparently what Rotenberg is doing according to the Fast Company article. Of course, as I noted in concluding the interview, others may act quite differently and accept Nest and other new Internet of Things technologies, even if there are some privacy or security downsides. As I told Sarah Kessler, while I was visiting the Consumer Electronics Show last week, I heard it was freezing back here in DC. If I had had a Nest in my house, perhaps Google Now could have alerted me to the dangerously low temps in my house and suggested that I raise the temp remotely before my pipes froze. As I noted to Kessler:

“Would that have been creepy?” he says. “To me it would have been helpful. So for everything that people regard as a negative, I can usually find a positive. And if there’s that balance there, then it should be left to individuals to decide for themselves how to decide that balance.”

Finally, since I often get accused of being some sort of nihilist in these debates, I want to make it clear that ethics should influence all these discussions, but I prefer that we not impose ethics in a heavy-handed, inflexible way through preemptive, proscriptive regulatory controls. It makes more sense to wait and see how things play out before regulating to address harms, once we figure out which ones are real. (See the second and third essays listed below for more on ethics and technological innovation.) But we absolutely need to be engaging in robust societal discussions about digital ethics, digital citizenship, privacy and security by design, and sensible online etiquette. I’ve spent a lifetime writing about the power of that approach in the context of online child safety and I think it is equally applicable for privacy and security-related matters. In particular, we need to talk to our kids and our future technologists and innovators about smarter digital habits that respect the safety, security, and privacy of others. Those conversations can help us chart a more sensible path forward without sacrificing the many benefits that accompany the ongoing technological revolution we are blessed to be experiencing today.


CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It? https://techliberation.com/2014/01/08/ces-2014-report-the-internet-of-things-arrives-but-will-washington-welcome-it/ https://techliberation.com/2014/01/08/ces-2014-report-the-internet-of-things-arrives-but-will-washington-welcome-it/#comments Wed, 08 Jan 2014 21:15:26 +0000 http://techliberation.com/?p=74061

With each booth I pass and presentation I listen to at the 2014 International Consumer Electronics Show (CES), it becomes increasingly evident that the “Internet of Things” era has arrived. In just a few short years, the Internet of Things (IoT) has gone from industry buzzword to marketplace reality. Countless new IoT devices are on display throughout the halls of the Las Vegas Convention Center this week, including various wearable technologies, smart appliances, remote monitoring services, autonomous vehicles, and much more.

This isn’t vaporware; these are devices or services that are already on the market or will launch shortly. Some will fail, of course, just as many other earlier technologies on display at past CES shows didn’t pan out. But many of these IoT technologies will succeed, driven by growing consumer demand for highly personalized, ubiquitous, and instantaneous services.

But will policymakers let the Internet of Things revolution continue or will they stop it dead in its tracks? Interestingly, not too many people out here in Vegas at the CES seem all that worried about the latter outcome. Indeed, what I find most striking about the conversation out here at CES this week versus the one about IoT that has been taking place in Washington over the past year is that there is a large and growing disconnect between consumers and policymakers about what the Internet of Things means for the future.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers. And that’s what has them so excited and ready to embrace these new technologies. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.

But at least so far, most consumers don’t seem to share the same worries. Instead, they are too busy shouting “More, More, More!” IoT technologies have generated enormous interest, and every projection I’ve seen so far shows that explosive growth can be expected across all classes of devices. ABI Research estimates that there are more than ten billion wirelessly connected devices in the market today, with more than thirty billion devices expected by 2020. Last year Cisco projected that by 2020 thirty-seven billion intelligent things would be connected and communicating, but it has now apparently revised that estimate upward to 40 or 50 billion. Thus, we are well on the way to a world where “everyone and everything will be connected to the network.”

Yet it remains unclear what the IoT public policy landscape will look like in coming years and what disposition lawmakers and regulators will adopt toward these amazing new technologies. Two distinct policy dispositions are clashing over what approach should govern the future of innovation in this space.

I discussed this tension during a CES panel this morning on “The Internet of Things and the Home of the Future.” It featured outstanding opening remarks by FTC Commissioner Maureen K. Ohlhausen, who made the case for regulatory humility and focusing on how these new technologies can empower individuals in important new ways. “The Internet has evolved in one generation from a network of electronically interlinked research facilities in the United States to one of the most dynamic forces in the global economy, in the process reshaping entire industries and even changing the way we interact on a personal level,” she noted. “And the Internet of Things offers the promise of even greater progress ahead for consumers and competition.” I strongly encourage you to read Commissioner Ohlhausen’s entire speech. It is terrific and sets exactly the right tone for these discussions.

After Commissioner Ohlhausen spoke, we had a panel discussion that was expertly moderated by tech policy guru Larry Downes and which included remarks from Robert M. McDowell (Hudson Institute), Jeff Hagins (SmartThings), Robert Pepper (Cisco), Marc Rogers (Lookout), and me.

When I spoke, I described the future of the Internet of Things as a grand battle of two alternative worldviews: the “precautionary principle” and “permissionless innovation.” The “precautionary principle” refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions. The other worldview, “permissionless innovation,” refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I’ll soon be releasing a new eBook about this conflict of visions. The book will be called, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom” and it should be out in the next few weeks. In it, I will explain how precautionary principle thinking is increasingly creeping into modern information technology policy discussions, explain how that is dangerous and must be rejected, and argue that policymakers should instead unapologetically embrace and defend the permissionless innovation vision — not just for the Internet but also for all new classes of networked technologies and platforms.

This intellectual tension is already evident in debates over the Internet of Things. While we are still very early in this debate, we can expect rising calls for preemptive regulatory controls on IoT technologies based on various safety, security, and especially privacy rationales.  If the precautionary principle mentality wins out and trumps the permissionless innovation ethos that has already powered the first wave of the digital revolution, it will have profound ramifications.

As I’ll note in my forthcoming eBook, preserving and extending the permissionless innovation ethos to the Internet of Things is not about “protecting corporate profits” or assisting any particular technology, industry sector, or set of innovators. Rather, preserving an environment in which permissionless innovation can flourish is about ensuring that individuals as both citizens and consumers continue to enjoy the myriad benefits that accompany an open, innovative information ecosystem. More profoundly, this general freedom to innovate is essential for powering the next great wave of industrial innovation and rejuvenating our dynamic, high-growth economy. Even more profoundly, this is about preserving social and economic freedom more generally while rejecting the central-planning mentality and methods that throughout history have stifled human progress and prosperity.

Safety, security, and privacy problems will continue to persist, of course, and we should work to find practical, “bottom-up” solutions to them. As I detail in my eBook, education and empowerment, social pressure, societal norms, voluntary self-regulation, transparency efforts, and targeted enforcement of existing legal norms (especially through the common law) are almost always superior to “top-down,” command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I” (i.e., permissioned) nature. Preemptive technological controls of that sort would limit new innovation in this space and sacrifice the many benefits that will flow to consumers from continued experimentation.

Those who advocate precautionary regulatory approaches to the Internet of Things should think through the consequences of preemptively prohibiting technological innovation and realize that not everyone shares their values, especially pertaining to privacy, which is a highly subjective concept that is often difficult to legislate around. We should instead find ways to work together to seek out those practical, bottom-up solutions that will help individuals, institutions, and society learn how to better cope with technological change over time. Using this approach, we can embrace our dynamic future together without doing permanent damage to our innovative minds and economy.

What’s at Stake with the FTC’s Internet of Things Workshop https://techliberation.com/2013/11/18/whats-at-stake-with-the-ftcs-internet-of-things-workshop/ https://techliberation.com/2013/11/18/whats-at-stake-with-the-ftcs-internet-of-things-workshop/#comments Tue, 19 Nov 2013 01:57:13 +0000 http://techliberation.com/?p=73855

Tomorrow, the Federal Trade Commission (FTC) will host an all-day workshop entitled, “Internet of Things: Privacy and Security in a Connected World.” [Detailed agenda here.] According to the FTC: “The workshop will focus on privacy and security issues related to increased connectivity for consumers, both in the home (including home automation, smart home appliances and connected devices), and when consumers are on the move (including health and fitness devices, personal devices, and cars).”

Where is the FTC heading on this front? This Politico story by Erin Mershon from last week offers some possible ideas. Yet it remains unclear whether this is just another inquiry into an exciting set of new technologies or if it is, as I worried in my recent comments to the FTC on this matter, “the beginning of a regulatory regime for a new set of information technologies that are still in their infancy.”

First, for those not familiar with the “Internet of Things,” this short new report from Daniel Castro & Jordan Misra of the Center for Data Innovation offers a good definition:

The “Internet of Things” refers to the concept that the Internet is no longer just a global network for people to communicate with one another using computers, but it is also a platform for devices to communicate electronically with the world around them. The result is a world that is alive with information as data flows from one device to another and is shared and reused for a multitude of purposes. Harnessing the potential of all of this data for economic and social good will be one of the primary challenges and opportunities of the coming decades.

The report goes on to offer a wide range of examples of new products and services that could fulfill this promise.

What I find somewhat worrying about the FTC’s sudden interest in the Internet of Things is that it opens the door for some regulatory-minded critics to encourage preemptive controls on this exciting new wave of digital age innovation, based almost entirely on hypothetical worst-case scenarios they have conjured up. And plenty of those boogeyman scenarios are floating around already because the Internet of Things has created a potential perfect storm of four major information policy concerns: online safety, privacy, security, and even intellectual property issues. You can find concerned critics from each of those quarters already wringing their hands about what the Internet of Things means for their pet issues.

This is why in both my filing to the agency and in an upcoming eBook, I discuss the danger of letting “precautionary principle” reasoning trump the alternative paradigm of “permissionless innovation.” As I’ve explained here before, as well as in this longer law review article, the precautionary principle generally holds that, because a given new technology could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harms.

The problem with letting such precautionary thinking guide policy is that it poses a serious threat to technological progress, economic entrepreneurialism, and human prosperity. Under an information policy regime guided at every turn by a precautionary principle, technological innovation would be impossible because of fear of the unknown; hypothetical worst-case scenarios would trump all other considerations. Social learning and economic opportunities become far less likely, perhaps even impossible, under such a regime. In practical terms, it means fewer services, lower quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.

For these reasons, to the maximum extent possible, the default position toward new forms of technological innovation should be innovation allowed. This policy norm is better captured in the well-known Internet ideal of “permissionless innovation,” or the general freedom to experiment and learn through trial and error.

Which leads back to the FTC workshop tomorrow. Which path will the agency head down? If the recent comments of FTC Chairwoman Edith Ramirez are any indication, there is certainly a healthy appetite for precautionary principle policymaking, at least as it pertains to “big data.” As I noted here in a critique of one of her recent speeches, Chairwoman Ramirez has offered “a rather succinct articulation of precautionary principle thinking as applied to modern data collection practices.”

She worried that “‘big data’ leads to the indiscriminate collection of personal information,” and that “the indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices,” she continued. She went on to argue that “Information that is not collected in the first place can’t be misused” and then suggested a parade of horribles that will befall us if such data collection is allowed at all. So, it would not be surprising to see her extend that sort of precautionary reasoning to the Internet of Things since all those fears would apply equally to it.

A better approach can be found in some remarks delivered by Ramirez’s fellow FTC Commissioner Maureen K. Ohlhausen. In an important speech last month entitled, “The Internet of Things and the FTC: Does Innovation Require Intervention?” Ohlhausen noted that, “The success of the Internet has in large part been driven by the freedom to experiment with different business models, the best of which have survived and thrived, even in the face of initial unfamiliarity and unease about the impact on consumers and competitors.” This reflects Ohlhausen’s general embrace of permissionless innovation reasoning and a rejection of the precautionary principle mindset articulated by FTC Chairwoman Ramirez.

More importantly, in her speech, Commissioner Ohlhausen went on to highlight another crucial point about why the precautionary mindset is dangerous when enshrined into laws or regulations. Put simply, many elites and regulatory advocates ignore regulator irrationality or regulatory ignorance. That is, they spend so much time focused on the supposed irrationality of consumers and their openness to persuasion or “manipulation” that they ignore the more concerning problem of the irrationality or ignorance of those who (incorrectly) believe they are always in the best position to solve every complex problem. Regulators simply do not possess the requisite knowledge to perfectly plan for every conceivable outcome. This is particularly true for information technology markets, which generally evolve much more rapidly than other sectors, and especially more rapidly than the law itself.

That insight leads Ohlhausen to issue a wise word of caution to her fellow regulators:

It is [] vital that government officials, like myself, approach new technologies with a dose of regulatory humility, by working hard to educate ourselves and others about the innovation, understand its effects on consumers and the marketplace, identify benefits and likely harms, and, if harms do arise, consider whether existing laws and regulations are sufficient to address them, before assuming that new rules are required.

That is absolutely right, and this again makes it clear that Commissioner Ohlhausen’s approach to technological innovation is consistent with the permissionless innovation approach while Chairwoman Ramirez’s is based on precautionary principle thinking. This conflict of visions dominates almost all policy debates over new technology today, even if it is not always on such vivid display as it is in this case.

This also makes it abundantly clear just what is at stake as the FTC embarks on its exploration of the Internet of Things. Will we continue to embrace and defend the philosophy that made America’s digital economy the envy of the world (i.e., “permissionless innovation”), or will we be paralyzed by fear of the unknown and hypothetical worst-case scenarios? As I have said here many times before, living in constant fear of such worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about.

So, stay tuned. The fight over the Internet of Things promises to be one of the most important public policy battles in the technology policy arena for many years to come.


This issue will be the focus of my forthcoming eBook, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” but until that is released, here are a few other recommended readings on the topic:

Blog posts:

Testimony / Filings:

Journal articles & book chapters:

