Claire Cain Miller of The New York Times posted an interesting story yesterday noting how “Technology Has Made Life Different, but Not Necessarily More Stressful.” Her essay builds on a new study by researchers at the Pew Research Center and Rutgers University on “Social Media and the Cost of Caring.” Miller’s essay and this new Pew/Rutgers study indirectly make a point that I am always discussing in my own work, but that is often ignored or downplayed by many technology critics, namely: We humans have repeatedly proven quite good at adapting to technological change, even when it entails some heartburn along the way.

The major takeaway of the Pew/Rutgers study was that, “social media users are not any more likely to feel stress than others, but there is a subgroup of social media users who are more aware of stressful events in their friends’ lives and this subgroup of social media users does feel more stress.” Commenting on the study, Miller of the Times notes:

Fear of technology is nothing new. Telephones, watches and televisions were similarly believed to interrupt people’s lives and pressure them to be more productive. In some ways they did, but the benefits offset the stressors. New technology is making our lives different, but not necessarily more stressful than they would have been otherwise. “It’s yet another example of how we overestimate the effect these technologies are having in our lives,” said Keith Hampton, a sociologist at Rutgers and an author of the study. . . . Just as the telephone made it easier to maintain in-person relationships but neither replaced nor ruined them, this recent research suggests that digital technology can become a tool to augment the relationships humans already have.

I found this of great interest because I have written about how humans assimilate new technologies into their lives and become more resilient in the process as they learn various coping techniques. Continue reading →

Regular readers know that I can get a little feisty when it comes to the topic of “regulatory capture,” which occurs when special interests co-opt policymakers or political bodies (regulatory agencies, in particular) to further their own ends. As I noted in my big compendium, “Regulatory Capture: What the Experts Have Found”:

While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity.  Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism.

Indeed, the more I highlight the problem of regulatory capture and offer concrete examples of it in practice, the more push-back I get from true believers in the idea of “independent” agencies. Even if I can get them to admit that history offers countless examples of capture in action, and that a huge number of scholars of all persuasions have documented this problem, they will continue to insist that WE CAN DO BETTER! and that it is just a matter of having THE RIGHT PEOPLE! who will TRY HARDER!

Well, maybe. But I am a realist and a believer in historical evidence. And the evidence shows, again and again, that when Congress (a) delegates broad, ambiguous authority to regulatory agencies, (b) exercises very limited oversight over that agency, and then, worse yet, (c) allows that agency’s budget to grow without any meaningful constraint, then the situation is ripe for abuse. Specifically, where unchecked power exists, interests will look to exploit it for their own ends.

In any event, all I can do is to continue to document the problem of regulatory capture in action and try to bring it to the attention of pundits and policymakers in the hope that we can start the push for real agency oversight and reform. Today’s case in point comes from a field I have been covering here a lot over the past year: commercial drone innovation. Continue reading →

Over at the International Association of Privacy Professionals (IAPP) Privacy Perspectives blog, I have two “Dispatches from CES 2015” up. (#1 & #2) While I was out in Vegas for the big show, I had a chance to speak on a panel entitled, “Privacy and the IoT: Navigating Policy Issues.” (Video can be found here. It’s the second one on the video playlist.) Federal Trade Commission (FTC) Chairwoman Edith Ramirez kicked off that session and stressed some of the concerns she and others share about the Internet of Things and wearable technologies in terms of the privacy and security issues they raise.

Before and after our panel discussion, I had a chance to walk the show floor and take a look at the amazing array of new gadgets and services that will soon be hitting the market. A huge percentage of the show floor space was dedicated to IoT technologies, and wearable tech in particular. But the show also featured many other amazing technologies that promise to bring consumers a wealth of new benefits in coming years. Of course, many of those technologies will also raise privacy and security concerns, as I noted in my two essays for IAPP. Continue reading →

President Obama recently announced his wish for the FCC to preempt state laws that make building public broadband networks harder. Per the White House, nineteen states “have held back broadband access . . . and economic opportunity” by having onerous restrictions on municipal broadband projects.

Much of what the White House claims is PR nonsense. Most of these so-called state restrictions on public broadband are reasonable considering the substantial financial risk public networks pose to taxpayers. Minnesota and Colorado, for instance, require approval from local voters before spending money on a public network. Nevada’s “restriction” is essentially that public broadband is only permitted in the neediest, most rural parts of the state. Some states don’t allow utilities to provide broadband because utilities have a nasty habit of raising, say, everyone’s electricity bills when a money-losing utility broadband network fails to live up to revenue expectations. And so on. Continue reading →

I want to highlight an important new blog post (“Slow Down That Runaway Ethical Trolley“) on the ethical trade-offs at work with autonomous vehicle systems by Bryant Walker Smith, a leading expert on these issues. Writing over at Stanford University’s Center for Internet and Society blog, Smith notes that, while serious ethical dilemmas will always be present with such technologies, “we should not allow the perfect to be the enemy of the good.” He notes that many ethical philosophers, legal theorists, and media pundits have recently been actively debating variations of the classic “Trolley Problem,” and its ramifications for the development of autonomous or semi-autonomous systems. (Here’s some quick background on the Trolley Problem, a thought experiment involving the choices made during various no-win accident scenarios.) Commenting on the increased prevalence of the Trolley Problem in these debates, Smith observes that:

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

That’s a great question, and one that Ryan Hagemann and I put some thought into as part of our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” Continue reading →

Many readers will recall the telecom soap opera featuring the GPS industry and LightSquared, which ended in LightSquared’s bankruptcy. Economist Thomas W. Hazlett (who is now at Clemson, after a long tenure at the GMU School of Law) and I wrote an article published in the Duke Law & Technology Review titled “Tragedy of the Regulatory Commons: LightSquared and the Missing Spectrum Rights.” The piece documents LightSquared’s ambitions and dramatic collapse. Contrary to popular reporting on this story, this was not a failure of technology. We make the case that, instead, the FCC’s method of rights assignment led to the demise of LightSquared and deprived American consumers of a new nationwide wireless network. Our analysis has important implications as the FCC and Congress seek to make wide swaths of spectrum available for unlicensed devices. Namely, our paper suggests that the top-down administrative planning model is increasingly harming consumers and delaying new technologies.

Read commentary from the GPS community about LightSquared and you’ll get the impression LightSquared is run by rapacious financiers (namely CEO Phil Falcone) who were willing to flout FCC rules and endanger thousands of American lives with their proposed LTE network. LightSquared filings, on the other hand, paint the GPS community as defense-backed dinosaurs who abused the political process to protect their deficient devices from an innovative entrant. As is often the case, it’s more complicated than these morality plays. We don’t find villains in this tale, simply destructive rent-seeking triggered by poor FCC spectrum policy.

We avoid assigning fault to either LightSquared or GPS, but we stipulate that there were serious interference problems between LightSquared’s network and GPS devices. Interference is not an inherently intractable problem, however; it is resolved every day in other circumstances. The problem here was intractable because GPS users (including government users) are dispersed and unlicensed, and could not coordinate and bargain with LightSquared when problems arose. There is no feasible way for GPS companies to track down and compel users to adopt more efficient devices, for instance, even if LightSquared compensated them for the hassle. Knowing that GPS mitigation was infeasible, LightSquared’s only recourse after GPS users objected to the new LTE network was the political and regulatory process, a fight LightSquared lost badly. The biggest losers, however, were consumers, who were deprived of another wireless broadband network because FCC spectrum assignment prevented win-win bargaining between licensees. Continue reading →

I’ve spent much of the past year studying the potential public policy ramifications associated with the rise of the Internet of Things (IoT). As I was preparing some notes for my Jan. 6th panel discussion on “Privacy and the IoT: Navigating Policy Issues” at the 2015 CES show, I went back and collected all my writing on IoT issues so that I would have everything in one place. Thus, down below I have listed most of what I’ve done over the past year or so. Most of this writing is focused on the privacy and security implications of the Internet of Things, and wearable technologies in particular.

I plan to stay on top of these issues in 2015 and beyond because, as I noted when I spoke on a previous CES panel on these issues, the Internet of Things finds itself at the center of what we might think of as a perfect storm of public policy concerns: privacy, safety, security, intellectual property, economic / labor disruptions, automation concerns, wireless spectrum issues, technical standards, and more. When a new technology raises one or two of these policy concerns, innovators in those sectors can expect some interest and inquiries from lawmakers or regulators. But when a new technology potentially touches all of these issues, innovators in that space can expect an avalanche of attention and a potential world of regulatory trouble. Moreover, it sets the stage for a grand “clash of visions” about the future of IoT technologies that will continue to intensify in coming months and years.

That’s why I’ll be monitoring developments closely in this field going forward. For now, here’s what I’ve done on this issue as I prepare to head out to Las Vegas for another CES extravaganza that promises to showcase so many exciting IoT technologies. Continue reading →

Hack Hell


2014 was quite the year for high-profile hackings and puffed-up politicians trying to out-ham each other on who is tougher on cybercrime. I thought I’d assemble some of the year’s worst hits to ring in 2015.

In no particular order:

Home Depot: The 2013 Target breach that leaked around 40 million customer financial records was unceremoniously topped by Home Depot’s breach of over 56 million payment cards and 53 million email addresses in July. Both companies fell prey to similar infiltration tactics: the hackers obtained passwords from a vendor of each retail giant and exploited a vulnerability in the Windows OS to install malware in the firms’ self-checkout lanes that collected customers’ credit card data. Millions of customers became vulnerable to phishing scams and credit card fraud—with the added headache of changing payment card accounts and updating linked services. (Your intrepid blogger was mysteriously locked out of Uber for a harrowing two months before realizing that my linked bank account had changed thanks to the Home Depot hack, and I had no way to log back in without a tedious customer service call. Yes, I’m still miffed.)

The Fappening: 2014 was a pretty good year for creeps, too. Without warning, the prime celebrity booties of popular starlets like Scarlett Johansson, Kim Kardashian, Kate Upton, and Ariana Grande mysteriously flooded the Internet in the September event crudely immortalized as “The Fappening.” Apple quickly jumped to investigate its iCloud system that hosted the victims’ stolen photographs, announcing shortly thereafter that the “celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions” rather than any flaw in its system. The sheer volume of material and the caliber of the celebrities targeted suggest this was not the work of a lone wolf, but rather a chain reaction of leaks collected over time and triggered by one larger dump. For what it’s worth, some dude on 4chan claimed the Fappening was the product of an “underground celeb n00d-trading ring that’s existed for years.” While the event prompted a flurry of discussion about online misogyny, content host ethics, and legalistic tugs-of-war over DMCA takedown requests, it unfortunately did not generate the productive conversation about good privacy and security practices that I had initially hoped for.

The Snappening: The celebrity-targeted Fappening was followed by the layperson’s “Snappening” in October, when almost 100,000 photos and 10,000 personal videos sent through the popular Snapchat messaging service, some of them including depictions of underage nudity, were leaked online. The hackers did not target Snapchat itself, but instead exploited a third-party client called SnapSave that allowed users to save images and videos that would normally disappear after a certain amount of time on the Snapchat app. (Although Snapchat doesn’t exactly have the best security record anyway: in 2013, contact information for 4.6 million of its users was leaked online before the service landed in hot water with the FTC earlier this year for “deceiving” users about their privacy practices.) The hackers gained access to a 13GB library of old Snapchat messages and dumped the images on a searchable online directory. As with the Fappening, discussion surrounding the Snappening tended to prioritize scolding service providers over promoting good personal privacy and security practices to consumers.

Continue reading →

As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy. Continue reading →

This morning, a group of organizations led by Citizens for Responsibility and Ethics in Washington (CREW), R Street, and the Sunlight Foundation released a public letter to House Speaker John Boehner and Minority Leader Nancy Pelosi calling for enhanced congressional oversight of U.S. national security surveillance policies.

The letter—signed by more than fifty organizations, among them the Electronic Frontier Foundation, the Competitive Enterprise Institute, and the Brennan Center for Justice at the New York University School of Law, as well as a handful of individuals, including Pentagon Papers whistleblower Daniel Ellsberg—expresses deep concerns about the expansive scope and limited accountability of the intelligence activities and agencies famously exposed by whistleblower Edward Snowden in 2013. The letter states:

Congress is responsible for authorizing, overseeing, and funding these programs. In recent years, however, the House of Representatives has not always effectively performed its duties.

The time for modernization is now. When the House convenes for the 114th Congress in January and adopts rules, the House should update them to enhance opportunities for oversight by House Permanent Select Committee on Intelligence (“HPSCI”) members, members of other committees of jurisdiction, and all other representatives. The House should also consider establishing a select committee to review intelligence activities since 9/11. We urge the following reforms be included in the rules package.

The proposed modernization reforms include:

1) modernizing HPSCI membership to more accurately reflect House interests by allowing chairs and ranking members of other committees with intelligence jurisdiction to select a designee on HPSCI;

2) allowing each HPSCI Member to designate a staff member of his or her choosing to represent their interests on the committee, as is the practice in the Senate;

3) making all unclassified intelligence reports quickly available to the public;

4) improving the speed and transparency of HPSCI’s responsiveness to member requests for information; and

5) improving general HPSCI transparency by better informing members of upcoming closed hearings, legislative markups, and other committee activities.

The groups also urge reforms to empower all members of Congress to be informed of and involved with executive intelligence agencies’ activities. They are: Continue reading →