National Journal reports that the Department of Commerce (NTIA) will, at a Senate Commerce Committee hearing today, call for a “consumer privacy bill of rights”—a euphemism for sweeping privacy regulation:
“Having carefully reviewed all stakeholder comments to the Green Paper, the department has concluded that the U.S. consumer data privacy framework will benefit from legislation to establish a clearer set of rules for the road for businesses and consumers, while preserving the innovation and free flow of information that are hallmarks of the Internet,” [NTIA Administrator Larry] Strickling said in his prepared testimony obtained by Tech Daily Dose.
In other words: “We’ve taken the time to think this through very carefully and have reluctantly come to the conclusion that regulation is necessary.” Sorry, but I’m just not buying it—not just the wisdom of the recommendation, but also the process that produced it. Let’s consider the timeline here:
- October 27, 2010 – NTIA Administrator Strickling announces Green Paper is coming but says nothing about timing and little about substance
- December 16, 2010 – NTIA/Commerce releases its Privacy Green Paper
- January 28, 2011 – deadline for public comments (28 non-holiday business days later)
- ??? – Commerce decides regulation is necessary
- March 16, 2011 – Commerce is ready to ask Congress for legislation (31 non-holiday business days later)
The Commerce Department gave the many, many interested parties the worst four weeks of the year—including Christmas, New Year’s and Martin Luther King Day—to digest and comment on an 88-page, ~31,000-word tome of a report on proposed regulation of how information flows in our… well, information economy. Oh, and did I mention that those same parties had already been given a deadline of January 31, 2011 to comment on the FTC’s 122-page, ~34,000-word privacy report back on December 1 (too bad for those celebrating Hanukkah)? In fairness, the FTC did, on January 21, extend its deadline to February 18—but that hardly excuses the Commerce Department’s rush to judgment.
One of the arguments I’ve been making about proposed cybersecurity regulation and legislation is that, despite a lot of hype about a massive online threat, there is little evidence to corroborate the dire warnings. Almost every article I’ve read revealing a breach or cyberattack quotes only anonymous government sources; defense contractors and politicians then point to these articles and proclaim, “If you only knew what we know, you’d be taking action now!”
Fear, however, is a poor driver of public policy. Before we start telling private companies how to run their security, we should analyze the threat and assess whether there is a legitimate concern and whether government could do a better job. That’s impossible as long as most evidence of a threat is classified.
So I’m glad to see former NSA and CIA chief Gen. Michael Hayden call for less secrecy in order to get better analysis. In the new issue of Strategic Studies Quarterly, he writes [PDF]:
Let me be clear: This stuff is overprotected. It is far easier to learn about physical threats from US government agencies than to learn about cyber threats. In the popular culture, the availability of 10,000 applications for my smart phone is viewed as an unalloyed good. It is not—since each represents a potential vulnerability. But if we want to shift the popular culture, we need a broader flow of information to corporations and individuals to educate them on the threat. To do that we need to recalibrate what is truly secret. Our most pressing need is clear policy, formed by shared consensus, shaped by informed discussion, and created by a common body of knowledge. With no common knowledge, no meaningful discussion, and no consensus . . . the policy vacuum continues. This will not be easy, and in the wake of WikiLeaks it will require courage; but, it is essential and should itself be the subject of intense discussion. Who will step up to lead?
Who indeed. Congress may be getting secret briefings that outline a potential cyberthreat. If so, members should recognize that they may be getting only one view of the issue. Moreover, the people on whose behalf they are legislating deserve a clear understanding of the risks against which Congress might legislate. “Trust us” is not good enough. By reducing the over-classification Hayden writes about, Congress could allow economists, computer scientists, and other academics to delve into the weeds, determine the true nature of the threat, and assess whether a market failure exists that calls for government intervention.
Over at Neighborhood Effects, the Mercatus Center’s state and local policy blog, my colleague Dan Rothschild compares wireless taxes to sin taxes. His analysis is too good not to reprint here in large part:
The purpose of taxes is to raise money for necessary governmental functions. To that end, economists frequently prescribe that rates be low and broad in order to minimize the impact on consumers’ behavior — so-called tax neutrality. This is because taxation should be about raising revenue, not changing behavior.
Some economists tweak this prescription through the Ramsey Rule, which holds (in a nutshell) that the more influenced by tax rates consumers are (demand elasticity), the less something should be taxed (and vice versa).
Sin taxes are the opposite; they’re about reducing a behavior that policy makers judge to be morally offensive (like many people view smoking).
Relatedly, Pigouvian taxes seek to bring the costs to society (the social cost) in line with the costs borne by a buyer. (For instance, some people advocate higher alcohol taxes on the theory that drinkers impose costs on others, though this argument is fraught with difficulties.)
Cell phone taxes above regular sales taxes levied by states and localities do not fit any of the four rationales provided here. On the one hand, taxing them at over twenty percent of a user’s bill is hardly neutral. Nor does it likely fit the Ramsey Rule prescription; consumers respond to cell phone taxes by buying less of the service or by avoiding taxes by pretending to move. (Just look around you at how consumer take-up and use of cell phones has changed as prices have fallen over the last decade.) Cell phones are not sinful or offensive. And there’s no serious case to be made that the social cost of cell phones exceeds the cost borne by users. In short, by any principle of public finance, high cell phone taxes are a bad bad bad idea.
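For readers who want the Ramsey Rule stated more precisely, here is the standard textbook “inverse elasticity” version (my gloss, not part of Rothschild’s post): the optimal tax rate on a good is inversely proportional to its price elasticity of demand,

$$\frac{t_i}{p_i} \;\propto\; \frac{1}{\varepsilon_i},$$

where $t_i$ is the per-unit tax on good $i$, $p_i$ its price, and $\varepsilon_i$ its price elasticity of demand. A good whose consumption barely responds to price can bear a high rate with little distortion; a price-sensitive good like wireless service should, on this logic, be taxed lightly.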
Now here’s hoping he takes this awesome analysis and turns it into a paper!
On numerous occasions here and elsewhere I have cited the enormous influence that Virginia Postrel’s 1998 book, The Future and Its Enemies, has had on me. Her “dynamist” versus “stasis” paradigm helps us frame and better understand almost all debates about technological progress. I cannot recommend that book highly enough.
In her latest Wall Street Journal column, Postrel considers what makes the iPad such a “magical” device, and in doing so she takes on the logic set forth in Jonathan Zittrain’s 2008 book, The Future of the Internet and How to Stop It, although she doesn’t cite the book by name in her column. You will recall that in that book and his subsequent essays, Prof. Zittrain made Steve Jobs and his iPhone out to be the great enemy of digital innovation — at least as Zittrain defined it. How did Zittrain reach this astonishing conclusion and manage to turn Jobs into a pariah and his devices into the supposed enemy of innovation? It came down to “generativity,” Zittrain said, by which he meant technologies or networks that invite or allow tinkering and all sorts of creative uses. Zittrain worships general-purpose personal computers and the traditional “best efforts” Internet. By contrast, he decried “sterile, tethered” digital “appliances” like the iPhone, which he claimed limited generativity and innovation, mostly because of their generally closed architecture.
In her column, Postrel agrees that the iPad is every bit as closed as Zittrain feared iPhone successor devices would be. She notes: “customers haven’t the foggiest idea how the machine works. The iPad is completely opaque. It is a sealed box. You can’t see the circuitry or read the software code. You can’t even change the battery.” But Postrel goes on to explain why the hand-wringing about perfect openness is generally overblown and, indeed, more than a bit elitist.
Twitter could be in for a world of potential pain. Regulatory pain, that is. The company’s announcement on Friday that it would soon be cracking down on the uses of its API by third parties is raising eyebrows in cyberspace and, if recent regulatory history is any indicator, this high-tech innovator could soon face some heat from regulatory advocates and public policy makers. If this thing goes down as I describe it below, it will be one hell of a fight, one that once again features warring conceptions of “Internet freedom” butting heads over the question of whether Twitter should be forced to share its API with rivals via some sort of “open access” regulatory regime or, in particular, “API neutrality.” I’ll explore that possibility in this essay. First, a bit of background.
Understanding Forced Access Regulation
In the field of communications law, the dominant public policy fight of the past 15 years has been the battle over “open access” and “neutrality” regulation. Generally speaking, open access regulations demand that a company share its property (networks, systems, devices, or code) with rivals on terms established by law. Neutrality regulation is a variant of open access regulation; it also requires that systems be used in ways specified by law, but usually without the physical sharing requirements. Both forms of regulation derive from traditional common carriage principles and regulatory regimes. Critics of such regulation, a group that most definitely includes me, decry the inefficiencies associated with such “forced access” regimes, as we prefer to label them. Forced access regulation also raises certain constitutional issues related to First and Fifth Amendment rights of speech and property.

With all the attention on net neutrality this week, I thought I’d bring your attention to a debate on the-issue-that-won’t-go-away between Tom Hazlett and Tim Wu, which took place earlier this year at Harvard University. Below is the MP3 audio of the event, but if you want it in living color, check out the video at the Information Economy Project website.
Hazlett vs. Wu Net Neutrality Debate – Jan. 2011
Yet another hearing on privacy issues has been slated for this coming Wednesday, March 16th. This latest one is in the Senate Commerce Committee and it is entitled “The State of Online Consumer Privacy.” As I’m often asked by various House and Senate committee staffers to help think of good questions for witnesses, I’m listing a few here that I would love to hear answered by any Federal Trade Commission (FTC) or Dept. of Commerce (DoC) officials testifying. You will recall that both agencies released new privacy “frameworks” late last year and seem determined to move America toward a more “European-ized” conception of privacy regulation. [See our recent posts critiquing the reports here.] Here are a few questions that should be put to the FTC and DoC officials, or those who support the direction they are taking us. Please feel free to suggest others:
- Before implying that we are experiencing market failure, why hasn’t either the FTC or DoC conducted a thorough review of online privacy policies to evaluate how well organizational actions match up with promises made in those policies?
- To the extent any sort of cost-benefit analysis was done internally before the release of these reports, has an effort been made to quantify the potential size of the hidden “privacy tax” that new regulations like “Do Not Track” could impose on the market?
- Has the impact of new regulations on small competitors or new entrants in the field been considered? Has any attempt been made to quantify how much less entry / innovation would occur as a result of such regulation?
- Were any economists from the FTC’s Bureau of Economics consulted before the new framework was released? Did the DoC consult any economists?
- Why do FTC and DoC officials believe that citing unscientific public opinion polls from regulatory advocacy organizations serves as a surrogate for serious cost-benefit analysis or an investigation into how well privacy policies actually work in the marketplace?
- If they refuse to conduct more comprehensive internal research, have the agencies considered contracting with external economists to build a body of research looking into these issues (as the Federal Communications Commission did a decade ago in its media ownership proceeding)?
- Has either agency attempted to determine consumers’ “willingness to pay” for increased privacy regulation?
- More generally, where is the “harm,” and aren’t there plenty of voluntary privacy-enhancing tools out there that privacy-sensitive users can tap to shield their digital footprints, if they feel so inclined?
You have to wade through a lot to reach the good news at the end of Time reporter Joel Stein’s article about “data mining”—or at least data collection and use—in the online world. There’s some fog right there: what he calls “data mining” is actually ordinary one-to-one correlation of bits of information, not mining historical data to generate patterns that are predictive of present-day behavior. (See my data mining paper with Jeff Jonas to learn more.) There is, of course, some true data mining in the online advertising industry’s use of the data consumers emit online.
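To make the distinction concrete, here is a minimal, purely illustrative Python sketch. Every name, record, and “segment” in it is invented; real ad systems are vastly more elaborate. The point is only to show the difference in kind between the two practices:

```python
from collections import Counter

# 1) One-to-one correlation: tying known facts together by a shared key.
# No pattern is inferred; a stored record is simply fetched.
profiles = {"cookie-123": {"zip": "22201", "last_search": "running shoes"}}

def lookup(cookie_id):
    """Plain retrieval of previously collected data."""
    return profiles.get(cookie_id)

# 2) Data mining proper: generalizing from historical behavior to predict
# what a present-day visitor is likely to do.
history = [
    ("sports_reader", "clicked"),
    ("sports_reader", "clicked"),
    ("news_reader", "ignored"),
    ("sports_reader", "ignored"),
]

def predict(segment):
    """Crude pattern extraction: the most common past outcome for a segment."""
    outcomes = Counter(outcome for seg, outcome in history if seg == segment)
    return outcomes.most_common(1)[0][0] if outcomes else "unknown"

print(lookup("cookie-123"))      # {'zip': '22201', 'last_search': 'running shoes'}
print(predict("sports_reader"))  # 'clicked'
```

The first operation is a lookup any database can do; only the second infers something new about a person from patterns in past data. Most of what Stein describes is the first.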
Next, get over Stein’s introductory language about the “vast amount of data that’s being collected both online and off by companies in stealth.” That’s some kind of stealth if a reporter can write a thorough and informative article in Time magazine about it. Does the moon rise “in stealth” if you haven’t gone outside at night and looked at the sky? Perhaps so.
Now take a hard swallow as you read about Senator John Kerry’s (D-Mass.) plans for government regulation of the information economy.
Kerry is about to introduce a bill that would require companies to make sure all the stuff they know about you is secured from hackers and to let you inspect everything they have on you, correct any mistakes and opt out of being tracked. He is doing this because, he argues, “There’s no code of conduct. There’s no standard. There’s nothing that safeguards privacy and establishes rules of the road.”
Securing data from hackers and letting people correct mistakes in data about them are kind of equal-and-opposite things. If you’re going to make data about people available to them, you’re going to create opportunities for other people—it won’t even take hacking skills, really—to impersonate them, gather private data, and scramble data sets.
What I hoped would be a short blog post to accompany the video of Geoff Manne’s and my appearances this week on PBS’s “Ideas in Action with Jim Glassman” turned out to be a very long article, which I’ve published over at Forbes.com.
I apologize to Geoff for taking an innocent comment he made on the broadcast completely out of context, and to everyone else who chooses to read the 2,000 words I’ve written in response.
So all I’ll say here is that Geoff Manne and I taped the program in January, as part of the launch of TechFreedom and of “The Next Digital Decade.” Enjoy!
Few people have experienced just how oppressive “privacy” regulation can be quite so directly as Peter Fleischer, Google’s Global Privacy Counsel. Early last year, Peter was convicted by an Italian court because Italian teenagers used Google Video to host a video they shot of themselves bullying an autistic kid—even though he didn’t know about the video until after Google took it down.
Of course, imposing criminal liability on corporate officers for failing to take down user-generated content is just a more extreme form of the more popular concept of holding online intermediaries liable for failing to take down content that is allegedly defamatory, bullying, invasive of a user’s privacy, etc. Both have the same consequence: Given the incredible difficulty of evaluating such complaints, sites that host UGC will tend simply to take it down upon receiving complaints—thus being forced to censor their own users.
Now Peter has turned his withering analysis on the muddle that is Europe’s popular “Right to be Forgotten.” Adam noted the inherent conflict between that supposed “right” and our core values of free speech. It’s exactly the kind of thing UCLA Law Prof. Eugene Volokh had in mind when he asked, “what is your ‘right to privacy’ but a right to stop me from observing you and speaking about you?” Peter hits the nail on the head:
More and more, privacy is being used to justify censorship. In a sense, privacy depends on keeping some things private, in other words, hidden, restricted, or deleted. And in a world where ever more content is coming online, and where ever more content is find-able and share-able, it’s also natural that the privacy counter-movement is gathering strength. Privacy is the new black in censorship fashions. It used to be that people would invoke libel or defamation to justify censorship about things that hurt their reputations. But invoking libel or defamation requires that the speech not be true. Privacy is far more elastic, because privacy claims can be made on speech that is true.