April 2012

The Cato Institute’s jobs page has a new posting. If you have the right mix of data/technical skillz, public policy knowledge, love of freedom, and vim, this could be your chance to advance the ball on government transparency! [Added: For more background on Cato’s transparency work, see this and this.]

Data Curator, Center on Information Policy Studies

The Cato Institute seeks a data curator to support its government data transparency program. The successful candidate will perform a variety of functions that translate government documents and activities into semantically rich, machine-readable data. Major duties will include reading legislative documents, marking them up with semantic information, and identifying opportunities for automated identification and extraction of semantic information in documents. The candidate will also oversee the data entry process and train and supervise others to perform data entry. The ideal candidate will have a college degree, preferably in computer science and/or political science, and experience using XML, RDFa, and regular expressions. Attention to detail is a must; an understanding of U.S. federal legislative, spending, and regulatory processes is preferred.

Applicants should send their resume, cover letter, and a short writing sample to:

Jim Harper
Director of Information Policy Studies
Cato Institute
1000 Massachusetts Ave. NW
Washington, DC 20001
Fax (202) 842-3490
Email: jharper@cato.org

Here’s an exclusive insider tip for TechLiberationFront readers. Don’t send your application by fax! That would send the wrong signal…
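For readers wondering what the markup-and-extraction work described above might look like in practice, here is a minimal sketch in Python. The bill-citation pattern and the ex:billCitation property are illustrative assumptions on my part, not Cato’s actual vocabulary or data model:

```python
import re

# Hypothetical pattern for U.S. bill citations such as "H.R. 3261" or "S. 968".
BILL_CITATION = re.compile(r"\b(H\.\s?R\.|S\.)\s?(\d{1,5})\b")

def mark_up_citations(text: str) -> str:
    """Wrap each bill citation in an RDFa-style span.

    The "ex:" prefix stands in for a made-up vocabulary; a real project
    would use an established legislative ontology.
    """
    def wrap(match: re.Match) -> str:
        chamber, number = match.group(1), match.group(2)
        return (f'<span property="ex:billCitation" '
                f'content="{chamber} {number}">{match.group(0)}</span>')
    return BILL_CITATION.sub(wrap, text)

if __name__ == "__main__":
    sample = "The committee reported H.R. 3261 and referred S. 968 to the floor."
    print(mark_up_citations(sample))
```

Real legislative text is far messier than this pattern assumes, which is presumably why the posting pairs automated extraction with human data entry, training, and review.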

Yesterday on TechCrunch, Josh Constine posted an interesting essay about how some in the press were “Selling Digital Fear” on the privacy front. His specific target was The Wall Street Journal, which has been conducting an ongoing investigation of online privacy with a particular focus on online apps. Much of the reporting in its “What They Know” series has been valuable in that it has helped shine a light on data collection practices and privacy concerns that deserve more scrutiny. But as Constine notes, the articles in the series sometimes lack sufficient context, fail to discuss trade-offs, or do not identify any concrete harm or risk to users. In other words, some of it is simple fear-mongering. Constine argues:

Reality has yet to stop media outlets from yelling about privacy, and because the WSJ writers were on assignment, they wrote the “Selling You On Facebook” hit piece despite thin findings. These kind of articles can make mainstream users so worried about the worst-case scenario of what could happen to their data, they don’t see the value they get in exchange for it. “Selling You On Facebook” does bring up the important topic of how apps can utilize personal data granted to them by their users, but it overstates the risks. Yes, the business models of Facebook and the apps on its platform depend on your personal information, but so do the services they provide. That means each user needs to decide what information to grant to who, and Facebook has spent years making the terms of this value exchange as clear as possible.

“While sensationalizing the dangers of online privacy sure drives page views and ad revenue,” Constine also noted, “it also impedes innovation and harms the business of honest software developers.” These trade-offs are important because, to the extent policymakers get more interested in pursuing privacy regulations based on these fears, they could force higher prices or less innovation upon us with very little benefit in exchange.

Of course, the press generating hypothetical fears or greatly inflating dangers is nothing new. We have seen it happen many times in the past, and it can be seen at work in many other fields today (online child safety is a good example). In my recent 80-page paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” I discussed how and why the press and other players inflate threats and sell fear.

Last week, I posted about the conflict between the Koch brothers and the Cato Institute, threatening to make that post the first in a series. Never let it be said that I don’t follow through on my threats, sometimes.

Recapping: I believe the Koch brothers want what’s best for liberty, but the actions of the “Koch side”—an array of actors with differing motivations and strategies—may not be serving that goal. This seems due to miscalculation: the Koch side seems not to recognize how much of the Cato Institute’s value is in its reputational capital, capital which would be despoiled by a Koch takeover. I basically fleshed out an early point of Jonathan Adler’s on the Volokh Conspiracy.

But why is it the Koch side that gets the attention and not the Cato side?

Reason.org has just posted my commentary on five reasons why the Federal Trade Commission’s proposals to regulate the collection and use of consumer information on the Web will do more harm than good.

As I note, the digital economy runs on information, and any regulation that impedes the collection and processing of that information will affect the economy’s efficiency. Given the overall success of the Web and the popularity of search and social media, there’s every reason to believe that consumers have been able to balance their demand for content, entertainment, and information services against the privacy policies those services offer.

But there’s more to it than that. Technology simply doesn’t lend itself to top-down mandates, and notions of privacy are highly subjective. Online, there is an adaptive dynamic constantly at work. Certainly, websites have sometimes pushed the boundaries of privacy, but only when those boundaries are tested do we find out where the consensus lies.

Legislative and regulatory directives pre-empt experimentation. Consumer needs are best addressed when best practices are allowed to bubble up through trial and error. Contrast the economic and functional development of European Web media, which labors under the sweeping top-down European Union Privacy Directive, with the dynamism of the U.S. Web media sector, which has been relatively free of privacy regulation, and the difference is profound.

An analysis of the web advertising market by researchers at the University of Toronto found that after the Privacy Directive was passed, online advertising effectiveness decreased on average by around 65 percent in Europe relative to the rest of the world. The disparity persisted even when the researchers controlled for possible differences in ad responsiveness between Europeans and Americans. The authors conclude that these findings will have a “striking impact” on the $8 billion spent each year on digital advertising: European sites will see far less ad revenue than their counterparts outside Europe.

Other points I explore in the commentary are:

  • How free services go away and paywalls go up
  • How consumers push back when they perceive that their privacy is being violated
  • How Web advertising lives or dies by the willingness of consumers to participate
  • How greater information availability is a social good

The full commentary can be found here.


On the podcast this week, Christina Mulligan, Visiting Fellow at the Information Society Project at Yale Law School, discusses her new paper, co-authored with Tim Lee, entitled Scaling the Patent System. Mulligan begins by describing the policy behind patents: to give inventors temporary exclusive rights so they can benefit monetarily from their inventions. She then explains the thesis of the paper, which argues that the patent system is failing because it does not scale. Mulligan claims that some industries ignore patents when developing new products because it is nearly impossible to discover whether a new product will infringe an existing patent. She then highlights industries where patents are effective, like the pharmaceutical and chemical industries. According to Mulligan, these industries rarely infringe patents because existing patents are “indexable,” meaning they are easy to look up. The discussion concludes with Mulligan offering solutions to the current problem, which include restricting the subject matter of patents to indexable matters.

To keep the conversation around this episode in one place, we’d like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?