Christopher Steiner, author of Automate This: How Algorithms Came to Rule the World, discusses his new book. Steiner originally set out to study the prevalence of algorithms in Wall Street stock trading but soon found they were everywhere. Stock traders were the first to use algorithms as a substitute for human judgment, making trades automatically and therefore much faster. But now algorithms are used to diagnose illnesses, interpret legal documents, analyze foreign policy, and write newspaper articles. Algorithms have even been used to analyze how people form sentences in order to infer their personality and mental state, so that customer service agents can better handle upset customers. Steiner discusses the benefits and risks of algorithmic automation and how it will change the world.

In a [recent post](http://www.forbes.com/sites/timothylee/2012/09/08/the-weird-economics-of-utility-networks/), Tim Lee does a good job of explaining why facilities-based competition in broadband is difficult. He writes,

>As Verizon is discovering with its FiOS project, it’s much harder to turn a profit installing the second local loop; both because fewer than 50 percent of customers are likely to take the service, and because competition pushes down margins. And it’s almost impossible to turn a profit providing a third local loop, because fewer than a third of customers are likely to sign up, and even more competition means even thinner margins.

Tim thus concludes that

>the kind of “facilities-based” competition we’re seeing in Kansas City, in which companies build redundant networks that will sit idle most of the time, is extremely wasteful. In a market where every household has n broadband options (each with its own fiber network), only 1/n local loops will be in use at any given time. The larger n is, the more resources are wasted on redundant infrastructure.

I don’t understand that conclusion. One would imagine that redundant infrastructure gets built only if it is profitable to its builder. Tim is right that we probably should not expect more than a few competitors, but I don’t see how more than one pipe is necessarily wasteful. If laying down a second set of pipes is profitable, shouldn’t we welcome the competition? The real question is whether that second pipe is profitable without a government subsidy.
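
To put the 1/n arithmetic in concrete terms, here is a toy calculation. The household count and per-loop cost below are purely illustrative assumptions on my part, not figures from Kansas City or any actual market.

```python
# Toy model of the 1/n utilization claim. All numbers are illustrative
# assumptions; each competitor is assumed to pass every home, and each
# household is assumed to subscribe to exactly one provider.

HOUSEHOLDS = 100_000        # hypothetical city size
COST_PER_LOOP = 1_000       # hypothetical cost (in dollars) to pass one home

for n in range(1, 5):
    loops_built = n * HOUSEHOLDS       # every competitor wires the whole city
    loops_in_use = HOUSEHOLDS          # one active loop per subscribing home
    utilization = loops_in_use / loops_built
    capital_per_subscriber = loops_built * COST_PER_LOOP / HOUSEHOLDS
    print(f"n={n}: utilization={utilization:.0%}, "
          f"total loop capital per subscriber=${capital_per_subscriber:,.0f}")
```

Utilization falls as 1/n by construction, but whether the idle loops amount to social waste depends on who bears that capital cost and how each builder expects to earn a return on it.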

That brings me to a larger point: I think what Tim is missing is what makes Google Fiber so unique. Tim is assuming that all competitors in broadband will make their profits from the subscription fees they collect from subscribers. As we all know, that’s not [how Google tends to operate](http://elidourado.com/blog/theory-of-google/). Google’s primary business model is advertising, and that is likely [where they expect their return to come](http://community.nasdaq.com/News/2012-08/google-seeking-more-ad-impressions-with-fast-fiber.aspx?storyid=162788) from. One of Google Fiber’s price points is [free](http://www.techdirt.com/blog/innovation/articles/20120726/11200919842/google-fiber-is-official-free-broadband-up-to-5-mbps-pay-symmetrical-1-gbps.shtml), so we might expect greater adoption of the service. That’s disruptive innovation that could sustainably increase competition and bring down prices for consumers, without a government subsidy.

Kansas City sadly gave Google all sorts of subsidies, like free power and rack space for its servers, as [Tim has pointed out](http://arstechnica.com/tech-policy/2012/09/how-kansas-city-taxpayers-support-google-fiber/), but it also cut serious red tape. For example, there is no build-out requirement for Google Fiber, a fact [now bemoaned](http://www.wired.com/business/2012/09/google-fiber-digital-divide/) by digital divide activists. Such requirements, I would argue, are the [true cause](http://news.cnet.com/How-to-squelch-growth-of-the-high-speed-Net/2010-1034_3-6106690.html) of the unused and wasteful overbuilding that Tim laments.

So what matters more? The in-kind subsidies or the freedom to build only where it’s profitable? I think that’s the empirical question we’re really arguing about. It’s not a foregone conclusion of broadband economics that [there can be only one](http://www.youtube.com/watch?v=4AoOa-Fz2kw). And do we want to limit competition in part of a municipality in order to achieve equity for the whole? That’s another question over which “original recipe” and bleeding-heart libertarians may have a difference of opinion.

Psychologists Daniel Simons and Christopher Chabris had an interesting editorial in The Wall Street Journal this weekend asking, “Do Our Gadgets Really Threaten Planes?” They conducted an online survey of 492 American adults who have flown in the past year and found that “40% said they did not turn their phones off completely during takeoff and landing on their most recent flight; more than 7% left their phones on, with the Wi-Fi and cellular communications functions active. And 2% pulled a full Baldwin, actively using their phones when they weren’t supposed to.”

Despite the prevalence of such law-breaking activity, planes aren’t falling from the sky, and yet the Federal Aviation Administration continues to enforce the rule prohibiting the use of digital gadgets during certain phases of flight. “Why has the regulation remained in force for so long despite the lack of solid evidence to support it?” Simons and Chabris ask. They note:

Human minds are notoriously overzealous “cause detectors.” When two events occur close in time, and one plausibly might have caused the other, we tend to assume it did. There is no reason to doubt the anecdotes told by airline personnel about glitches that have occurred on flights when they also have discovered someone illicitly using a device. But when thinking about these anecdotes, we don’t consider that glitches also occur in the absence of illicit gadget use. More important, we don’t consider how often gadgets have been in use when flights have been completed without a hitch. Our survey strongly suggests that there are multiple gadget violators on almost every flight.
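
A toy calculation makes the authors’ base-rate point concrete. The probabilities and flight count below are purely illustrative assumptions on my part, not figures from the survey or from the FAA.

```python
# Illustrative base-rate check: if gadget use is near-universal and unrelated
# to glitches, glitch-plus-gadget anecdotes are still inevitable.
# All numbers are assumptions chosen for illustration only.

p_gadget = 0.90       # assumed share of flights with at least one active gadget
p_glitch = 0.01       # assumed baseline rate of minor avionics glitches
flights = 10_000_000  # assumed number of flights per year

# Under independence, the chance that a glitch flight also has an active
# gadget is just the base rate of gadget use.
print(f"P(gadget in use | glitch) = {p_gadget:.0%}")

# And nearly every gadget flight still lands without incident.
print(f"P(no glitch | gadget in use) = {1 - p_glitch:.0%}")

# Expected yearly coincidences even with zero causal connection:
print(f"Expected glitch-plus-gadget coincidences: {flights * p_glitch * p_gadget:,.0f}")
```

Even with no causal link at all, a near-universal behavior will coincide with almost every glitch, which is exactly the kind of evidence an overzealous cause detector finds persuasive.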

That’s all certainly true, but what actually motivated this ban, and has ensured its continuation despite the lack of evidence that it is needed to diminish technological risk, is the precautionary principle. As the authors correctly note: Continue reading →

The privacy debate has increasingly been shaped by an apparent consensus that de-identifying sets of personally identifying information doesn’t work. In particular, this has led the FTC to abandon the PII/non-PII distinction on the assumption that re-identification is too easy. But a new paper shatters this supposed consensus by rebutting the methodology of Latanya Sweeney’s seminal 1997 study of re-identification risks, which, in turn, has shaped HIPAA’s rules for de-identification of health data and the larger privacy debate ever since.

This new critical paper, “The ‘Re-Identification’ of Governor William Weld’s Medical Information: A Critical Re-Examination of Health Data Identification Risks and Privacy Protections, Then and Now,” was written by Daniel Barth-Jones, an epidemiologist and statistician at Columbia University. After carefully re-examining the methodology of Sweeney’s 1997 study, he concludes that re-identification attempts face “far-reaching systemic challenges” inherent in the statistical methods used to re-identify. In short, re-identification turns out to be harder than it seemed, which means our identities can more easily be obscured in large data sets. This more nuanced story must be understood by privacy law scholars and policymakers if they want to realistically assess the privacy risks currently posed by de-identified data—not just for health data, but for all data.
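
One challenge commonly discussed in this literature, sketched below with invented data (this is only an illustration, not Barth-Jones’s own analysis), is that a record that looks unique in a released sample can still match several people in the full population, so a demographic match is not by itself an identification.

```python
# Hypothetical illustration: sample uniqueness is not population uniqueness.
# The "population register" below is invented for this sketch.
from collections import Counter

# Quasi-identifier tuples: (ZIP code, birth year, sex).
population = [
    ("02138", 1945, "M"), ("02138", 1945, "M"), ("02138", 1945, "M"),
    ("02138", 1946, "F"), ("02139", 1945, "M"), ("02139", 1960, "F"),
]

# A de-identified record that happens to be unique within the released sample.
sample_record = ("02138", 1945, "M")

# How many people in the wider population share those quasi-identifiers?
candidates = Counter(population)[sample_record]
print(f"People sharing the quasi-identifiers: {candidates}")
# With several candidates, linking the record to one named individual requires
# additional information; apparent uniqueness in the sample is not enough.
```

Scaled up to real data sets, estimating how many such candidates exist is exactly the kind of statistical question that determines whether a claimed re-identification is reliable.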

The importance of Barth-Jones’s paper is underscored by the example of Vioxx, which stayed on the market years longer than it should have because of HIPAA’s privacy rules, resulting in an estimated 88,000 to 139,000 unnecessary heart attacks and 27,000 to 55,000 avoidable deaths, as University of Arizona Law Professor Jane Yakowitz Bambauer explained in a recent Huffington Post piece.

Ultimately, overstating the risk of re-identification causes policymakers to strike the wrong balance in the trade-off between privacy and other competing values. As Barth-Jones and Yakowitz have suggested, policymakers should instead focus on setting standards for proper de-identification of data that are grounded in rigorous statistical analysis of re-identification risks. A safe harbor for proper de-identification, combined with legal limitations on re-identification, could protect consumers against real privacy harms while still allowing the free flow of data that drives research and innovation throughout the economy.

Unfortunately, the Barth-Jones paper has not received the attention it deserves. So I encourage you to consider writing about this, or just take a moment to share it with your friends on Twitter or Facebook.

In my last post, I discussed an outstanding new paper from Ronald Cass on “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk.” As I noted, it’s one of the best things I’ve ever read about the relationship between antitrust regulation and the modern information economy. That got me thinking about what other papers on this topic I might recommend to others. So, for what it’s worth, here are the 12 papers that have most influenced my own thinking on the issue. (If you have other suggestions for what belongs on the list, let me know. No reason to keep it limited to just 12.)

  1. J. Gregory Sidak & David J. Teece, “Dynamic Competition in Antitrust Law,” 5 Journal of Competition Law & Economics (2009).
  2. Geoffrey A. Manne & Joshua D. Wright, “Innovation and the Limits of Antitrust,” 6 Journal of Competition Law & Economics (2010): 153.
  3. Joshua D. Wright, “Antitrust, Multi-Dimensional Competition, and Innovation: Do We Have an Antitrust-Relevant Theory of Competition Now?” (August 2009).
  4. Daniel F. Spulber, “Unlocking Technology: Antitrust and Innovation,” 4(4) Journal of Competition Law & Economics (2008): 915.
  5. Ronald Cass, “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk,” 9(2) Journal of Law, Economics and Policy (forthcoming, Spring 2012).
  6. Richard Posner, “Antitrust in the New Economy,” 68 Antitrust Law Journal (2001).
  7. Stan J. Liebowitz & Stephen E. Margolis, “Path Dependence, Lock-in, and History,” 11(1) Journal of Law, Economics and Organization (April 1995): 205-26.
  8. Robert Crandall & Charles Jackson, “Antitrust in High-Tech Industries,” Technology Policy Institute (December 2010).
  9. Bruce Owen, “Antitrust and Vertical Integration in ‘New Economy’ Industries,” Technology Policy Institute (November 2010).
  10. Douglas H. Ginsburg & Joshua D. Wright, “Dynamic Analysis and the Limits of Antitrust Institutions,” 78(1) Antitrust Law Journal (2012): 1-21.
  11. Thomas Hazlett, David Teece & Leonard Waverman, “Walled Garden Rivalry: The Creation of Mobile Network Ecosystems,” George Mason University Law and Economics Research Paper No. 11-50 (November 21, 2011).
  12. David S. Evans, “The Antitrust Economics of Two-Sided Markets.”

Ronald Cass, Dean Emeritus of Boston University School of Law, has penned the best paper on antitrust regulation that you will read this year, especially if you’re interested in the relationship between antitrust and the information technology sectors. His paper is entitled “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk,” and it makes two straightforward points:

  1. Antitrust enforcement has characteristics and risks similar to other forms of regulation.
  2. Antitrust authorities need to exercise special care in making enforcement decisions respecting conduct of individual dominant firms in high-technology industries.

Here are some highlights from the paper that build on those two points. Continue reading →

The New WCITLeaks

Today, Jerry and I are pleased to announce a major update to WCITLeaks.org, our project to bring transparency to the ITU’s World Conference on International Telecommunications (WCIT, pronounced wicket).

If you haven’t been following along, WCIT is an upcoming treaty conference to update the International Telecommunication Regulations (ITRs), which currently govern some parts of the international telephone system, as well as other antiquated communication methods, like telegraphs. There has been a push from some ITU member states to bring some aspects of Internet policy into the ITRs for the first time.

We started WCITLeaks.org to provide a public hosting platform for people with access to secret ITU documents. We think that if ITU member states want to discuss the future of the Internet, they need to do so on an open and transparent basis, not behind closed doors.

Today, we’re taking our critique one step further. Input into the WCIT process has been dominated by member states and private industry. We believe it is important that civil society have its say as well. That is why we are launching a new section of the site devoted to policy analysis and advocacy resources. We want the public to have the very best information from a broad spectrum of civil society, not just whatever information most serves the interests of the ITU, member states, and trade associations.

Continue reading →

Adam Thierer, senior research fellow at the Mercatus Center at George Mason University, discusses recent calls for nationalizing Facebook or at least regulating it as a public utility. Thierer argues that Facebook is not a public good in any formal economic sense, and that nationalizing the social network would be a big step in the wrong direction. He argues that nationalization is neither the only nor the most effective means of addressing the privacy concerns that surround Facebook and other social networks. Nor is Facebook a monopoly, he says, arguing that customers have many other choices. Thierer also points out that regulation is not without its own problems, including the potential that a regulator will be captured by the regulated network, thus making monopoly a self-fulfilling prophecy.

I have always found it strange that the ACLU speaks with two voices when it comes to user empowerment as a response to government regulation of the Internet. That is, when responding to government efforts to regulate the Internet for online safety or speech purposes, the ACLU stresses personal responsibility and user empowerment as the first-order response. But as soon as the conversation switches to online advertising and data collection, the ACLU suggests that people are basically sheep who can’t possibly look out for themselves and that, therefore, increased Internet regulation is essential. They’re not the only ones adopting this paradoxical position. In previous essays I’ve highlighted how both EFF and CDT do the same thing. But let me focus here on the ACLU.

Writing today on the ACLU “Free Future” blog, ACLU senior policy analyst Jay Stanley cites a new paper that he says proves “the absurdity of the position that individuals who desire privacy must attempt to win a technological arms race with the multi-billion dollar internet-advertising industry.” The new study Stanley cites says that “advertisers are making it impossible to avoid online tracking” and that it isn’t paternalistic for government to intervene and regulate if the goal is to enhance user privacy choices. Stanley wholeheartedly agrees. In this and other posts, he and other ACLU analysts have endorsed greater government action to address this perceived threat on the grounds that, in essence, user empowerment cannot work when it comes to online privacy.

Again, this represents a very different position from the one the ACLU has staked out and brilliantly defended over the past 15 years when it comes to user empowerment as the proper and practical response to government regulation of objectionable online speech and pornography. For those not familiar, beginning in the mid-1990s, lawmakers started pursuing a number of new forms of Internet regulation (direct censorship and mandatory age verification were the primary methods of control) aimed at curbing objectionable online speech. In case after case, the ACLU rose up to rightly defend our online liberties against such government encroachment. (I was proud to have worked closely with many former ACLU officials in these battles.) Most notably, the ACLU pushed back against the Communications Decency Act of 1996 (CDA) and the Child Online Protection Act of 1998 (COPA), and they won landmark decisions for us in the process. Continue reading →

Nicolas Christin, Associate Director of the Information Networking Institute at Carnegie Mellon University, discusses the Silk Road anonymous online marketplace. Silk Road is a site where buyers and sellers can exchange goods, much like eBay or Craigslist. The difference is that both buyers and sellers are anonymous, and goods are exchanged for bitcoins rather than traditional currencies. Because of this anonymity, the site has developed a reputation as a popular online portal for buying and selling drugs, which has led some politicians to call for it to be investigated and closed by law enforcement. Despite all of this, the Silk Road remains a very stable marketplace with a strong track record of consumer satisfaction. Christin conducted an extensive empirical study of the site, which he discusses.

