Today is a bit of a banner day for Bitcoin. It was five years ago today that Bitcoin was first described in a paper by Satoshi Nakamoto. And today the New York Times has finally run a profile of the cryptocurrency in its “paper of record” pages. In addition, TIME’s cover story this week is about the “deep web” and how Tor and Bitcoin facilitate it.

The fact is that Bitcoin is inching its way into the mainstream. Indeed, the NYT’s headline is “Bitcoin Pursues the Mainstream,” and this month’s issue of WIRED includes an article titled, “Bitcoin’s Radical Days Are Over. Here’s How to Take It Mainstream.”

The radicals, however, are not taking this sitting down. Also today, Cody Wilson and Unsystem have launched a crowdfunding campaign to build an anonymizing wallet. In their explanatory video, they criticize the Bitcoin Foundation as “helping the United States” regulate Bitcoin, presumably to hasten its mainstream adoption. “Their mission is a performance to both agree with, and maintain an independence from, regulatory power,” Wilson says. “But you can’t have it both ways.”

This is an internecine battle that I’ve observed in the Bitcoin community for years: the cypherpunks who see Bitcoin as an escape hatch from state control versus the entrepreneurs who are more interested in the network’s disruptive (and thus profitable) potential. While it might be a fool’s errand, I’d like to make the case that not only is the work of the two groups not in conflict, but that they actually benefit from each other.

I’ve been following Bitcoin since early 2011, and in April of that year I penned the first (yes) mainstream article about Bitcoin. It ran on TIME.com, and it’s been credited with kicking off the first bubble. Since then my work has focused on regulatory policy around Bitcoin and other cryptocurrencies, especially looking to educate policymakers about the workings and potential benefits of decentralized payment systems. Why am I so interested in this? My reasons are twofold, and they track both the entrepreneurial and cypherpunk ideals, and yet I don’t think I’m bipolar.

Continue reading →

Last week, the House held a hearing about the so-called IP Transition. The IP Transition refers to the telephone industry’s practice of carrying all wire-based consumer services (voice, Internet, and television) over faster, more capable fiber networks rather than over the traditional copper wires, which had fewer capabilities. Most consumers have not and will not notice the change. The completed IP Transition, however, has enormous implications for how the FCC regulates. As one telecom watcher put it, “What’s at stake? Everything in telecom policy.”

For 100 years or so, phone service has had a special place in regulatory law given its importance in connecting the public. Phone service was almost exclusively over copper wires, a service affectionately called “plain old telephone service” (POTS). AT&T became the government-approved national POTS monopolist in 1913 (an arrangement that ended with the AT&T antitrust breakup in the 1980s). The deal was this: AT&T got to be a protected monopolist, while the government got to require that AT&T provide various public benefits. The most significant of these is universal service: AT&T had to serve virtually every US household and charge reasonable rates even to remote (that is, expensive) customers.

To create more phone competitors to the Baby Bells (the phone companies spun off from the AT&T break-up in the 1980s), Congress passed the 1996 Telecom Act, and the FCC put burdens on the Baby Bells to allow new phone companies to lease the Baby Bells’ AT&T-built copper wires at regulated rates. The market, however, changed in ways never envisioned in the 1990s. Today, phone companies face competition not from the new phone companies leasing the old monopoly infrastructure but from entirely different technologies. You can receive voice service from your cable company (“digital voice”), your “phone” company (POTS), your wireless company, and even Internet-based providers like Vonage and Skype. Increasingly, households are leaving POTS behind in favor of voice service from cable or wireless providers. Yet POTS providers like Verizon and AT&T (which also offer wireless service) must abide by monopoly-era regulations that their cable and wireless competitors, such as Comcast and Sprint, escape.

Understanding the significance of the IP Transition requires (unfortunately) knowing a little bit about Title I and Title II of the Communications Act. “Telecommunications services,” the category that covers the phone companies’ copper networks, are heavily regulated by the FCC under Title II. “Information services,” which include Internet service, are lightly regulated under Title I. This division made some sense in the 1990s. It is increasingly under stress now because burdened “telecommunications” companies like AT&T and Verizon are offering “information services” like Internet access via DSL, FiOS, and U-verse. Conversely, lightly regulated “information services” companies like Comcast, Charter, and Time Warner Cable are entering the regulated telephone market while facing few of its regulatory burdens.

Which brings us to the IP Transition. As Title II phone companies replace their copper wires with fiber and deploy broadband networks to compete with cable companies, their customers’ phone service is being carried via IP packets. Functionally, these new networks act like a heavily-regulated Title II service since they carry voice, but they also act like the Title I broadband networks that cable providers built. So should these new fiber networks be burdened like Title II services or deregulated like Title I services? Or is it possible to achieve some middle ground using existing law? Those are the questions before the FCC and policymakers. Billions of dollars of investment will be accelerated or slowed and many firms will live or die depending on how the FCC and Congress act. Stay tuned.

Ryan Radia is one of the few people in the world with whom it is a true pleasure to discuss copyright issues. We see eye to eye on almost everything, but there is enough difference in our perspectives to make things interesting. More importantly, Ryan’s only religious fealty is to logic and the economic way of thinking, which makes for reasoned and respectful conversations. So I am delighted that he took the time to conduct one of his patented Radianalysis™ reviews of the issues raised by PiracyData.org. As is very often the case, I agree from top to bottom with what Ryan has laid out, and it has prompted some thoughts that I’d like to share.

What Ryan is addressing in his piece is the question of whether shortening or eliminating release windows would reduce piracy. He concludes that yes, “Hollywood probably could make a dent in piracy if it put every new movie on iTunes, Vudu, Google Play, Amazon, and Netflix the day of release. Were these lawful options available from the get-go, they’d likely attract some people who would otherwise pirate a hit new film by grabbing a torrent on The Pirate Bay.” That said, Ryan points out quite rightly that “even if Hollywood could better compete with piracy by vastly expanding online options for viewing new release films, this might not be a sound money-making strategy. Each major film studio is owned by a publicly-held corporation that operates for the benefit of its shareholders. In other words, the studios are in the business of earning profits, not maximizing their audiences.” I couldn’t have said it better myself.

One thing that caught me off guard when we launched PiracyData.org (but that in retrospect should not have) is that many people interpreted our attempt to create a dataset as a statement that Hollywood is to blame for its own piracy problem. As I’ve explained, I think it’s dumb to blame Hollywood for piracy, and doing so was not what motivated the project. What motivated the project was Hollywood’s claim that private third parties, such as search engines, have an obligation to do everything in their power to reduce piracy, and that companies like Google are not doing “enough” today.

As Ryan points out, the studios could probably curb piracy by changing their business model, but doing so might very well mean taking a cut in revenue. And as he also points out, the studios are not audience maximizers; they are profit maximizers. This is why they are not about to drastically change their business model anytime soon, which is their prerogative and one I understand. But then the question is, how many resources should we expect taxpayers and private third parties to spend to ensure that the studios can maximize their profits?

Continue reading →

Two weeks ago, with much fanfare, PiracyData.org went live. Created by co-liberators Jerry Brito and Eli Dourado, along with Matt Sherman, the website tracks TorrentFreak’s list of which movies are most pirated each week, and indicates whether and how consumers may legally watch these movies online. The site’s goal, Brito explains, is to “shed light on the relationship between piracy and viewing options.” Tim Lee has more details over on The Switch.
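
To make the site’s approach concrete, here is a minimal sketch of the kind of weekly join it describes: pairing a most-pirated list with known legal viewing options. This is an illustration in Python with invented placeholder names and data (Availability, merge_week, the sample titles); it is not PiracyData.org’s actual code or schema.

```python
from dataclasses import dataclass

# Hypothetical record of a film's legal viewing options; the real site
# distinguishes rental, purchase, and streaming availability.
@dataclass
class Availability:
    rental: bool = False
    purchase: bool = False
    streaming: bool = False

def merge_week(most_pirated, legal_options):
    """Annotate each pirated title (in rank order) with its known legal options."""
    rows = []
    for rank, title in enumerate(most_pirated, start=1):
        opts = legal_options.get(title, Availability())
        rows.append({
            "rank": rank,
            "title": title,
            "rental": opts.rental,
            "purchase": opts.purchase,
            "streaming": opts.streaming,
        })
    return rows

# Illustrative week: one title rentable and purchasable, neither streamable.
week = merge_week(
    ["Movie A", "Movie B"],
    {"Movie A": Availability(rental=True, purchase=True)},
)
for row in week:
    print(row)
```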

Assuming the site’s data are accurate (and they appear to be, despite some launch hiccups), PiracyData.org offers an interesting snapshot of the market for movies on the Internet. To date, the data suggest that a sizeable percentage of the most-pirated movies cannot be purchased, rented, or streamed from any legitimate Internet source. Given that most major movies are legally available online, why do the few films that aren’t online attract so many pirates? And why hasn’t Hollywood responded to rampant piracy by promptly making hit new releases available online?

Is Hollywood leaving money on the table?

To many commentators, PiracyData.org is yet another nail in Hollywood’s coffin. Mike Masnick, writing on Techdirt, argues that “the data continues to be fairly overwhelming that the ‘piracy problem’ is a problem of Hollywood’s own making.” The solution? Hollywood should focus on “making more content more widely available in more convenient ways and prices” instead of “just point[ing] the blame finger,” Masnick concludes. Echoing this sentiment, CCIA’s Ali Sternburg points out on DisCo that “[o]ne of the best options for customers is online streaming, and yet piracydata.org shows that none of the most pirated films are available to be consumed in that format.”

But the argument that Hollywood could reap greater profits and discourage piracy simply by making its content more available has serious flaws. For one thing, as Ryan Chittum argues in the Columbia Journalism Review, “the movies in the top-10 most-pirated list are relatively recent releases.” Thus, he observes, these movies are “in higher demand—including from thieves—than back-catalog films.” If PiracyData.org tracked release dates, each film’s recency of release might well turn out to correlate more closely with piracy than the availability of legitimate viewing options does.

In fairness to Masnick and Sternburg, Hollywood probably could make a dent in piracy if it put every new movie on iTunes, Vudu, Google Play, Amazon, and Netflix the day of release. Were these lawful options available from the get-go, they’d likely attract some people who would otherwise pirate a hit new film by grabbing a torrent on The Pirate Bay. Those who pirate movies may be law-breaking misers, but they still weigh tradeoffs and respond to incentives like any other consumer. Concepts like legality may not matter to pirates, but they still care about price, quality, and convenience. This is why you won’t see a video that’s freely available in high definition on YouTube break a BitTorrent record anytime soon.

Continue reading →

Christopher Wolf, director of the law firm Hogan Lovells’ Privacy and Information Management group, discusses his new book, co-authored with Abraham Foxman, Viral Hate: Containing Its Spread on the Internet. To what extent do hateful or mean-spirited Internet users hide behind anonymity? How do we protect the First Amendment online while addressing the spread of hate speech? Wolf discusses how to define hate speech on the Internet; whether online hate speech leads to real-world violence; how news sites like the Huffington Post and the New York Times have dealt with anonymity; lessons we should impart to the next generation of Internet users to discourage hate speech; and cases where anonymity has proved particularly beneficial or valuable.


Jon Brodkin at Ars Technica and Brian Fung at The Switch have posts featuring a New America Foundation study, The Cost of Connectivity 2013, comparing international broadband prices and speeds. As I told Fung when he asked for my assessment of the study, I was left wondering whether the lower prices in some European and Asian cities arise from more competition in those cities or from unacknowledged tax benefits and consumer subsidies that bring down the price of, say, a local fiber network.

The report raised a few more questions in my mind, however, that I’ll outline here. Continue reading →

In her UN General Assembly speech denouncing NSA surveillance, Brazil’s President Dilma Rousseff said:

Information and communications technologies cannot be the new battlefield between States. Time is ripe to create the conditions to prevent cyberspace from being used as a weapon of war, through espionage, sabotage, and attacks against systems and infrastructure of other countries. … For this reason, Brazil will present proposals for the establishment of a civilian multilateral framework for the governance and use of the Internet and to ensure the protection of data that travels through the web.

We share her outrage at mass surveillance. We share her opposition to the militarization of the Internet. We share her concern for privacy.

But when President Rousseff proposes to solve these problems by means of a “multilateral framework for the governance and use of the Internet,” she reveals a fundamental flaw in her thinking. It is a flaw shared by many in civil society.

You cannot control militaries, espionage and arms races by “governing the Internet.” Cyberspace is one of many aspects of military competition. Unless one eliminates or dramatically diminishes political and military competition among sovereign states, states will continue to spy, break into things, and engage in conflict when it suits their interests. Cyber conflict is no exception.

Rousseff is mixing apples and oranges. If you want to control militaries and espionage, then regulate arms, militaries and espionage – not “the Internet.”

This confusion is potentially dangerous. If the NSA outrages feed into a call for global Internet governance, and this governance focuses on critical Internet resources and the production and use of Internet-enabled services by civil society and the private sector, as it inevitably will, we are certain to get lots of governance of the Internet, and very little governance of espionage, militaries, and cyber arms.

In other words, Dilma’s “civilian multilateral framework for the governance and use of the Internet” is only going to regulate us – the civilian users and private sector producers of Internet products and services. It will not control the NSA, the Chinese People’s Liberation Army, the Russian FSB, or the British GCHQ.

Realism in international relations theory is based on the view that the international system is anarchic. This does not mean that it is chaotic, but simply that the system is composed of independent states and there is no central authority capable of coercing all of them into following rules. The other key tenet of realism is that the primary goal of states in the international system is their own survival.

It follows that the only way one state can compel another state to do anything is through some form of coercion, such as war, a credible threat of war, or economic sanctions. And the only time states agree to cooperate to set and enforce rules is when it is in their self-interest to do so. Thus, when sovereign states come together to agree to regulate things internationally, their priorities will always be to:

  • Preserve or enlarge their own power relative to other states; and
  • Ensure that the regulations are designed to bring under control those aspects of civil society and business that might undermine or threaten their power.

Any other benefits, such as privacy for users or freedom of expression, will be secondary concerns. That’s just the way it is in international relations. Asking states to prevent cyberspace from being used as a weapon of war is like asking foxes to guard henhouses.

That’s one reason why it is so essential that Internet governance conferences be fully open to non-state actors, and that they not be organized around national representation.

Let’s think twice about linking the NSA reaction too strongly to Internet governance. There is some linkage, of course. The NSA revelations should remind us to be realist in our approach to Internet governance. That means recognizing that all states will approach Internet regulation with their own survival and power uppermost on their agendas; it also means that no single state can be trusted as a neutral steward of the global Internet, since each will inevitably use its position to benefit itself. These implications of the Snowden revelations need to be recognized. But let us not confuse NSA regulation with Internet regulation.

The forum has largely been overtaken by discussion of ICANN’s move to organize a new Internet governance coalition. ICANN representatives have had both open- and closed-door meetings to push the proposal, but there are still many questions that have not been adequately answered.

One important question is about the private discussions that have led to this. The I-stars came out at least nominally aligned on this issue, though there is speculation that they are not all totally unified. Over drinks, I mentioned to an ICANN board member that it rubs a lot of people in civil society the wrong way that the I-stars seem to have coordinated on this in private. He replied that I was probably assuming too much about the level of coordination. If that’s the case, then I wonder if we will hear more from the other I-stars about their level of support for ICANN’s machinations.

More basically, we still don’t know much about the Rio non-summit. It will be in Rio, it will be in May, there will be some sort of output document. But we don’t know the agenda, or the agenda-setting process, or even the process for setting an agenda-setting process.

And strategically, we don’t know how the Brazil meeting is going to affect all of the other parts of the take-over-the-Internet industry in the coming year. The CWG-Internet meets next month, and it will take up Brazil’s proposal from the WTPF. But since Brazil is positioning itself as a leader in this new process (and is aligned with ICANN now), what will it try to get at the CWG? WTDC is in March-April. And of course the Plenipot will be in the fall of next year. If the Brazil summit is perceived to have failed in any sense, will that make the battle at the Plenipot even more intense?

Also, whose idea was it to have a gala without alcohol?

How do DC and SF think about the future? Are their visions of how to promote, and adapt to, technological change compatible? Or are America’s policymakers fundamentally in conflict with its innovators? Can technology ultimately trump politics?

In the near term, are traditional left/right divides breaking down? What are the real fault lines in technology policy? Where might a divided Congress reach consensus on tech policy issues like privacy, immigration, copyright, censorship, Internet freedom, and biotech?

For answers and more questions, join moderator Declan McCullagh (Chief Political Correspondent for CNET) and a panel of technology policy experts: Berin Szoka (President, TechFreedom), Larry Downes (author, Laws of Disruption), and Mike McGeary (Co-Founder and Chief Political Strategist, Engine Advocacy). This event will include a complimentary lunch and is co-sponsored by TechFreedom, Reason Foundation, and the Charles Koch Institute.

Continue reading →

Adam, Eli, and I are very happy to announce that Brent Skorup joined us this week as a research fellow at Mercatus. He will focus on telecommunications, radio spectrum, and media issues, which will help round out our existing portfolio of work on privacy, cybersecurity, intellectual property, Internet governance, and innovation policy.

Brent has written Mercatus research papers on federal spectrum policy, cronyism in the technology sector, and antitrust standards in the tech economy. Brent also has a forthcoming paper, co-authored with Thomas Hazlett, on the lessons of LightSquared. His work has appeared in several law reviews as well as in The Hill, US News & World Report, The Washington Post, Bloomberg Businessweek, and the San Francisco Chronicle. He also blogs here at Tech Liberation.

With the ongoing debate at the federal level over how to use radio spectrum efficiently, Brent has proposed establishing a congressional commission to determine spectrum allocation for federal users and put newly available spectrum up for auction. He has also called for having an agency similar to the General Services Administration take ownership of federal spectrum and “rent” it to agencies at fair market value.

Brent previously served as director of operations and research for the Information Economy Project at the George Mason University School of Law, applying law and economics to telecommunications policy. He has a BA in economics from Wheaton College and received his JD at Mason.