April 2014

Andrea Castillo and I have a new paper out from the Mercatus Center entitled “Why the Cybersecurity Framework Will Make Us Less Secure.” We contrast emergent, decentralized, dynamic provision of security with centralized, technocratic cybersecurity plans. Money quote:

The Cybersecurity Framework attempts to promote the outcomes of dynamic cybersecurity provision without the critical incentives, experimentation, and processes that undergird dynamism. The framework would replace this creative process with one rigid incentive toward compliance with recommended federal standards. The Cybersecurity Framework primarily seeks to establish defined roles through the Framework Profiles and assign them to specific groups. This is the wrong approach. Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts. What’s more, an assessment of DHS critical infrastructure categorizations by the Government Accountability Office (GAO) finds that the DHS itself has failed to adequately communicate its internal categories with other government bodies. Adding to the confusion is the proliferating amalgam of committees, agencies, and councils that are necessarily invited to the table as the number of “critical” infrastructures increases. By blindly beating the drums of cyber war and allowing unfocused anxieties to clumsily force a rigid structure onto a complex system, policymakers lose sight of the “far broader range of potentially dangerous occurrences involving cyber-means and targets, including failure due to human error, technical problems, and market failure apart from malicious attacks.” When most infrastructures are considered “critical,” then none of them really are.

We argue that instead of adopting a technocratic approach, the government should take steps to improve the existing emergent security apparatus. This means declassifying information about potential vulnerabilities and kickstarting the cybersecurity insurance market by buying insurance for federal agencies, which experienced 22,000 breaches in 2012. Read the whole thing, as they say.

[The following essay is a guest post from Dan Rothschild, director of state projects and a senior fellow with the R Street Institute.]

As anyone who’s lived in a major coastal American city knows, apartment renting is about as far from an unregulated free market as you can get. Legal and regulatory stipulations govern rents and rent increases, what can and cannot be included in a lease, even what constitutes a bedroom. And while the costs and benefits of most housing policies can be debated and deliberated, it’s generally well known that housing rentals are subject to extensive regulation.

But some San Francisco tenants have recently learned that, in addition to their civil responsibilities under the law, their failure to live up to some parts of the city’s housing code may trigger harsh criminal penalties as well. To wit: tenants who have been subletting part or all of their apartments on a short-term basis, usually through websites like Airbnb, are being given 72 hours to vacate their (often rent-controlled) homes.

San Francisco’s housing stock is one of the most highly regulated in the country. The city uses a number of tools to preserve affordable housing and control rents, while at the same time largely prohibiting taller buildings that would bring more units online, increasing supply and lowering prices. California’s Ellis Act provides virtually the only legal and effective means of getting tenants (especially those benefiting from rent control) out of their units, but it creates a perverse incentive for landlords to demolish otherwise usable housing stock.

Again, the efficiency and equity ramifications of these policies can be discussed; the fact that demand curves slope downward, however, is really not up for debate.

Under San Francisco’s municipal code, renting an apartment on a short-term basis may be a crime punishable by jail time. More importantly, this gives landlords the excuse they need to evict tenants they otherwise couldn’t under the city’s and state’s rigorous tenant protection laws. After all, they’re criminals!

Here’s the relevant section of the code:

Any owner who rents an apartment unit for tourist or transient use as defined in this Chapter shall be guilty of a misdemeanor. Any person convicted of a misdemeanor hereunder shall be punishable by a fine of not more than $1,000 or by imprisonment in the County Jail for a period of not more than six months, or by both. Each apartment unit rented for tourist or transient use shall constitute a separate offense.

Herein lies the rub. There are certainly legitimate reasons to prohibit the short-term rental of a unit in an apartment or condo building: some people want to know who their neighbors are, and a rotating cast of people coming and going could be a nuisance.

But that’s a matter for contracts and condo by-laws to sort out. If people value living in units that they can list on Airbnb or sublet to tourists when they’re on vacation, that’s a feature like a gas stove or walk-in closet that can come part-and-parcel of the rental through contractual stipulation. Similarly, if people want to live in a building where overnight guests are verboten, that’s something landlords or condo boards can adjudicate. The Coase Theorem can be a powerful tool, if the law will allow it.

So far as I can tell, there’s no prohibition under San Francisco’s code on having friends or family stay a night, or even a week, so it seems that the underlying issue isn’t a legitimate concern about other tenants’ rights but an aversion to commerce. From the perspective of my neighbor, there’s no difference between letting my friend from college crash in my spare bedroom for a week and allowing someone I’ve never laid eyes on before to do the same in exchange for cash.

The peer production economy is still in its infancy, and there’s a lot that needs to be worked out. Laws like San Francisco’s that circumvent the discovery process of markets prevent landlords, tenants, condo boards, homeowners, and regulators from learning from experience and experimentation, and they lock in a mediocre system that threatens to put people in jail for renting out a room.

I’m thrilled to make available today a discussion draft of a new paper I’ve written with Houman Shadab and Andrea Castillo looking at what will likely be the next wave of Bitcoin regulation, which we think will be aimed at financial instruments, including securities and derivatives, as well as prediction markets and even gambling. You can grab the draft paper from SSRN, and we very much hope you will give us your feedback and help us correct any errors. This is a complicated issue area and we welcome all the help we can get.

While there are many easily regulated intermediaries when it comes to traditional securities and derivatives, emerging bitcoin-denominated instruments rely much less on traditional intermediaries. Additionally, the block chain technology that Bitcoin introduced makes completely decentralized markets and exchanges possible, thus eliminating the need for intermediaries in complex financial transactions. In the article we survey the types of financial instruments and transactions that will most likely be of interest to regulators, including traditional securities and derivatives, new bitcoin-denominated instruments, and completely decentralized markets and exchanges.

We find that bitcoin derivatives would likely not be subject to the full scope of regulation under the Commodity Exchange Act because such derivatives would likely involve physical delivery (as opposed to cash settlement) and would not be capable of being centrally cleared. We also find that some laws, including those aimed at online gambling, do not contemplate a payment method like Bitcoin, thus placing many transactions in a legal gray area.

Following the approach to Bitcoin taken by FinCEN, we conclude that other financial regulators should consider exempting or excluding certain financial transactions denominated in Bitcoin from the full scope of their regulations, much as private securities offerings and forward contracts are treated. We also suggest that to the extent that regulation and enforcement become more costly than they are beneficial, policymakers should consider and pursue strategies consistent with that new reality, such as efforts to encourage resilience and adaptation.

I look forward to your comments!

It was my great pleasure to join Jasmine McNealy last week on the “New Books in Technology” podcast to discuss my new book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. (A description of my book can be found here.)

My conversation with Jasmine was wide-ranging and lasted 47 minutes. The entire show can be heard here if you’re interested.

By the way, if you don’t follow Jasmine, you should begin doing so immediately. She’s on Twitter and here’s her page at the University of Kentucky School of Library and Information Science.  She’s doing some terrifically interesting work. For example, check out her excellent essay on “Online Privacy & The Right To Be Forgotten,” which I commented on here.

Recent reports highlight that the telephone meta-data collection efforts of the National Security Agency are being undermined by the proliferation of flat-rate, unlimited voice calling plans.  The agency is collecting data for less than a third of domestic voice traffic, according to one estimate.

It’s been clear for the past couple of months that officials want to fix this, and President Obama’s plan for leaving meta-data in the hands of telecom companies—for the NSA to access with a court order—might provide a back door opportunity to expand collection to include all calling data.  There was a potential new twist last week, when Reuters seemed to imply that carriers could be forced to collect data for all voice traffic pursuant to a reinterpretation of the current rule.

While the Federal Communications Commission requires phone companies to retain for 18 months records on “toll” or long-distance calls, the rule’s application is vague (emphasis added) for subscribers of unlimited phone plans because they do not get billed for individual calls.

The current FCC rule (47 C.F.R. § 42.6) requires carriers to retain billing information for “toll telephone service,” but the FCC doesn’t define this familiar term.  There is a statutory definition, but you have to go to the Internal Revenue Code to find it.  According to 26 U.S.C. § 4252(b),

the term “toll telephone service” means—

(1) a telephonic quality communication for which

(A) there is a toll charge which varies in amount with the distance and elapsed transmission time of each individual communication… Continue reading →

What follows is a response to Michael Sacasas, who recently posted an interesting short essay on his blog The Frailest Thing, entitled, “10 Points of Unsolicited Advice for Tech Writers.” As with everything Michael writes, it is very much worth reading and offers a great deal of useful advice about how to be a more thoughtful tech writer. Even though I occasionally find myself disagreeing with Michael’s perspectives, I always learn a great deal from his writing and appreciate the tone and approach he uses in all his work. Anyway, you’ll need to bounce over to his site and read his essay first before my response will make sense.

______________________________

Michael:

Lots of good advice here. I think tech scholars and pundits of all dispositions would be wise to follow your recommendations. But let me offer some friendly pushback on points #2 & #10, because I spend much of my time thinking and writing about those very things.

In those two recommendations you say that those who write about technology “[should] not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today.” And you also warn “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.”

I think these two recommendations are born of a certain frustration with the tenor of much modern technology writing: the sort of Pollyanna-ish writing that too casually dismisses legitimate concerns about technological disruption and usually ends with the insulting phrase, “just get over it.” Such writing and punditry is rarely helpful, and you and others have rightly pointed out the deficiencies in that approach.

That being said, I believe it would be highly unfortunate to dismiss any inquiry into the nature of individual and societal acclimation to technological change. Because adaptation obviously does happen! Certainly there must be much we can learn from it. In particular, what I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” well-established personal, social, cultural, and legal norms. Continue reading →

On Monday, TechFreedom submitted comments urging the White House to apply economic thinking to its inquiry into “Big Data,” and pointing out that the worst abuses of data come not from the private sector but from government. The comments were in response to a request by the Office of Science and Technology Policy.

“On the benefits of Big Data, we urge OSTP to keep in mind two cautions. First, Big Data is merely another trend in an ongoing process of disruptive innovation that has characterized the Digital Revolution. Second, cost-benefit analyses generally, and especially in advance of evolving technologies, tend to operate in aggregates which can be useful for providing directional indications of future trade-offs, but should not be mistaken for anything more than that,” writes TF President Berin Szoka.

The comments also highlight the often-overlooked reality that data, big or small, is speech. Therefore, OSTP’s inquiry must include a First Amendment analysis. Historically, policymakers have ignored the First Amendment in regulating new technologies, from film to blogs to video games, but in 2011 the Supreme Court made clear in Sorrell v. IMS Health that data is a form of speech. Any regulation of Big Data should carefully define the government’s interest, narrowly tailor regulations to real problems, and look for less restrictive alternatives to regulation, such as user empowerment, transparency, and education. Ultimately, academic debates over how to regulate Big Data are less important than how the Federal Trade Commission currently enforces existing consumer protection laws, a subject that is the focus of the ongoing FTC: Technology & Reform Project led by TechFreedom and the International Center for Law & Economics.

More important than the private sector’s use of Big Data is the government’s abuse of it, the group says, referring to the NSA’s mass surveillance programs and the Administration’s opposition to requiring warrants for searches of Americans’ emails and cloud data. Last December, TechFreedom and its allies garnered over 100,000 signatures on a WhiteHouse.gov petition for ECPA reform. While the Administration has found time to reply to frivolous petitions, such as asking for the construction of a Death Star, it has ignored this serious issue for over three months. Worse, the administration has done nothing to help promote ECPA reform and, instead, appears to be actively orchestrating opposition to it from theoretically independent regulatory agencies, which has stalled reform in the Senate.

“This stubborn opposition to sensible, bi-partisan privacy reform is outrageous and shameful, a hypocrisy outweighed only by the Administration’s defense of its blanket surveillance of ordinary Americans,” said Szoka. “It’s time for the Administration to stop dodging responsibility or trying to divert attention from the government-created problems by pointing its finger at the private sector, by demonizing private companies’ collection and use of data while the government continues to flaunt the Fourth Amendment.”

Szoka is available for comment at media@techfreedom.org. Read the full comments and see TechFreedom’s other work on ECPA reform.

Today on Capitol Hill, the House Energy and Commerce Committee is holding a hearing on the NTIA’s recent announcement that it will relinquish its small but important administrative role in the Internet’s domain name system. The announcement has alarmed some policymakers with a well-placed concern for the future of Internet freedom; hence the hearing. Tomorrow, I will be on a panel at ITIF discussing the IANA oversight transition, which promises to be a great discussion.

My general view is that if well executed, the transition of the DNS from government oversight to purely private control could actually help secure a measure of Internet freedom for another generation—but the transition is not without its potential pitfalls. Continue reading →

This blog post was written in cooperation with Michael James Horney, a George Mason University master’s student, and is based upon our upcoming paper on broadband innovation, investment, and competition.

Ezra Klein’s interview with Susan Crawford paints a glowing picture of publicly provided broadband, particularly fiber to the home (FTTH), but the interview misses a number of important points.

The international broadband comparisons provided were selective and unstandardized. The US is much bigger and more expensive to cover than many small, densely populated countries. South Korea is the size of Minnesota but has nine times the population, so essentially the same amount of network can be deployed and used by nine times as many people, making the business case for fiber more cost-effective. However, South Korea has limited economic growth to show for its fiber investment. A recent Korean government report complained of “jobless growth,” and the country still earns the bulk of its revenue from industries that predate the broadband era.

It is more realistic and accurate to compare the US to the European Union, which has a comparable population and geographic area. Data from America’s National Broadband Map and the EU Digital Agenda Scoreboard show that the US exceeds the EU on many important broadband measures, including deployment of fiber to the home (FTTH), where the US rate is twice the EU’s. Even where fiber networks are available in the EU, the overall adoption rate is just 2%. The EU itself, as part of its Digital Single Market initiative, has recognized that its approach to broadband has not worked and is now looking to the American model.

The assertion that Americans are “stuck” with cable as the only provider of broadband is false. It is more correct to say that Europeans are “stuck” with DSL, as 74% of all EU broadband connections are delivered over copper networks. DSL and cable together account for 70% of America’s broadband connections, with the remaining (and growing) 30% comprising FTTH, wireless, and other broadband solutions. In fact, the US buys and lays more fiber than all of the EU combined.

The reality is that Europeans are “stuck” with a tortured regulatory approach to broadband, which disincentivizes investment in next-generation networks. As data from Infonetics show, a decade ago the EU accounted for one-third of the world’s investment in broadband; that share has plummeted to less than one-fifth today. Meanwhile, American broadband providers invest at twice the rate of their European counterparts and account for a quarter of the world’s outlay on communication networks. Americans are just 4% of the world’s population but enjoy one quarter of its broadband investment.

The following table illustrates the intermodal competition between different types of broadband networks (cable, fiber, DSL, mobile, satellite, wifi) in the US and EU.

Availability of broadband with a download speed of 100 Mbps or higher: US 57%*, EU 30%
Availability of cable broadband: US 88%, EU 42%
Availability of LTE: US 94%**, EU 26%
Availability of FTTH: US 25%, EU 12%
Percent of population that subscribes to broadband by DSL: US 34%, EU 74%
Percent of households that subscribe to broadband by cable: US 36%***, EU 17%

The interview offered some cherry-picked examples, particularly Stockholm as the FTTH utopia. The story behind this city is more complex and costly than presented. Some $800 million has been invested in FTTH in Stockholm to date, with an additional $38 million each year. Subscribers pay for the fiber broadband through a combination of monthly access fees and increases to municipal fees assessed on homes and apartments. Acreo, a state-owned consulting company charged with assessing Sweden’s fiber project, concludes that the FTTH project shows at best a “weak but statistically significant correlation between fiber and employment” and that “it is difficult to estimate the value of FTTH for end users in dollars and some of the effects may show up later.”

Next door, Denmark took a different approach. In 2005, 14 utility companies in Denmark invested $2 billion in FTTH. With advanced cable and fiber networks, 70% of Denmark’s households and businesses have access to ultra-fast broadband, but less than 1 percent subscribe to the 100 Mbps service. The utility companies have just 250,000 broadband customers combined, and most customers subscribe to tiers below 100 Mbps because those tiers satisfy their needs and budgets. Indeed, 80% of broadband subscriptions in Denmark are below 30 Mbps: about 20 percent of homes and businesses subscribe at 30 Mbps, while more than two-thirds subscribe at 10 Mbps.

Meanwhile, LTE mobile networks have been rolled out, and already 7 percent of Danes (350,000) use 3G/4G as their primary broadband connection, surpassing FTTH customers by 100,000. This is particularly important because in many sectors of the Danish economy, including banking, health, and government, users can access services only digitally. Those services are fully functional on mobile devices at mobile speeds. The interview claims that wireless will never be a substitute for fiber, but millions of people around the world are proving that wrong every day.

The price comparisons offered between the US and selected European countries also leave out compulsory media license fees (to cover state broadcasting) and taxes that can add some $80 per month to the cost of a broadband subscription. When these fees are added up, broadband is not so cheap in Sweden and other European countries. Indeed, the US frequently comes out less expensive.

The US broadband approach has a number of advantages. Private providers bear the risks, not taxpayers. Consumers dictate the broadband they want, not the government. Prices are also scalable and transparent; the price reflects the real cost. Furthermore, as the OECD and the ITU have recognized, entry-level costs for broadband in the US are some of the lowest in the world. The ITU recommends that people pay no more than 5% of their income for broadband; in most developed countries, including the US, the highest tier of broadband falls within 2-3%. It is only fair to pay more for better quality. If your needs are just email and web browsing, then basic broadband will do. But if you want high-definition Netflix, you should pay more. There is no reason why your neighbor should subsidize your entertainment choices.

The interview asserted that government investment in FTTH is needed to increase competitiveness, but no evidence was given. It’s not just a broadband network that creates economic growth; broadband is only one input in a complex economic equation. To put things into perspective, consider that the US has transformed its economy through broadband in the last two decades. The internet portion of America’s economy alone is larger than the entire GDP of Sweden.

The assertion that the US is #26 in broadband speed is simply wrong. This is an outdated statistic from 2009 used in Crawford’s book. The Akamai report it references is released quarterly, so there was no reason not to include a more recent figure in time for the book’s December 2012 publication. Today the US ranks #8 in the world on the same measure. Clearly the US is not falling behind if its ranking on average measured speed has steadily climbed from #26 to #8. In any case, according to Akamai, many US cities and states have some of the fastest download speeds in the world and would rank in the global top ten.

There is no doubt that fiber is an important technology and the foundation of all modern broadband networks, but the economic question is to what extent fiber should be brought to every household, given the cost of deployment (many thousands of dollars per household), the low level of adoption (it is difficult to get a critical mass of a community to subscribe given diverse needs), and the fact that other broadband technologies continue to improve in speed and price.

The interview didn’t mention the many failed federal and municipal broadband projects. Chattanooga is just one example of a federally funded fiber project costing hundreds of millions of dollars with too few users. Municipal projects that have failed to meet expectations include those in Chicago; Burlington, VT; Monticello, MN; Oregon’s MINET; and Utah’s UTOPIA.

Before deploying costly FTTH networks, the feasibility of improving existing DSL and cable networks, as well as deploying wireless broadband, should be considered. A case in point is Canada. The OECD reports that Canada and South Korea have essentially the same advertised speeds, 68.33 and 66.83 Mbps respectively. Canada’s fixed broadband subscriptions are shared almost equally between DSL and cable, with very little FTTH. This shows that fast speeds are possible on different kinds of networks.

The future demands a multitude of broadband technologies. There is no one technology that is right for everyone. Consumers should have the ability to choose based upon their needs and budget, not be saddled with yet more taxes from misguided politicians and policymakers.

Consider that mobile broadband is growing at four times the rate of fixed broadband, according to the OECD, and there are some 300 million mobile broadband subscriptions in the US, three times as many as fixed broadband subscriptions. In Africa, mobile broadband is growing at 50 times the rate of fixed broadband. Many Americans have selected mobile as their only broadband connection and love its speed and flexibility. Vectoring on copper wires enables speeds of 100 Mbps. Cable DOCSIS 3.0 enables speeds of 300 Mbps, and cable companies are deploying neighborhood wifi solutions. With all this innovation and competition, it is mindless to create a new government monopoly. We should let the golden age of broadband flourish.


Source for US and EU Broadband Comparisons: US data from National Broadband Map, “Access to Broadband Technology by Speed,” Broadband Statistics Report, July 2013, http://www.broadbandmap.gov/download/Technology%20by%20Speed.pdf and http://www.broadbandmap.gov/summarize/nationwide. EU data from European Commission, “Chapter 2: Broadband Markets,” Digital Agenda Scoreboard 2013 (working document, December 6, 2013), http://ec.europa.eu/digital-agenda/sites/digital-agenda/files/DAE%20SCOREBOARD%202013%20-%202-BROADBAND%20MARKETS%20_0.pdf.

*The National Cable & Telecommunications Association suggests speeds of 100 Mbps are available to 85% of Americans.  See “America’s Internet Leadership,” 2013, www.ncta.com/positions/americas-internet-leadership.

**Verizon’s most recent report notes that it reaches 97 percent of America’s population with 4G/LTE networks. See Verizon, News Center: LTE Information Center, “Overview,” www.verizonwireless.com/news/LTE/Overview.html.

***This figure is based on 49,310,131 cable subscribers at the end of 2013, noted by Leichtman Research http://www.leichtmanresearch.com/press/031714release.html compared to 138,505,691 households noted by the National Broadband Map.
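For readers who want to check the arithmetic, here is a quick back-of-the-envelope calculation, a minimal sketch using only the two counts cited in this footnote, that reproduces the roughly 36 percent figure shown in the table above:

```python
# Back-of-the-envelope check of the ~36% US cable-subscription figure,
# using the counts cited in this footnote.
cable_subscribers = 49_310_131   # Leichtman Research, end of 2013
us_households = 138_505_691      # National Broadband Map

share = cable_subscribers / us_households
print(f"{share:.1%}")  # prints 35.6%, which rounds to the 36% used in the table
```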

Later today I’ll be testifying at a hearing before the House Small Business Committee titled “Bitcoin: Examining the Benefits and Risks for Small Business.” It will be live streamed starting at 1 p.m. My testimony will be available on the Mercatus website at that time, but below is some of my work on Bitcoin in case you’re new to the issue.

Also, tonight I’ll be speaking at a great event hosted by the DC FinTech meetup on “Bitcoin & the Internet of Money.” I’ll be joined by Bitcoin core developer Jeff Garzik and we’ll be interviewed on stage by Joe Weisenthal of Business Insider. It’s open to the public, but you have to RSVP.

Finally, stay tuned because in the next couple of days my colleagues Houman Shadab, Andrea Castillo, and I will be posting a draft of our new law review article looking at Bitcoin derivatives, prediction markets, and gambling. Bitcoin is the most fascinating issue I’ve ever worked on.

Here’s Some Bitcoin Reading…

And here’s my interview with Reihan Salam discussing Bitcoin…