Those of us with deep reservations about the push for ever more unlicensed spectrum are having many of our fears realized with the new resistance to novel technologies using unlicensed spectrum. By law unlicensed spectrum users have no rights to their spectrum; unlicensed spectrum is a managed commons. In practice, however, existing users frequently act as if they own their spectrum and they can exclude others. By entertaining these complaints, the FCC simply encourages NIMBYism in unlicensed spectrum.

The general idea behind unlicensed spectrum is that by providing a free spectrum commons to any device maker who complies with certain simple rules (namely, Part 15’s low power operation requirement), device makers will develop wireless services that would never have developed if the device makers had to shell out millions for licensed spectrum. For decades, unlicensed spectrum has stimulated development and sale of millions of consumer devices, including cordless phones, Bluetooth devices, wifi access points, RC cars, and microwave ovens.

Now, however, many device makers are getting nervous about new entrants. For instance, Globalstar is developing TLPS, a technology based on wifi standards that will use some unlicensed spectrum at 2.4 GHz, and mobile carriers would like to market LTE-U, an unlicensed-spectrum technology based on 4G LTE standards that will use spectrum at 5 GHz.

This resistance from various groups and spectrum incumbents, who fear interference in “their” spectrum if these new technologies catch on, was foreseeable, which makes these intractable conflicts even more regrettable. As Prof. Tom Hazlett wrote in a 2001 essay, long before today’s conflicts, when it comes to unlicensed devices, “economic success spells its own demise.” Hazlett noted, “Where an unlicensed firm successfully innovates, open access guarantees imitation. This not only results in competition…but may degrade wireless emissions — perhaps severely.”

On the other hand, the many technical filings about potential interference to existing unlicensed devices are red herrings. Prospective device makers in these unlicensed bands have no duty to protect existing users. Part 15 rules say that unlicensed users like wifi and Bluetooth “shall not be deemed to have any vested or recognizable right to continued use of any given frequency by virtue of prior registration or certification of equipment” and that “interference must be accepted.” These rules, however, put the FCC in a self-created double bind: the agency provides no interference protection to existing users but its open access policy makes interference conflicts likely.

One of my favorite themes, and not just in the field of tech policy, is the “Unintended Consequences of Well-Intentioned Regulations.” I believe that all laws and regulations have dynamic effects, and that to fully appreciate the true impact of any particular public policy, you must always closely investigate its potential opportunity costs and unintended consequences. All too often, laws and regulations are hastily put on the books with the very best of intentions in mind, only later to be shown to produce the opposite of what was intended.

Today’s case in point comes from a Wall Street Journal article by Rachel Bachman, and it involves how the growing wave of cycling-helmet laws is having a net negative impact on public health because those laws discourage ridership in the aggregate. Would-be riders then either (a) become less active overall or (b) drive their cars to get where they need to go. And both of those outcomes are, ultimately, riskier than cycling without a helmet. For that reason, Bachman reports, cycling advocates “are pushing back against mandatory bike-helmet laws in the U.S. and elsewhere. They say mandatory helmet laws, particularly for adults, make cycling less convenient and seem less safe, thus hindering the larger public-health gains of more people riding bikes.” Supporting evidence comes from this 2012 paper in the journal Risk Analysis by Piet de Jong, a professor in the department of applied finance and actuarial studies at Sydney’s Macquarie University. His paper included an empirical model showing that mandatory bike-helmet laws “have a net negative health impact.”
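De Jong’s result is essentially a trade-off calculation: the head-injury harm averted among those who keep riding versus the exercise benefit forgone by those the law deters. A minimal back-of-the-envelope sketch of that logic (all parameter values below are invented for illustration; they are not de Jong’s actual estimates):

```python
def net_health_impact(riders, ridership_drop, exercise_benefit,
                      head_injury_risk, helmet_effectiveness, injury_cost):
    """Net population health change from a mandatory helmet law,
    in arbitrary 'health units' per year (toy model)."""
    remaining = riders * (1 - ridership_drop)
    # Gain: helmets reduce head-injury harm for those who keep riding.
    injury_gain = remaining * head_injury_risk * helmet_effectiveness * injury_cost
    # Loss: deterred riders forgo the health benefit of the exercise.
    exercise_loss = riders * ridership_drop * exercise_benefit
    return injury_gain - exercise_loss

# Hypothetical numbers: 1M riders, 20% deterred, modest injury risk.
print(net_health_impact(1_000_000, 0.2, 1.0, 0.0001, 0.5, 100))
```

With these (made-up) inputs the exercise loss swamps the injury gain, so the net impact comes out negative; set the ridership drop to zero and the same formula turns positive, which is the whole point of the dynamic analysis.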

This strikes me as one of the very best examples of how to do dynamic benefit-cost analysis and show the full range of societal impacts associated with well-intentioned regulations. And it reminds me of the playground example I use in several of my papers: laws and liability threats discouraged tall playground climbing structures in the ’80s and ’90s.

This Wednesday, TechFreedom joined Niskanen Center and a coalition of free market groups in urging the White House to endorse the use of strong encryption and disavow efforts to intentionally weaken encryption, whether by installing “back doors,” “front doors,” or any security vulnerabilities into encryption products.

The coalition letter concludes:

We urge your Administration to consider the full ramifications of weakening or limiting encryption. There is no such thing as a backdoor that only the US government can access: any attempt to weaken encryption means making users more vulnerable to malicious hackers, identity thieves, and repressive governments. America must stand for the right to encryption — it is nothing less than the Second Amendment for the Internet.

“The White House’s silence on encryption is deafening,” said Tom Struble, Policy Counsel at TechFreedom. “The President’s failure thus far to endorse strong encryption has given ammunition to European regulators seeking to restrict cross-border data flows and require that data on EU citizens be stored in their own countries. Just yesterday, the European Court of Justice struck down a longstanding agreement that made it easier for Europeans to access American Internet services. If the White House continues to dawdle, it will only further embolden ‘digital protectionism’ across the pond.”

The letter’s signatories include: Niskanen Center, TechFreedom, FreedomWorks, R Street Institute, Students For Liberty, Citizen Outreach, Downsize DC, Institute for Policy Innovation, Less Government, Center for Financial Privacy and Human Rights, and American Commitment.

The last several months have been a busy time for tech policy. Major policies have been enacted, particularly in the areas of surveillance and Internet regulation. While we haven’t checked in here on TLF in some time, TechFreedom has been consistently fighting for the policies that make innovation possible.

  1. Internet Independence: On July 4th, we launched the Declaration of Internet Independence, a grassroots petition campaign calling on Congress to restore the light-touch approach to Internet regulation that resulted in twenty years of growth and prosperity.
  2. Internet Regulation: This February the FCC issued its Open Internet Order, reclassifying broadband as a communications service under Title II of the 1934 Communications Act, despite opposition from many in the tech sector, including supporters of our “Don’t Break the Net” campaign. In response, we’ve joined CARI.net and several leading internet entrepreneurs in litigation against the FCC to ask the Court to strike down the Order.
  3. Surveillance: Section 215 of the PATRIOT Act, which authorized bulk collection of phone records, sunset this May, giving privacy advocates the opportunity to enact meaningful surveillance reform. TechFreedom voiced support for such reforms, including the USA FREEDOM Act, which will end all bulk collection of Americans’ telephone records under any authority.
  4. Broadband Deployment: Making fast, affordable Internet available to everyone is a goal that we all share. We’ve been urging government at all levels to make it easier for private companies to do just that through policies like Dig Once conduits, while cautioning that government-run broadband should only be a last resort.
  5. FTC Reform: The FTC is in dire need of reform. We’ve recommended changes to ensure that the agency fulfills its duty to protect consumers from real harm without a regulatory blank check, which stifles innovation and competition. While progress has been made, there’s still a long way to go. The agency can start by helping to unshackle the sharing economy from legacy regulations.

The big news out of Europe today is that the European Court of Justice (ECJ) has invalidated the 15-year-old EU-US safe harbor agreement, which facilitated data transfers between the EU and US. American tech companies have relied on the safe harbor to do business in the European Union, which has more onerous data-handling regulations than the US. [PDF summary of decision here.] Below I offer some quick thoughts about the decision and some of its potential unintended consequences.

#1) Another blow to new entry / competition in the EU: While some pundits are claiming this is a huge blow to big US tech firms, in reality, the irony of the ruling is that it will bolster the market power of the biggest US tech firms, because they are the only ones that will be able to afford the formidable compliance costs associated with the resulting regulatory regime. In fact, with each EU privacy decision, Google, Facebook, and other big US tech firms just get more dominant. Small firms simply can’t comply with the EU’s expanding regulatory thicket. “It will involve lots of contracts between lots of parties and it’s going to be a bit of a nightmare administratively,” said Nicola Fulford, head of data protection at the UK law firm Kemp Little, commenting on the ruling to the BBC. “It’s not that we’re going to be negotiating them individually, as the legal terms are mostly fixed, but it does mean a lot more paperwork and they have legal implications.” By driving up regulatory compliance costs and causing constant delays in how online business is conducted, the ruling will (again, on top of all the others) greatly limit entry and innovation by new, smaller players in the digital world. In essence, EU data regulations have already wiped out much of the digital competition in Europe, and this ruling finishes off any global new entrants who might have hoped to break in and offer competitive alternatives. These are the sorts of stories never told in antitrust circles: costly government rulings often solidify and extend the market dominance of existing companies. Dynamic effects matter. That is certainly going to be the case here.

Smart Device Paranoia

October 5, 2015

The idea that the world needs further dumbing down was really the last thing on my mind. Yet this is exactly what Jay Stanley argues for in a recent post on Free Future, the ACLU tech blog.

Specifically, Stanley is concerned by the proliferation of “smart devices,” from smart homes to smart watches, and the enigmatic algorithms that power them. Exhibit A: The Volkswagen “smart control devices” designed to deliberately mis-measure diesel emissions. Far from an isolated case, Stanley extrapolates the Volkswagen scandal into a parable about the dangers of smart devices more generally, and calls for the recognition of “the virtue of dumbness”:

When we flip a coin, its dumbness is crucial. It doesn’t know that the visiting team is the massive underdog, that the captain’s sister just died of cancer, and that the coach is at risk of losing his job. It’s the coin’s very dumbness that makes everyone turn to it as a decider. … But imagine the referee has replaced it with a computer programmed to perform a virtual coin flip. There’s a reason we recoil at that idea. If we were ever to trust a computer with such a task, it would only be after a thorough examination of the computer’s code, mainly to find out whether the computer’s decision is based on “knowledge” of some kind, or whether it is blind as it should be.

While recoiling is a bit melodramatic, it’s clear from this that “dumbness” is not even the key issue at stake. What Stanley is really concerned about is bias or partiality (what he dubs “neutrality anxiety”), which is not unique to “dumb” devices like coins; nor is opacity. A physical coin can be biased, a programmed coin can be fair, and at first glance the fairness of a physical coin is not really any more obvious.

Yet this is the argument Stanley uses to justify his proposed requirement that all smart device code be open to the public for scrutiny going forward. Based on a knee-jerk commitment to transparency, he gives zero weight to the social benefit of allowing software creators a measure of trade secrecy, especially as a potential substitute for patent and copyright protections. This is all the more ironic given that Volkswagen used existing copyright law to hide its own malfeasance.

More importantly, the idea that the only way to check a virtual coin is to look at the source code is a serious non sequitur. After all, in-use testing was how Volkswagen was actually caught in the end. What matters, in other words, is how the coin behaves in large and varied samples. In either the virtual or the physical case, the best and least intrusive way to check a coin is to simply do thousands of flips. But what takes hours with a dumb coin takes a fraction of a second with a virtual coin. So I know which I prefer.
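The behavioral check is easy to automate. A minimal sketch (function names and the statistical threshold are my own, purely for illustration): flip the black-box coin many times and flag it if the heads count strays implausibly far from half.

```python
import random

def looks_fair(flip, n=100_000, z=4.0):
    """Black-box fairness check: flip the coin n times and accept it
    only if the heads count is within z standard deviations of n/2."""
    heads = sum(flip() for _ in range(n))
    # For a fair coin, heads ~ Binomial(n, 0.5): mean n/2, std dev sqrt(n)/2.
    return abs(heads - n / 2) <= z * (n ** 0.5) / 2

fair_coin = lambda: random.random() < 0.5
biased_coin = lambda: random.random() < 0.55  # 55% heads

print(looks_fair(fair_coin))    # almost always True
print(looks_fair(biased_coin))  # almost always False
```

No source code required: a coin rigged to come up heads 55% of the time lands thousands of flips outside the plausible band, while a fair implementation sails through, which is exactly the in-use-testing point.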


I recently finished Learning by Doing: The Real Connection between Innovation, Wages, and Wealth, by James Bessen of the Boston University Law School. It’s a good book to check out if you are worried about whether workers will be able to weather this latest wave of technological innovation. One of the key insights of Bessen’s book is that, as with previous periods of turbulent technological change, today’s workers and businesses will obviously need to find ways to adapt to rapidly changing marketplace realities brought on by the Information Revolution, robotics, and automated systems.

That sort of adaptation takes time. For technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, large numbers of ordinary workers must acquire new knowledge and skills, Bessen notes. But “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture.” (p. 223) That is not a reason to resist disruptive forms of technological change, however. To the contrary, Bessen says, it is crucial to allow ongoing trial-and-error experimentation and innovation to continue precisely because it represents a learning process that helps people (and workers in particular) adapt to changing circumstances and acquire new skills to deal with them. That, in a nutshell, is “learning by doing.” As he elaborates elsewhere in the book:

Major new technologies become ‘revolutionary’ only after a long process of learning by doing and incremental improvement. Having the breakthrough idea is not enough. But learning through experience and experimentation is expensive and slow. Experimentation involves a search for productive techniques: testing and eliminating bad techniques in order to find good ones. This means that workers and equipment typically operate for extended periods at low levels of productivity using poor techniques and are able to eliminate those poor practices only when they find something better. (p. 50)

Luckily, however, history also suggests that, time and time again, that process has happened and the standard of living for workers and average citizens alike improved at the same time.

I wanted to draw your attention to yet another spectacular speech by Maureen K. Ohlhausen, a Commissioner with the Federal Trade Commission (FTC). I have written here before about Commissioner Ohlhausen’s outstanding speeches, but this latest one might be her best yet.

On Tuesday, Ohlhausen spoke at a day-long U.S. Chamber of Commerce Foundation event on “The Internet of Everything: Data, Networks and Opportunities.” The conference featured various keynote speakers and panels discussing “the many ways that data and Internet connectivity are changing the face of business and society.” (It was my honor to also be invited to deliver an address to the crowd that day.)

As with many of her other recent addresses, Commissioner Ohlhausen stressed why it is so important that policymakers “approach new technologies and new business models with regulatory humility.” Building on the work of the great Austrian economist F.A. Hayek, who won a Nobel prize in part for his work explaining the limits of our knowledge to plan societies and economies, Ohlhausen argues that:

Tech Policy Threat Matrix

September 24, 2015

On the whiteboard that hangs in my office, I have a giant matrix of technology policy issues and the various policy “threat vectors” that might end up driving regulation of particular technologies or sectors. My colleagues at the Mercatus Center’s Technology Policy Program and I constantly revise this list of policy priorities and simultaneously make an (obviously quite subjective) attempt to put some weights on the potential policy severity associated with each threat of intervention. The matrix looks like this:

 

Tech Policy Issue Matrix 2015

I use five general policy concerns when considering the likelihood of regulatory intervention in any given area. They are:

  1. privacy (reputation issues, fear of “profiling” & “discrimination,” amorphous psychological / cognitive harms);
  2. safety (health & physical safety or, alternatively, child safety and speech / cultural concerns);
  3. security (hacking, cybersecurity, law enforcement issues);
  4. economic disruption (automation, job dislocation, sectoral disruptions); and,
  5. intellectual property (copyright and patent issues).
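A matrix like this is easy to mock up as a simple data structure. A hypothetical sketch (the two issues and all severity weights below are invented for illustration; they are not the actual entries on my whiteboard):

```python
# Each technology area gets a subjective 0-3 severity weight for each
# of the five policy concerns listed above.
CONCERNS = ["privacy", "safety", "security",
            "economic disruption", "intellectual property"]

threat_matrix = {
    "drones":          {"privacy": 3, "safety": 3, "security": 2,
                        "economic disruption": 1, "intellectual property": 0},
    "driverless cars": {"privacy": 2, "safety": 3, "security": 3,
                        "economic disruption": 2, "intellectual property": 0},
}

def total_threat(issue):
    """Crude overall threat score: the sum of the per-concern weights."""
    return sum(threat_matrix[issue].values())

for issue in threat_matrix:
    print(issue, total_threat(issue))
```

Summing the weights is the bluntest possible aggregation; the real exercise is the arguing over the individual cell values, which is why the whiteboard version gets revised constantly.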


Make sure to watch this terrific little MR University video featuring my Mercatus Center colleague Don Boudreaux discussing what fueled the “Orgy of Innovation” we have witnessed over the past century. Don brings in one of our mutual heroes, the economic historian Deirdre McCloskey, who has coined the term “innovationism” to describe the phenomenal rise in innovation over the past couple hundred years. As I have noted in my essay on “Embracing a Culture of Permissionless Innovation,” McCloskey’s work highlights the essential role that values—cultural attitudes, social norms, and political pronouncements—have played in influencing opportunities for entrepreneurialism, innovation, and long-term growth. Watch Don’s video for more details: