This is the second in a series of three blog posts about broadband in America, written in response to Susan Crawford's book Captive Audience and her recent blog post responding to positive assessments of America's broadband marketplace in the New York Times. Read the first post here. This post addresses Crawford's claim that every American needs fiber regardless of the cost, and that government should manage the rollout.
It is important to point out that fiber is already present in almost all broadband technologies and has been for years. Not only are backbones built with fiber, but there is fiber to the mobile base station and fiber in cable and DSL networks. In fact, American carriers are already some of the world's biggest buyers of fiber. They made their largest purchase to date in 2011: some 18 million miles of fiber optic cable. In the last few years American firms have bought more fiber optic cable than all of Europe combined.[1]
The debate is about a broadband technology called fiber to the home (FTTH). The question is whether and how to pay for extending fiber from the existing infrastructure, from the curb into the house itself, as it were. Typically it's this last part of the journey that is expensive, given the need to secure rights of way, eminent domain, labor costs, trenching, indoor wiring, and repairs. Subscribers should have a say in whether the cost and disruption are warranted by the price and performance. There is also a question of whether the technology is so essential and proven that the government should pay for it outright, or mandate that carriers provide it.
Fiber in the corporate setting is a different discussion. Many companies use private fiber networks. The fact that a company or large office building concentrates many subscribers paying higher fees has helped fiber become the enterprise broadband choice for many companies. Households don't have the same economics.
There is no doubt that FTTH is a cool technology, but love of a particular technology should not blind one to the economics. After some brief background, this blog post will investigate fiber from three perspectives: (1) the bandwidth requirements of web applications, (2) the cost of deployment, and (3) substitutes and alternatives. Finally, it discusses the notion of fiber as future proof.
Broadband Subscriptions in the OECD
By way of background, the OECD Broadband Portal[2] report from December 2012 notes that the US has 90 million fixed (wired) connections, more than a quarter of the total (327 million) for the 34 nations in the study. On the mobile side, Americans have three times as many mobile broadband subscriptions as fixed ones. The 280 million mobile broadband subscriptions held by Americans account for 35% of the 780 million mobile subscriptions in the OECD. These are the smartphones and other devices Americans use to connect to the internet.
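As a quick sanity check, the shares above follow directly from the subscription counts. Here is a minimal sketch in Python, using only the figures quoted from the OECD report:

```python
# Back-of-the-envelope check of the OECD shares cited above, using only
# the subscription counts from the December 2012 Broadband Portal.
us_fixed, oecd_fixed = 90e6, 327e6      # fixed (wired) broadband subscriptions
us_mobile, oecd_mobile = 280e6, 780e6   # mobile broadband subscriptions

print(f"US share of OECD fixed broadband:  {us_fixed / oecd_fixed:.1%}")   # ~27.5%
print(f"US share of OECD mobile broadband: {us_mobile / oecd_mobile:.1%}") # ~35.9%
print(f"US mobile-to-fixed ratio:          {us_mobile / us_fixed:.1f}x")   # ~3.1x
```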
Continue reading →
I am an American earning an industrial PhD in internet economics in Denmark, one of the countries that law professor Susan Crawford praises in her book Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age. The crisis du jour in America is broadband, and Susan Crawford is echoed by journalists David Carr, John Judis, and Eduardo Porter and by publications such as the New York Times, New Republic, Wired, Bloomberg News, and Huffington Post. One can also read David Cay Johnston's The Fine Print: How Big Companies Use 'Plain English' to Rob You Blind.
It has become fashionable to write that American broadband internet is slow and expensive and that cable and telecom companies are holding back the future, even though the data shows otherwise. We can count on the "America is falling behind" genre of business literature to keep us in a state of alert while it ensures a steady stream of book sales and traffic to news websites.
After six months of pro-Crawford coverage, the New York Times finally published two op-eds[1] that offered a counter view to the "America is falling behind in broadband" mantra. Crawford complained about this in Salon.com and posted a 23-page blog entry on the Roosevelt Institute website to present "the facts," but she didn't mention that the New York Times printed two of her op-eds and featured her in two interviews promoting her book. I read Crawford's book closely, as well as her long blog post, including the references she provides. I address Crawford's charges as questions in four blog posts.
- Do Europeans and East Asians have better and cheaper broadband than Americans?
- Is fiber to the home (FTTH) the network of the future, or are there competing technologies?
- Is there really a cable/mobile duopoly in broadband?
- What is the #1 reason why older Americans use the internet?
For additional critique of the "America is falling behind in broadband" myth, see my 10 Myths and Realities of Broadband. See also the response of one of the op-ed authors whom Crawford criticizes.
How the broadband myth got started
Crawford's book quotes a statistic from Akamai in 2009. That year was the nadir of the average measured connection speed for the US, placing it at #22 and falling. Certainly presenting the number at its worst point strengthens Crawford's case for slow speeds. However, Akamai's State of the Internet report is released quarterly, so there should have been no problem for Crawford to include a more recent figure in time for her book's publication in December 2012. Presently the US ranks #9 on the same measure. Clearly the US is not falling behind if its ranking on average measured speed has steadily improved from 22nd to 9th.
Read More
This week it is our pleasure to welcome Roslyn Layton to the TLF, who will be doing some guest blogging on broadband policy issues. Roslyn Layton is a PhD Fellow who studies internet economics at the Center for Communication, Media, and Information Technologies at Aalborg University in Copenhagen, Denmark. Her program is a partnership between the Danish Department of Research & Innovation, Aalborg University, and Strand Consult, a Danish company. Prior to her current academic position, Roslyn worked in the IT industry in the U.S., India, and Europe. Her personal page is www.RoslynLayton.com.
She'll be rolling out three essays over the course of the week based on her extensive research in this field, including her recent series on "10 Myths and Realities of Broadband Internet in the USA."
What works well as an ethical directive might not work equally well as a policy prescription. Stated differently, what one ought to do in certain situations should not always be synonymous with what one must do by force of law.
I'm going to relate this lesson to tech policy debates in a moment, but let's first think of an example of how this lesson applies more generally. Consider the Ten Commandments. Some of them make excellent ethical guidelines (especially the stuff about not coveting thy neighbor's house, wife, or possessions). But most of us would agree that, in a free and tolerant society, only two of the Ten Commandments make good law: Thou shalt not kill and Thou shalt not steal.
In other words, not every sin should be a crime. Perhaps some should be, but most should not. Taking this out of the realm of religion and into the world of moral philosophy, we can apply the lesson more generally: not every wise ethical principle makes for wise public policy. Continue reading →
Today the Heartland Institute is publishing my policy brief, U.S. Cybersecurity Policy: Problems and Principles, which examines the proper role of government in defending U.S. citizens, organizations, and infrastructure from cyberattacks, that is, criminal theft, vandalism, or outright death and destruction through the use of globally interconnected computer networks.
The hype around the idea of cyberterrorism and cybercrime is fast reaching a point where any skepticism risks being shouted down as willful ignorance of the scope of the problem. So let’s begin by admitting that cybersecurity is a genuine existential challenge. Last year, in what is believed to be the most damaging cyberattack against U.S. interests to date, a large-scale hack of some 30,000 Saudi Arabia-based ARAMCO personal computers erased all data on their hard drives. A militant Islamic group called the Sword of Justice took credit, although U.S. Defense Department analysts believe the government of Iran provided support.
This year, the New York Times and Wall Street Journal have had computer systems hacked, allegedly by agents of the Chinese government looking for information on the newspapers' China sources. In February, the loose-knit hacker group Anonymous claimed credit for a series of hacks of the Federal Reserve Bank, Bank of America, and American Express, targeting documents about salaries and corporate financial policies in an effort to embarrass the institutions. Meanwhile, organized crime rings are testing cybersecurity at banks, universities, government organizations, and any other enterprise that maintains databases containing the names, addresses, Social Security numbers, and credit card numbers of millions of Americans.
These and other reports, aided by popular entertainment that often depicts social breakdown in the face of massive cyberattack, have the White House and Congress scrambling to "do something." This year alone has seen Congressional proposals such as the Cyber Intelligence Sharing and Protection Act (CISPA), the Cybersecurity Act, and a Presidential Executive Order, all aimed at cybersecurity. Common to all three is a drastic increase in the authority and control the federal government would have over the Internet and the information that resides on it, should there be any vaguely defined attack on any vaguely defined critical U.S. information assets.
Continue reading →
Over at The Switch, the Washington Post’s excellent new technology policy blog, Brian Fung has an interesting post about tethering and Google Glass, but I think he perpetuates a common misconception:
Carriers have all sorts of rules about tethering, and sorting through them can be like feeling your way down a dark alley. Verizon used to charge $20 a month for tethering before the FCC ruled it had to allow tethering for free. Now, any data you use comes out of your cellular plan’s overall data allowance. AT&T gives you a separate pool of data for tethering plans, but charges up to $50 a month for the right, much as Verizon once did.
Fung claims that due to the likely increase in tethering as devices like Google Glass come to market, “assuming the FCC didn’t require all wireless carriers to make tethering free, it’d be a huge source of potential revenue for companies like AT&T.”
In fact, the cost of tethering on AT&T is not very different from the cost of doing so on Verizon, which means by definition that AT&T is not likely to get a windfall from increased use of tethering. It’s also evidence that the FCC tethering rule for Verizon doesn’t matter very much.
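To see why the two pricing structures end up costing heavy users about the same, here is a rough sketch. The per-GB price is a hypothetical placeholder, not either carrier's actual rate; only the $50 flat fee comes from the passage quoted above:

```python
# Illustrative comparison of the marginal cost of tethering under the two
# pricing structures described above. The per-GB price is a hypothetical
# placeholder; only the $50 flat fee is quoted in the post.
PER_GB = 15.0          # assumed price of an extra GB of plan data ($)
ATT_TETHER_FEE = 50.0  # flat monthly tethering fee cited above ($)

def allowance_metered(gb):
    """Verizon-style: tethering itself is free, but tethered bytes draw
    down the shared allowance, so the marginal cost is the data price."""
    return gb * PER_GB

def flat_fee(gb):
    """AT&T-style: a flat monthly fee buys a separate pool of tethering data."""
    return ATT_TETHER_FEE

crossover = ATT_TETHER_FEE / PER_GB
print(f"Cost parity at roughly {crossover:.1f} GB of tethered data per month")
for gb in (1, 3, 5):
    print(f"{gb} GB: metered ${allowance_metered(gb):.0f} vs. flat ${flat_fee(gb):.0f}")
```

Under these assumed prices, anyone tethering more than three gigabytes or so per month pays about the same either way, which is one sense in which the FCC's rule changed little.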
Continue reading →
Jerry Ellig, senior research fellow at the Mercatus Center at George Mason University, discusses the FCC's Lifeline assistance benefit, funded through the Universal Service Fund (USF). The program, created in 1997, subsidizes phone services for low-income households. The USF is not funded through the federal budget but via a fee on monthly phone bills, which reached an all-time high of 17% of telecom companies' revenues last year. Ellig discusses the similarities between the USF fee and a tax, how the fee fluctuates, how subsidies to the telecom industry have boomed in recent years, and how to curb the waste, fraud, and abuse that comes as a result of the Lifeline assistance benefit.
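A simple way to see why the fee behaves like a tax: it is a percentage levied on part of the bill and passed through to subscribers. A minimal sketch, assuming a hypothetical $40 of fee-eligible charges on a monthly bill (only the 17% contribution factor comes from the discussion above):

```python
# Illustration of how the USF contribution factor works like a tax on a
# phone bill. The 17% figure is cited above; the $40 of fee-eligible
# charges is a hypothetical example, not an actual bill.
contribution_factor = 0.17   # all-time-high USF rate cited above
eligible_charges = 40.00     # assumed fee-eligible portion of a monthly bill ($)

usf_fee = eligible_charges * contribution_factor
print(f"USF pass-through: ${usf_fee:.2f} per month")     # $6.80
print(f"Annualized:       ${usf_fee * 12:.2f} per year") # $81.60
```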
The 600 MHz spectrum auction "represents the last best chance to promote competition" among mobile wireless service providers, according to the written testimony of a T-Mobile executive who appeared before a congressional subcommittee on July 23 and testified in rhetoric reminiscent of a bygone era.
The idea that an activist Federal Communications Commission is necessary to preserve and promote competition is a throwback to the government-sanctioned Ma Bell monopoly era. Sprint still uses the term "Twin Bells" in its FCC pleadings to refer to AT&T and Verizon Wireless in the hope that, for those who can remember the Bell System, the incantation will elicit a visceral response. The fact is that most of the FCC's efforts to preserve and promote competition have failed, entailed serious collateral damage, or both.
Unless Congress and the FCC get the details right, the implementation of an innovative auction that will free up spectrum currently underutilized for broadcasting and make it available for mobile communications could fail to raise the more than $7 billion needed to build a nationwide public safety network and make a down payment on the national debt. Aside from ensuring that broadcasting is not disrupted in the process, one important detail concerns whether the auction will be open to every qualified bidder, or whether government officials will, in effect, pick winners and losers before the bidding begins. Continue reading →
Nobel laureate Gary Becker and I are on the same page. He says patent terms should be short:
Major reforms to reduce these unproductive opportunities would include lowering typical patent length and the scope of innovations that are eligible for patents. The current patent length of 20 years (longer for drug companies) from the date of filing for a patent can be cut in half without greatly discouraging innovation. One obvious advantage of cutting patent length in half is that the economic cost from the temporary monopoly power given to patent holders would be made much more temporary. In addition, a shorter patent length gives patent holders less of an effective head start in developing follow on patents that can greatly extend the effective length of an original patent.
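Becker's point about the "effective head start" can be made concrete with a toy model: if a holder files a follow-on patent every few years, exclusivity over some part of the technology runs until the last follow-on expires. All of the parameters below are illustrative assumptions, not figures from Becker's post:

```python
# Toy model of how follow-on patents stretch effective exclusivity, and
# how halving the patent term shrinks the stretch. All parameters are
# illustrative assumptions.
def effective_exclusivity(term_years, followon_interval, n_followons):
    """Years from the original filing until the last follow-on patent expires."""
    last_filing_year = followon_interval * n_followons
    return last_filing_year + term_years

for term in (20, 10):
    total = effective_exclusivity(term_years=term, followon_interval=4, n_followons=3)
    print(f"{term}-year term: exclusivity runs ~{total} years past the original filing")
```

With a 20-year term and follow-ons filed every four years, some exclusivity persists for roughly 32 years after the original filing; halving the term cuts that to about 22.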
More importantly, he says we should carve out particularly troublesome areas, like software, from the patent system:
In narrowing the type of innovations that are patentable, one can start by eliminating the patenting of software. Disputes over software patents are among the most common, expensive, and counterproductive. Their exclusion from the patent system would discourage some software innovations, but the saving from litigation costs over disputed patent rights would more than compensate the economy for that cost. Moreover, some software innovations would be encouraged because the inability to patent software will eliminate uncertainty over whether someone else with a similar patent will sue and do battle in the courts.
[…]
In addition to eliminating patents on software, no patents should be allowed on DNA, such as the identification of genes that appear to cause particular diseases. Instead, they should be treated as other scientific discoveries, and be in the public domain. The Supreme Court recently considered a dispute over whether the genes that cause BRCA1 and BRCA2 deviations and greatly raise the risk of breast cancer are patentable. Their ruling banned patenting of human DNA, and this is an important step in the right direction.
Other categories of innovations should also be excluded from the patent system. Essentially, patents should be considered a last resort, not a first resort, to be used only when market-based methods of encouraging innovations are likely to be insufficient, and when litigation costs will be manageable. With such a “minimalist” patent system, patent intermediaries would have a legitimate and possibly important role to play in helping innovators get and protect their patent rights.
It’s good to see a consensus for major reform developing among economists. I hope that legal scholars and policymakers will start to listen.
It was my pleasure last night to take part in an hour-long conversation on “Privacy, Security, and the Digital Age,” which was co-sponsored by Mediaite and the Koch Institute. The discussion focused on a wide range of issues related to government surveillance powers, Big Data, and the future of privacy. It opened with dueling remarks from former U.S. Ambassador to the U.N. John Bolton and Ben Wizner of the ACLU. You can view their respective remarks here.
I then sat on a panel that included Atlantic Media CTO Tom Cochrane and Michael R. Nelson, who is affiliated with Bloomberg Government and Georgetown University. The entire session was expertly moderated by Andrew Kirell of Mediaite. He did an amazing job facilitating the discussion. Anyway, the videos for my panel are below, split into two parts. My comments focused heavily on the importance of separating government uses of data from private sector uses and explaining the need to create a high and tight firewall between State and Industry when it comes to information sharing. I also argued that we will never get a handle on government-related privacy concerns until we get control of the scope of government power. I used the example of the drug war and our government's constantly expanding militaristic activities both abroad and here at home. So long as government is expanding without any rational, constitutional constraint, we are going to have serious surveillance and privacy problems. (See this essay, "It's About Power, not Privacy," by my colleague Eli Dourado for more on that theme.)
Continue reading →