The endless apocalyptic rhetoric surrounding Net Neutrality and many other tech policy debates proves there’s no downside to gloom-and-doomism as a rhetorical strategy. Being a techno-Jeremiah nets one enormous media exposure, and even when such a person has been shown to be laughably wrong, the press comes back for more. Not only is there no penalty for hyper-pessimistic punditry, but the press actually furthers the cause of such “fear entrepreneurs” by repeatedly showering them with attention and letting them double down on their doomsday-ism. Bad news sells, for both the pundit and the press.
But what is most remarkable is that the press continues to label these preachers of the techno-apocalypse as “experts” despite a track record of failed predictions. I suppose it’s because, despite all the failed predictions, they are viewed as thoughtful & well-intentioned. It is another reminder that John Stuart Mill’s 1828 observation still holds true today: “I have observed that not the man who hopes when others despair, but the man who despairs when others hope, is admired by a large class of persons as a sage.”
This essay originally appeared on The Bridge on September 25, 2019.
It is quickly becoming one of the iron laws of technology policy that when policymakers attempt to address one problem (like privacy, security, safety, or competition), they often open up a different problem on another front. Trying to regulate to protect online safety, for example, might give rise to privacy concerns, or vice versa. Or taking steps to address online privacy through new regulations might create barriers to entry, thus hurting online competition.
In a sense, this is simply a restatement of the law of unintended consequences. But it seems to be occurring with greater regularity in technology policy today, and it serves as another good reminder of why humility is essential when considering new regulations for fast-moving sectors.
Consider a few examples.
Privacy vs security & competition
Many US states and the federal government are considering data privacy regulations in the vein of the European Union’s wide-reaching General Data Protection Regulation (GDPR). But as early experiences with the GDPR and various state efforts can attest, regulations aimed at boosting consumer privacy often butt up against other security and competition concerns.
You won’t find President Trump agreeing with Hillary Clinton and Barack Obama on many issues, but the need for occupational licensing reform is one major exception. They, along with many other politicians and academics both Left and Right, have identified how state and local “licenses to work” restrict workers’ opportunities and mobility while driving up prices for consumers.
Of course, not everybody has to agree with high-profile Democrats and Republicans, but let’s at least welcome the chance to discuss something important without defaulting to our partisan bunkers.
This past week, for example, ThinkProgress published an article titled “Koch Brothers’ anti-government group promotes allowing unlicensed, untrained cosmetologists.” Centered around an Americans for Prosperity video highlighting the ways in which occupational licensing reform could lower some of the barriers that prevent people from bettering their lives, the article painted a picture of an ideologically driven, right-wing movement.
Jaron Lanier was featured in a recent New York Times op-ed explaining why people should get paid for their data. Under this scheme, he estimates that the total value of data for a four-person household could fetch around $20,000.
Let’s do the math on that.
Data from eMarketer finds that users spend about an hour and fifteen minutes per day on social media, for a total of 456.25 hours per year. Thus, by Lanier’s estimate, that $20,000 works out to $5,000 per person per year, or about $10.95 per hour of social media use. That’s not too bad!
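To make the back-of-the-envelope math explicit, here is a minimal sketch of the calculation. It assumes (my assumptions, not Lanier’s) that the $20,000 figure is annual and is split evenly across the four household members:

```python
# Back-of-the-envelope check of the implied hourly "data wage."
# Assumptions (mine, not Lanier's): the $20,000 is an annual figure and is
# split evenly across the four household members.
HOUSEHOLD_VALUE_PER_YEAR = 20_000   # Lanier's estimate for a four-person household
PEOPLE_PER_HOUSEHOLD = 4
MINUTES_PER_DAY = 75                # eMarketer: ~1 hour 15 minutes of social media per day

hours_per_year = MINUTES_PER_DAY / 60 * 365                          # 456.25 hours
value_per_person = HOUSEHOLD_VALUE_PER_YEAR / PEOPLE_PER_HOUSEHOLD   # $5,000
hourly_rate = value_per_person / hours_per_year                      # roughly $11 per hour

print(f"{hours_per_year:.2f} hours/year; about ${hourly_rate:.2f} per hour")
```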
By any measure, however, the estimate is high. Since I have written extensively on this subject (see this, this, and this), I thought it might be helpful to explain the four general methods used to value an intangible like data. They include income methods, market rates, cost methods, and finally, shadow prices.
Last month, Senator and presidential candidate Elizabeth Warren released a campaign document, Plan for Rural America. The lion’s share of the plan proposes government-funded and -operated health care and broadband. The broadband section proposes raising $85 billion (from taxes?) to fund rural broadband grants to governments and nonprofits. The Senator then placed a Washington Post op-ed decrying the state of rural telecommunications in America.
While it’s commendable she has a plan, it doesn’t materially improve upon existing, flawed rural telecom subsidy programs, which receive only brief mention. In particular, the Plan places an unwarranted faith in the power of government telecom subsidies, despite red flags about their efficacy. The op-ed misdiagnoses rural broadband problems and somehow lays decades of real and perceived failure of government policy at the feet of the current Trump FCC, and Chairman Pai in particular.
As a result, the proposals–more public money, more government telecom programs–are the wrong treatment. The Senator’s plan to wire every household is undermined by “the 2% problem”–the cost to build infrastructure to the most remote homes is massive.
Other candidates (and perhaps President Trump) will come out with rural broadband plans, so it’s worth diving into the issue. Doubling down on a 20-year-old government policy–more subsidies to more providers–will mostly just entrench the current costly system.
How dire is the problem?
Somewhere around 6% of Americans (about 20 million people) are unserved by a 25 Mbps landline connection. But that means around 94% of Americans have access to 25 Mbps landline broadband. (Millions more have access if you include broadband from cellular and WISP providers.)
Further, rural buildout has been improving for years, despite the high costs. From 2013 to 2017, under the Obama and Trump FCCs, landline broadband providers covered around 3 or 4 million new rural customers annually. This growth in coverage seems to be driven by unsubsidized carriers because, as I found in Montana, FCC-subsidized telecom companies in rural areas are losing subscribers, even as universal service subsidies increased.
This rural buildout is more impressive when you consider that most people who don’t subscribe today simply don’t want Internet access. Somewhere between 55% and 80% of nonadopters don’t want it, according to Department of Commerce and Pew surveys. The fact is, millions of rural homes are being connected annually even though most of those nonadopters don’t want the service.
These are the core problems for rural telecom: (1) poorly-designed, overlapping, and expensive programs and (2) millions of consumers who are uninterested in subscribing to broadband.
Tens of billions for government-operated networks
The proposed new $85 billion rural broadband fund gets most of the headlines. It resembles the current universal service programs–the fund would disburse grants to providers, except the grants would be restricted to nonprofit and government operators of networks. Most significant: Senator Warren promises in her Plan for Rural America that, as President, she will “make sure every home in America has a fiber broadband connection.”
Every home?
This fiber-to-every-farm idea had advocates 10 years ago. The idea has failed to gain traction because it runs into the punishing economics of building networks.
Costs rise non-linearly for the last few percent of households and $85 billion would bring fiber only to a small sliver of US households. According to estimates from the Obama FCC, it would cost $40 billion to build fiber to the final 2% of households. Further, the network serving those 2% of households would require an annual subsidy of $2 billion simply to maintain those networks since revenues are never expected to cover ongoing costs.
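To see what those figures imply per home, here is a rough, hypothetical sketch. The household count is my own assumption (roughly 125 million US households), not a number from the Plan or the FCC; the dollar figures are the Obama FCC estimates cited above:

```python
# A rough, hypothetical sketch of the "2% problem" arithmetic.
# Assumption (mine): roughly 125 million US households.
US_HOUSEHOLDS = 125_000_000
FINAL_SHARE = 0.02                    # the hardest-to-reach 2% of households
BUILD_COST_ESTIMATE = 40_000_000_000  # FCC estimate to build fiber to that final 2%
ANNUAL_SUBSIDY = 2_000_000_000        # ongoing yearly subsidy to keep those networks running

final_households = US_HOUSEHOLDS * FINAL_SHARE                 # ~2.5 million homes
build_cost_per_home = BUILD_COST_ESTIMATE / final_households   # ~$16,000 per home
subsidy_per_home = ANNUAL_SUBSIDY / final_households           # ~$800 per home, every year

print(f"{final_households:,.0f} homes; about ${build_cost_per_home:,.0f} to build each; "
      f"about ${subsidy_per_home:,.0f} per home per year thereafter")
```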
Recent history suggests rapidly diminishing returns and that $85 billion of taxpayer money will be misspent. If the economics weren’t difficult enough, real-world politics and government inefficiency also degrade lofty government broadband plans. For example, Australia’s construction of a nationwide publicly-owned fiber network–the nation’s largest-ever infrastructure project–is billions over budget and years behind schedule. The RUS broadband grant debacle in the US only supports the case that $85 billion simply won’t go that far. As Will Rinehart says, the profit motive is not the cause of rural broadband problems. Government funding fixes neither the economics nor the problem of government efficacy.
Studies will probably come out saying it can be done more cheaply, but America has been running a similar experiment for 20 years. Since 1998, as economists Scott Wallsten and Lucía Gamboa point out, the US government has spent around $100 billion on rural telecommunications. What does that $100 billion get? Mostly maintenance of existing rural networks and about a 2% increase in phone adoption.
Would the Plan improve or repurpose the current programs and funding? We don’t know. The op-ed from Sen. Warren complains that:
the federal government has shoveled more than a billion in taxpayer dollars per year to private ISPs to expand broadband to remote areas, but these providers have done the bare minimum with these resources.
This understates the problem. The federal government “shovels” not $1 billion but about $5 billion annually to providers in rural areas, mostly from the Universal Service Fund that Congress established in 1996.
As for the “public option for broadband”–extensive construction of publicly-run broadband networks–I’m skeptical. Broadband is not like a traditional utility. Unlike an electricity, water, or sewer utility, a city-run broadband network doesn’t have a captive customer base. There are private operators out there.
As a result, public operation of networks is a risky way to spend public funds. Public and public-private operation of networks often leads to financial distress and bankruptcy, as residents in Provo, Lake County, Kentucky, and Australia can attest.
Rural Telecom Reform
I’m glad Sen. Warren raised the issue of rural broadband, but the Plan’s drafters seem uninterested in digging into the extent of the problem or into solutions aside from throwing good money after bad. Lawmakers should focus on fixing the multi-billion dollar programs already in existence at the FCC and Ag Department, which are inexplicably complex, expensive to administer, and inequitable toward their ostensible beneficiaries.
Why, for instance, did rural telecom subsidies work out to about $11 per rural household in Sen. Warren’s Massachusetts in 2016, while they were about $2,000 per rural household in Alaska?
Alabama and Mississippi have similar geographies and rural populations. So why did rural households in Alabama receive only about 20% of what rural Mississippi households received?
Why have administrative costs as a percentage of the Universal Service Fund more than doubled since 1998? It costs $200 million annually to administer the USF programs today. (Compare to the FCC’s $333 million total budget request to Congress in FY 2019 for everything else the FCC does.)
I’ve written about reforms under existing law, like OTARD rule reform–letting consumers freely install small, outdoor antennas to bring broadband to rural areas–and transforming the current program funds into rural broadband vouchers. There’s also a role for cities and counties to help buildout by constructing long-lasting infrastructure like poles, towers, and fiber conduit. These assets could be leased out at low cost to providers.
Conclusion
After years of planning, the FCC reformed some of the rural telecom programs in 2017. However, the reforms are partial, and it’s too early to evaluate the results. The foundational problem is with the structure of existing programs. Fixing that structure should be a priority for any Senator or President concerned about rural broadband. Broadband vouchers for rural households would fix many of the problems, but lawmakers first need to question the universal service framework established over 20 years ago. There are many signs it’s not fit for purpose.
Anytime someone proposes a top-down, government-directed “plan for journalism,” we should be a little wary. Journalism should not be treated like it’s a New Deal-era public works program or a struggling business sector requiring bailouts or an industrial policy plan.
Such ideas are both dangerous and unnecessary. Journalism is still thriving in America, and people have more access to more news content than ever before. The news business faces serious challenges and upheaval, but that does not mean central planning for journalism makes sense.
Unfortunately, some politicians and academics are once again insisting we need government action to “save journalism.” Senator and presidential candidate Bernie Sanders (I-VT) recently penned an op-ed for the Columbia Journalism Review that adds media consolidation and lack of union representation to the parade of horrors that is apparently destroying journalism. And a recent University of Chicago report warns that “digital platforms” like Facebook and Google “present formidable new threats to the news media that market forces, left to their own devices, will not be sufficient” to continue providing high-quality journalism.
Critics of the current media landscape are quick to offer policy interventions. “The Sanders scheme would add layers of regulatory supervision to the news business,” notes media critic Jack Shafer. Sanders promises to prevent or rollback media mergers, increase regulations on who can own what kinds of platforms, flex antitrust muscles against online distributors, and extend privileges to those employed by media outlets. The academics who penned the University of Chicago report recommend public funding for journalism, regulations that “ensure necessary transparency regarding information flows and algorithms,” and rolling back liability protections for platforms afforded through Section 230 of the Communications Decency Act.
Both plans feature government subsidies, too. Sen. Sanders proposes “taxing targeted ads and using the revenue to fund nonprofit civic-minded media” as part of a broader effort “to substantially increase funding for programs that support public media’s news-gathering operations at the local level.” The Chicago plan proposes a taxpayer-funded $50 media voucher that each citizen could then spend on an eligible media operation of their choice. Such ideas have been floated before, and the problems are still numerous. Apparently, “saving journalism” requires that media be placed on the public dole and become a ward of the state. Socializing media in order to save it seems like a bad plan in a country that cherishes the First Amendment.
Imagine a competition to design the most onerous and destructive economic regulation ever conceived. A mandate that would make all other mandates blush with embarrassment for not being burdensome or costly enough. What would that Worst Regulation Ever look like?
Unfortunately, Bill de Blasio has just floated a few proposals that could take first and second place prize in that hypothetical contest. In a new Wired essay, the New York City mayor and 2020 Democratic presidential candidate explains, “Why American Workers Need to Be Protected From Automation,” and aims to accomplish that through a new agency with vast enforcement powers, and a new tax.
Taken together, these ideas represent one of the most radical regulatory plans any American politician has yet concocted.
Politicians, academics, and many others have been panicking over automation at least since the days when the Luddites were smashing machines in protest over growing factory mechanization. With the growth of more sophisticated forms of robotics, artificial intelligence, and workplace automation today, there has been a resurgence of these fears and a renewed push for sweeping regulations to throw a wrench in the gears of progress. Mayor de Blasio is looking to outflank his fellow Democratic candidates for president with an anti-automation plan that may be the most extreme proposal of its kind.
The Technology Liberation Front just marked its 15th year in existence. That’s a long time in the blogosphere. (I’ve only been writing at TLF since 2012 so I’m still the new guy.)
Everything from Bitcoin to net neutrality to long-form pieces about technology and society was featured and debated here years before these topics hit the political mainstream.
Thank you to our contributors and our regular readers. Here are the most-read tech policy posts from TLF in the past 15 years (I’ve omitted some popular but non-tech policy posts).
Today is a bit of a banner day for Bitcoin. It was five years ago today that Bitcoin was first described in a paper by Satoshi Nakamoto. And today the New York Times has finally run a profile of the cryptocurrency in its “paper of record” pages. In addition, TIME’s cover story this week is about the “deep web” and how Tor and Bitcoin facilitate it.
The fact is that Bitcoin is inching its way into the mainstream.
There is no doubt that FTTH is a cool technology, but the love of a particular technology should not blind one to the economics. After some brief background, this blog post will investigate fiber from three perspectives: (1) the bandwidth requirements of web applications, (2) the cost of deployment, and (3) substitutes and alternatives. Finally, it discusses the notion of fiber as future-proof.
Each year I am contacted by dozens of people who are looking to break into the field of information technology policy as a think tank analyst, a research fellow at an academic institution, or even as an activist. Some of the people who contact me I already know; most of them I don’t. Some are free-marketeers, but a surprising number of them are independent analysts or even activist-minded Lefties. Some of them are students; others are current professionals looking to change fields (usually because they are stuck in a boring job that doesn’t let them channel their intellectual energies in a positive way). Some are lawyers; others are economists; and a growing number are computer science or engineering grads. In sum, it’s a crazy assortment of inquiries I get from people, unified only by their shared desire to move into this exciting field of public policy.
. . . Unfortunately, there’s only so much time in the day and I am sometimes not able to get back to all of them. I always feel bad about that, so this essay is an effort to gather my thoughts and advice and put it all in one place . . . .
So, how can we determine whether watching depictions of violence will turn us all into killing machines, rapists, robbers, or just plain ol’ desensitized thugs? Well, how about looking at the real world! Whatever lab experiments might suggest, the evidence of a link between depictions of violence in media and the real-world equivalent just does not show up in the data. The FBI produces ongoing Crime in the United States reports that document violent crimes trends. Here’s what the data tells us about overall violent crime, forcible rape, and juvenile violent crime rates over the past two decades: They have all fallen. Perhaps most impressively, the juvenile crime rate has fallen an astonishing 36% since 1995 (and the juvenile murder rate has plummeted by 62%).
I’m getting married next Spring, and I’m currently negotiating the contract with our photographer. The photography business is weird because even though customers typically pay hundreds, if not thousands, of dollars up front to have photos taken at their weddings, the copyright in the photographs is typically retained by the photographer, and customers have to go hat in hand to the photographer and pay still more money for the privilege of getting copies of their photographs.
A common question among smart Bitcoin skeptics is, “Why would one use Bitcoin when you can use dollars or euros, which are more common and more widely accepted?” It’s a fair question, and one I’ve tried to answer by pointing out that if Bitcoin were just a currency (except new and untested), then yes, there would be little reason why one should prefer it to dollars. The fact, however, is that Bitcoin is more than money, as I recently explained in Reason. Bitcoin is better thought of as a payments system, or as a distributed ledger, that (for technical reasons) happens to use a new currency called the bitcoin as the unit of account. As Tim Lee has pointed out, Bitcoin is therefore a platform for innovation, and it is this potential that makes it so valuable.
Advertising is increasingly under attack in Washington. . . . This regulatory tsunami could not come at a worse time, of course, since an attack on advertising is tantamount to an attack on media itself, and media is at a critical point of technological change. As we have pointed out repeatedly, the vast majority of media and content in this country is supported by commercial advertising in one way or another, particularly in the era of “free” content and services.
Reverse engineering the CSS encryption scheme, by itself, isn’t an especially innovative activity. However, what I think Prof. Picker is missing is how important such reverse engineering can be as a pre-condition for subsequent innovation. To illustrate the point, I’d like to offer three examples of companies or open source projects that have forcibly opened a company’s closed architecture, and trace how these have enabled subsequent innovation . . . .
The cycle goes something like this. A new technology appears. Those who fear the sweeping changes brought about by this technology see a sky that is about to fall. These “techno-pessimists” predict the death of the old order (which, ironically, is often a previous generation’s hotly-debated technology that others wanted slowed or stopped). Embracing this new technology, they fear, will result in the overthrow of traditions, beliefs, values, institutions, business models, and much else they hold sacred.
The pollyannas, by contrast, look out at the unfolding landscape and see mostly rainbows in the air. Theirs is a rose-colored world in which the technological revolution du jour is seen as improving the general lot of mankind and bringing about a better order. If something has to give, then the old ways be damned! For such “techno-optimists,” progress means some norms and institutions must adapt—perhaps even disappear—for society to continue its march forward.
Given the rough-and-tumble of real world lawmaking, does the rhetoric of “delicate balancing” merit any place in copyright jurisprudence? The Copyright Act does reflect compromises struck between the various parties that lobby congress and the administration for changes to federal law. A truce among special interests does not and cannot delicately balance all the interests affected by copyright law, however. Not even poetry can license the metaphor, which aggravates copyright’s public choice affliction by endowing the legislative process with more legitimacy than it deserves. To claim that copyright policy strikes a “delicate balance” commits not only legal fiction; it aids and abets a statutory tragedy.
Generally speaking, the cyber-libertarian’s motto is “Live & Let Live” and “Hands Off the Internet!” The cyber-libertarian aims to minimize the scope of state coercion in solving social and economic problems and looks instead to voluntary solutions and mutual consent-based arrangements.
Cyber-libertarians believe true “Internet freedom” is freedom from state action; not freedom for the State to reorder our affairs to supposedly make certain people or groups better off or to improve some amorphous “public interest”—an all-too-convenient facade behind which unaccountable elites can impose their will on the rest of us.
It’s becoming clearer why, for six years out of eight, Obama’s appointed FCC chairmen resisted regulating the Internet with Title II of the 1934 Communications Act. Chairman Wheeler famously did not want to go that legal route. It was only after President Obama and the White House called on the FCC in late 2014 to use Title II that Chairman Wheeler relented. If anything, the hastily-drafted 2015 Open Internet rules provide a new incentive to ISPs to curate the Internet in ways they didn’t want to before.
As I am getting ready to watch the Super Bowl tonight on my amazing 100-inch screen via a Sanyo high-def projector that cost me only $1,600 on eBay, I started thinking back about how much things have evolved (technologically-speaking) over just the past decade. I thought to myself, what sort of technology did I have at my disposal exactly 10 years ago today, on February 1st, 1999? Here’s the miserable snapshot I came up with . . . .
While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity. Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism. . . . Yet, countless studies have shown that regulatory capture has been at work in various arenas: transportation and telecommunications; energy and environmental policy; farming and financial services; and many others.
I spend a lot of time reading books and essays about technology; more specifically, books and essays about technology history and criticism. Yet, I am often struck by how few of the authors of these works even bother defining what they mean by “technology.” . . . Anyway, for what it’s worth, I figured I would create this post to list some of the more interesting definitions of “technology” that I have uncovered in my own research.
Today marks the 15th anniversary of the launch of the Technology Liberation Front. This blog has evolved through the years and served as a home for more than 50 writers who have shared their thoughts about the intersection of technological innovation and public policy.
Many TLF contributors have moved on to start other blogs or write for other publications. Others have gone into other professions where they simply can’t blog anymore. Still others now just publish their daily musings on Twitter, which has had a massive substitution effect on long-form blogging more generally. In any event, I’m pleased that so many of them had a home here at some point over the past 15 years.

What has unified everyone who has written for the TLF is (1) a strong belief in technological innovation as a method of improving the human condition and (2) a corresponding concern about impediments to technological change. Our contributors might best be labeled “rational optimists,” to borrow Matt Ridley’s phrase, or “dynamists,” to use Virginia Postrel’s term. In a recent essay, I sketched out the core tenets of a dynamist, rational optimist worldview, arguing that we:

believe there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment, but also acknowledge the various challenges sometimes associated with technological change;

look forward to a better future and reject overly nostalgic accounts of some supposed “good ol’ days” or bygone better eras;

base our optimism on facts and historical analysis, not on blind faith in any particular viewpoint, ideology, or gut feeling;

support practical, bottom-up solutions to hard problems through ongoing trial-and-error experimentation, but are not wedded to any one process to get the job done;

appreciate entrepreneurs for their willingness to take risks and try new things, but do not engage in hero worship of any particular individual, organization, or technology.

Applying that vision, the contributors here through the years have unabashedly defended a pro-growth, pro-progress, pro-freedom vision, but they have also rejected techno-utopianism or gadget-worship of any sort. Rational optimists are anti-utopians, in fact, because they understand that hard problems can only be solved through ongoing trial and error, not wishful thinking or top-down central planning.
In a new Atlantic essay, Patrick Collison and Tyler Cowen suggest that, “We Need a New Science of Progress,” which, “would study the successful people, organizations, institutions, policies, and cultures that have arisen to date, and it would attempt to concoct policies and prescriptions that would help improve our ability to generate useful progress in the future.” Collison and Cowen refer to this project as Progress Studies.
Is such a field of study possible, and would it really be a “science”? I think the answer is yes, but with some caveats. Even if it proves to be an inexact science, however, the effort is worth undertaking.
Thinking about Progress
Progress Studies is a topic I have spent much of my life thinking and writing about, most recently in my book, Permissionless Innovation as well as a new paper on “Technological Innovation and Economic Growth,” co-authored with James Broughel. My work has argued that nations that are open to risk-taking, trial-and-error experimentation, and technological dynamism (i.e., “permissionless innovation”) are more likely to enjoy sustained economic growth and prosperity than those rooted in precautionary principle thinking and policies (i.e., prior restraints on innovative activities). A forthcoming book of mine on the future of entrepreneurialism and innovation will delve even deeper into these topics and address criticisms of technological advancement.