Articles by Will Rinehart

Will Rinehart is Director of Technology and Innovation Policy at the American Action Forum, where he specializes in telecommunications, Internet, and data policy, with a focus on emerging technologies and innovation. Rinehart previously worked at TechFreedom, where he was a Research Fellow, and before that was Director of Operations at the International Center for Law & Economics.


Contemporary tech criticism displays an anti-nostalgia. Instead of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote the old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained that “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”

Here are three short responses.

Recently, Noah Smith explored an emerging question in tech: Is there a kill zone where new and innovative upstarts are being throttled by the biggest players? He explains,

Facebook commissioned a study by consultant Oliver Wyman that concluded that venture investment in the technology sector wasn’t lower than in other sectors, which led Wyman to conclude that there was no kill zone.

But economist Ian Hathaway noted that looking at the overall technology industry was too broad. Examining three specific industry categories — internet retail, internet software and social/platform software, corresponding to the industries dominated by Amazon, Google and Facebook, respectively — Hathaway found that initial venture-capital financings have declined by much more in the past few years than in comparable industries. That suggests the kill zone is real.

A recent paper by economists Wen Wen and Feng Zhu reaches a similar conclusion. Observing that Google has tended to follow Apple in deciding which mobile-app markets to enter, they assessed whether the threat of potential entry by Google (as measured by Apple’s actions) deters innovation by startups making apps for Google’s Android platform. They conclude that when the threat of the platform owner’s entry is higher, fewer app makers will be interested in offering a product for that particular niche. A 2014 paper by the same authors found similar results for Amazon and third-party merchants using its platform.

So, are American tech companies making it difficult for startups? Perhaps, but there are some other reasons to be skeptical.

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to encounter another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMDs).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit about laying the blame at the feet of the WMDs: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is that it ascribes too much agency to algorithms in places, and in doing so misses the broader politics behind algorithmic decision making.

National Public Radio, the Robert Wood Johnson Foundation, and the Harvard T.H. Chan School of Public Health just published a new report on “Life in Rural America.” This survey of 1,300 adults living in the rural United States has a lot to say about health issues, population change, the strengths and challenges for rural communities, as well as discrimination and drug use. But I wanted to highlight two questions related to rural broadband development that might make you update your beliefs about massive rural investment.

Many are understandably pessimistic about platforms and technology. This year has been a tough one, from Cambridge Analytica and Russian trolls to the implementation of GDPR and data breaches galore.

Those who think about the world, about the problems that we see every day, and about their own place in it, will quickly realize the immense frailty of humankind. Fear and worry make sense. We are flawed, each one of us. And technology only seems to exacerbate those problems.

But life is getting better. Poverty continues nose-diving; adult literacy is at an all-time high; people around the world are living longer, living in democracies, and are better educated than at any other time in history. Meanwhile, the digital revolution has produced informational abundance, helping to correct the informational asymmetries that have long plagued humankind. The problem we now face is not how to overcome informational constraints, but how to give people the means to sort through and make sense of this abundant trove of data. These macro trends don’t make headlines, though. Psychologists know that people love to read negative articles; our brains are wired for pessimism.

Last week, I had the honor of being a panelist at the Information Technology and Innovation Foundation’s event on the future of privacy regulation. The debate question was simple enough: Should the US copy the EU’s new privacy law?

When we started planning the event, California’s Consumer Privacy Act (CCPA) wasn’t a done deal. But now that it has passed and presents a deadline of 2020 for implementation, the terms of the privacy conversation have changed. Next year, 2019, Congress will have the opportunity to pass a law that could supersede the CCPA and some are looking to the EU’s General Data Protection Regulation (GDPR) for guidance. Here are some reasons for not taking that path.

Reading professor Siva Vaidhyanathan’s recent op-ed in the New York Times, one could reasonably assume that Facebook is now seriously tackling the enormous problem of dangerous information. In detailing his takeaways from a recent hearing with Facebook’s COO Sheryl Sandberg and Twitter CEO Jack Dorsey, Vaidhyanathan explained,

Ms. Sandberg wants us to see this as success. A number so large must mean Facebook is doing something right. Facebook’s machines are determining patterns of origin and content among these pages and quickly quashing them.

Still, we judge exterminators not by the number of roaches they kill, but by the number that survive. If 3 percent of 2.2 billion active users are fake at any time, that’s still 66 million sources of potentially false or dangerous information.

One thing is clear about this arms race: It is an absurd battle of machine against machine. One set of machines create the fake accounts. Another deletes them. This happens millions of times every month. No group of human beings has the time to create millions, let alone billions, of accounts on Facebook by hand. People have been running computer scripts to automate the registration process. That means Facebook’s machines detect the fakes rather easily. (Facebook says that fewer than 1.5 percent of the fakes were identified by users.)

But it could be that, in its zeal to tamp down criticism from all sides, Facebook has overcorrected and is now over-moderating. The fundamental problem is that it is nearly impossible to know the true amount of disinformation on a platform. For one, there is little agreement on what kind of content needs to be policed. It is doubtful everyone would agree on what constitutes fake news, how it differs from disinformation or propaganda, and how all of that differs from hate speech. More fundamentally, even if everyone agreed on what should be taken down, it is still not clear that algorithmic filtering methods could perfectly approximate that standard.

Privacy is an essentially contested concept. It evades a clear definition, and when scholars do define it, they do so inconsistently. So what are we to do with this fractured term? Ryan Hagemann suggests a bottom-up approach. Instead of beginning from definitions, we should build a folksonomy of privacy harms:

By recognizing those areas in which we have an interest in privacy, we can better formalize an understanding of when and how it should be prioritized in relation to other values. By differentiating the harms that can materialize when it is violated by government as opposed to private actors, we can more appropriately understand the costs and benefits in different situations.

Hagemann aims to route around definitional problems by exploring the spaces where our interests intersect with the concept of privacy, in our relations to government, to private firms, and to other people. It is a subtle but important shift in outlook that is worth exploring.

Dan Wang has a new post titled “How Technology Grows (a restatement of definite optimism),” and it is characteristically good. For tech policy wonks and policymakers, put it in your queue. The essay clocks in at 7,500 words, but there’s a lot to glean from it. Indeed, he puts into words a number of ideas I’ve been wanting to write about. To set the stage, he begins by defining what we mean by technology:

Technology should be understood in three distinct forms: as processes embedded into tools (like pots, pans, and stoves); explicit instructions (like recipes); and as process knowledge, or what we can also refer to as tacit knowledge, know-how, and technical experience. Process knowledge is the kind of knowledge that’s hard to write down as an instruction. You can give someone a well-equipped kitchen and an extraordinarily detailed recipe, but unless he already has some cooking experience, we shouldn’t expect him to prepare a great dish.

As he rightly points out, the United States has, for various reasons, set aside the focus on process knowledge. This is especially evident in our manufacturing base:

When firms and factories go away, the accumulated process knowledge disappears as well. Industrial experience, scaling expertise, and all the things that come with learning-by-doing will decay. I visited Germany earlier this year to talk to people in industry. One point Germans kept bringing up was that the US has de-industrialized itself and scattered its production networks. While Germany responded to globalization by moving up the value chain, the US manufacturing base mostly responded by abandoning production.

The US is an outlier among rich countries when it comes to manufacturing exports. It needs improvement.

A curious thing happened last week. Facebook’s stock, which had seemed to weather the 2018 controversies, took a beating.

In the Washington Post, Craig Timberg and Elizabeth Dwoskin explained that the stock market drop was representative of a larger wave:

The cost of years of privacy missteps finally caught up with Facebook this week, sending its market value down more than $100 billion Thursday in the largest single-day drop in value in Wall Street history.

Jeff Chester of the Center for Digital Democracy piled on, describing the drop as “a privacy wake-up call that the markets are delivering to Mark Zuckerberg.”

But the downward pressure was driven by more fundamental changes. Simply put, Facebook missed its earnings target. But it is important to examine why the company didn’t meet those targets.