Articles by Will Rinehart
Will Rinehart is Senior Research Fellow at the Center for Growth and Opportunity at Utah State University, where he specializes in telecommunication, Internet, and data policy, with a focus on emerging technologies and innovation. Rinehart previously worked at the American Action Forum and TechFreedom. He was also previously the Director of Operations at the International Center for Law & Economics. He can be contacted at will at thecgo dot org.
To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to encounter another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMDs).
O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,
Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.
O’Neil is explicit about laying the blame at the feet of the WMDs: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is in ascribing too much agency to algorithms in places, and in doing so it misses the broader politics behind algorithmic decision making. Continue reading →
National Public Radio, the Robert Wood Johnson Foundation, and the Harvard T.H. Chan School of Public Health just published a new report on “Life in Rural America.” This survey of 1,300 adults living in the rural United States has a lot to say about health issues, population change, the strengths and challenges for rural communities, as well as discrimination and drug use. But I wanted to highlight two questions related to rural broadband development that might make you update your beliefs about massive rural investment. Continue reading →
Many are understandably pessimistic about platforms and technology. This year has been a tough one, from Cambridge Analytica and Russian trolls to the implementation of GDPR and data breaches galore.
Those who think about the world, about the problems we see every day, and about their own place in it will quickly realize the immense frailty of humankind. Fear and worry make sense. We are flawed, each one of us. And technology only seems to exacerbate those problems.
But life is getting better. Poverty continues to nose-dive; adult literacy is at an all-time high; people around the world are living longer, living in democracies, and are better educated than at any other time in history. Meanwhile, the digital revolution has produced informational abundance, helping to correct the informational asymmetries that have long plagued humankind. The problem we now face is not how to address informational constraints, but how to provide the means for people to sort through and make sense of this abundant trove of data. These macro trends don’t make headlines. Psychologists know that people love to read negative articles. Our brains are wired for pessimism. Continue reading →
Last week, I had the honor of being a panelist at the Information Technology and Innovation Foundation’s event on the future of privacy regulation. The debate question was simple enough: Should the US copy the EU’s new privacy law?
When we started planning the event, California’s Consumer Privacy Act (CCPA) wasn’t a done deal. But now that it has passed, with a 2020 deadline for implementation, the terms of the privacy conversation have changed. Next year, 2019, Congress will have the opportunity to pass a law that could supersede the CCPA, and some are looking to the EU’s General Data Protection Regulation (GDPR) for guidance. Here are some reasons for not taking that path. Continue reading →
Reading Professor Siva Vaidhyanathan’s recent op-ed in the New York Times, one could reasonably assume that Facebook is now seriously tackling the enormous problem of dangerous information. In detailing his takeaways from a recent hearing with Facebook’s COO Sheryl Sandberg and Twitter CEO Jack Dorsey, Vaidhyanathan explained,
Ms. Sandberg wants us to see this as success. A number so large must mean Facebook is doing something right. Facebook’s machines are determining patterns of origin and content among these pages and quickly quashing them.
Still, we judge exterminators not by the number of roaches they kill, but by the number that survive. If 3 percent of 2.2 billion active users are fake at any time, that’s still 66 million sources of potentially false or dangerous information.
One thing is clear about this arms race: It is an absurd battle of machine against machine. One set of machines creates the fake accounts. Another deletes them. This happens millions of times every month. No group of human beings has the time to create millions, let alone billions, of accounts on Facebook by hand. People have been running computer scripts to automate the registration process. That means Facebook’s machines detect the fakes rather easily. (Facebook says that fewer than 1.5 percent of the fakes were identified by users.)
But it could be that, in its zeal to tamp down criticism from all sides, Facebook has overcorrected and is now over-moderating. The fundamental problem is that it is nearly impossible to know the true amount of disinformation on a platform. For one, there is little agreement on what kind of content needs to be policed. It is doubtful everyone would agree on what constitutes fake news, what separates it from disinformation or propaganda, and how all of that differs from hate speech. But more fundamentally, even if everyone agreed on what should be taken down, it is still not clear that algorithmic filtering methods could perfectly approximate that judgment. Continue reading →
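The tradeoff is easy to see with some back-of-the-envelope arithmetic. The sketch below takes the 2.2 billion active users and 3 percent fake share from the op-ed quoted above; the detection recall and false-positive rate are invented purely for illustration, not Facebook’s actual figures.

```python
# Back-of-the-envelope sketch of the moderation tradeoff.
# Only the user count and fake share come from the op-ed quoted above;
# the recall and false-positive rate are illustrative assumptions.

users = 2_200_000_000          # active accounts (from the op-ed)
fake_share = 0.03              # 3% fake at any time (from the op-ed)
recall = 0.95                  # share of fakes the filter catches (assumed)
false_positive_rate = 0.01     # share of real accounts wrongly flagged (assumed)

fake = users * fake_share
real = users - fake

fakes_missed = fake * (1 - recall)          # under-moderation
real_flagged = real * false_positive_rate   # over-moderation

print(f"Fakes that survive:    {fakes_missed:,.0f}")
print(f"Real accounts flagged: {real_flagged:,.0f}")
```

Even under these generous assumptions, a filter that catches 95 percent of fakes still leaves millions of them in place while wrongly flagging tens of millions of legitimate accounts, which is the over- and under-moderation bind in a nutshell.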
Privacy is an essentially contested concept. It evades a clear definition, and when it is defined, scholars do so inconsistently. So what are we to do with this fractured term? Ryan Hagemann suggests a bottom-up approach. Instead of beginning from definitions, we should be building a folksonomy of privacy harms:
By recognizing those areas in which we have an interest in privacy, we can better formalize an understanding of when and how it should be prioritized in relation to other values. By differentiating the harms that can materialize when it is violated by government as opposed to private actors, we can more appropriately understand the costs and benefits in different situations.
Hagemann aims to route around definitional problems by exploring the spaces where our interests intersect with the concept of privacy, in our relations to government, to private firms, and to other people. It is a subtle but important shift in outlook that is worth exploring. Continue reading →
Dan Wang has a new post titled “How Technology Grows (a restatement of definite optimism),” and it is characteristically good. For tech policy wonks and policymakers, put it in your queue. The essay clocks in at 7,500 words, but there’s a lot to glean from the piece. Indeed, he puts into words a number of ideas I’ve been wanting to write about. To set the stage, he begins by defining what we mean by technology:
Technology should be understood in three distinct forms: as processes embedded into tools (like pots, pans, and stoves); explicit instructions (like recipes); and as process knowledge, or what we can also refer to as tacit knowledge, know-how, and technical experience. Process knowledge is the kind of knowledge that’s hard to write down as an instruction. You can give someone a well-equipped kitchen and an extraordinarily detailed recipe, but unless he already has some cooking experience, we shouldn’t expect him to prepare a great dish.
As he rightly points out, the United States has, for various reasons, set aside its focus on process knowledge. This is especially evident in our manufacturing base:
When firms and factories go away, the accumulated process knowledge disappears as well. Industrial experience, scaling expertise, and all the things that come with learning-by-doing will decay. I visited Germany earlier this year to talk to people in industry. One point Germans kept bringing up was that the US has de-industrialized itself and scattered its production networks. While Germany responded to globalization by moving up the value chain, the US manufacturing base mostly responded by abandoning production.
The US is an outlier among rich countries when it comes to manufacturing exports, and that is a gap worth closing. Continue reading →
A curious thing happened last week. Facebook’s stock, which had seemed to weather the 2018 controversies, took a beating.
In the Washington Post, Craig Timberg and Elizabeth Dwoskin explained that the stock market drop was representative of a larger wave:
The cost of years of privacy missteps finally caught up with Facebook this week, sending its market value down more than $100 billion Thursday in the largest single-day drop in value in Wall Street history.
Jeff Chester of the Center for Digital Democracy piled on, describing the drop as “a privacy wake-up call that the markets are delivering to Mark Zuckerberg.”
But the downward pressure was driven by more fundamental changes. Simply put, Facebook missed its earnings target. But it is important to peer into why the company didn’t meet those targets. Continue reading →
In cleaning up my desk this weekend, I chanced upon an old notebook and, as many times before, began to transcribe the notes. It was short, so I got to the end within a couple of minutes. The last page was scribbled with the German term Öffentlichkeit (public sphere), a couple of sentences on Hannah Arendt, and a paragraph about Norberto Bobbio’s view of public and private.
Then I remembered. Yep. This is the missing notebook from a class on democracy in the digital age.
Serendipitously, a couple of hours later, William Freeland alerted me to Franklin Foer’s newest piece in The Atlantic titled “The Death of the Public Square.” Foer is the author of “World Without Mind: The Existential Threat of Big Tech,” and if you want a good take on that book, check out Adam Thierer’s review in Reason.
Much like the book, this Atlantic piece wades into techno ruin porn but focuses instead on the public sphere: Continue reading →
The Supreme Court is winding down for the year, and last week it put out a much-awaited decision in Ohio v. American Express. Some have sounded the alarm over this case, but I think caution is worthwhile. In short, the Court’s analysis wasn’t expansive, as some have claimed, but incomplete. There are a lot of important details to this case, and the guideposts it has provided will likely be fought over in future litigation on platform regulation. To narrow the scope of this post, I am going to focus on the market definition question and the issue of two-sided platforms in light of developments in the industrial organization (IO) literature over the past two decades. Continue reading →