Will Rinehart – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

The Entrepreneurial State, Some Brief Comments
October 16, 2019 | https://techliberation.com/2019/10/16/the-entrepreneurial-state-some-brief-comments/

Economist Mariana Mazzucato has a full spread in Wired UK humbly suggesting that she “has a plan to fix capitalism.” The plan is an outgrowth of her 2013 book The Entrepreneurial State, which contends that government involvement in research and development (R&D), loans, and other business subsidies, not the private sector, is the true driver of innovation. Her plan is simple: governments need to do better at funding innovation.

It goes without saying that the government is massively involved in innovation, and for good reason. Open any introductory economics text and you’re likely to see the argument for why. Private actors are short-sighted and often fail to plan for the long term by investing in R&D that will lead to technological progress. Basic research also might lead to advances or products outside of a company’s niche. Knowing that they won’t be able to capture all of the gains from research, private entities will choose a lower level of investment than is optimal, leading to a market failure. Governments solve this market failure by allocating resources to expanding scientific and technological knowledge.

While Mazzucato might be finding an audience with policymakers in the UK and doers in Silicon Valley, innovation economists are a little more wary of her state-first theory of innovation. Here are some things worth considering when reading her work:

  1. Innovation and invention are distinct concepts. Bringing a product to market (innovation) involves very different skill sets than ideation (invention). In the 1950s and 1960s, large companies were deeply involved in basic research, but today, businesses have shifted toward their competitive advantage, which lies in supply chain management, marketing, and customer acquisition. As I noted in 2014 of this change: “Increasingly, however, firms are becoming flexible assemblies, connecting skills, capacities, and funding from sources around the world. In this regard, US firms continue to dominate in business model and process innovation.”
  2. A key chapter in the book dissects all of the patents that went into the iPhone. It is unclear whether Mazzucato has a theory as to why Apple, not Nokia, RIM, or Motorola, became the innovator in the smartphone space. All had access to the same government-supported patents, but it was Apple that sparked the revolution by commercializing an idea that had been out there for some time.
  3. Mazzucato’s work aims to bash the Great Man Theory of Innovation, which is needed, but it puts the Great State Theory of Innovation in its place. As Artir explains in beautiful detail: “The State was one more actor, like IBM, Bell Labs, Sony, Goodenough, Brody, or Lechner.”
  4. Mazzucato chides companies for appropriating so much of the total value of government investment, but as William Nordhaus calculated, innovators capture only 2.2 percent of the total surplus. Consumers are the real beneficiaries.
  5. Is Mazzucato sampling on the dependent variable?

There is a lot more out there if you want to read up. For starters, I would check out:

How Do You Value Data? A Reply To Jaron Lanier’s Op-Ed In The NYT
September 23, 2019 | https://techliberation.com/2019/09/23/how-do-you-value-data-a-reply-to-jaron-laniers-op-ed-in-the-nyt/

Jaron Lanier was featured in a recent New York Times op-ed explaining why people should get paid for their data. Under this scheme, he estimates that the data of a four-person household could fetch around $20,000.

Let’s do the math on that.

Data from eMarketer suggests that users spend about an hour and fifteen minutes per day on social media, for a total of 456.25 hours per year. Spread Lanier’s $20,000 across four people and that is $5,000 per person per year, so the implied income from data works out to about $10.95 per hour. That’s not too bad!
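For those who want to check the arithmetic, here is a minimal sketch of the back-of-the-envelope math; the only inputs are Lanier’s $20,000 household estimate and the eMarketer time figure cited above.

```python
# Back-of-the-envelope check on Lanier's numbers. All inputs are the
# estimates cited above, not measured values.
household_value = 20_000   # Lanier's estimate for a four-person household, in dollars
household_size = 4
minutes_per_day = 75       # eMarketer: about an hour and fifteen minutes per day

hours_per_year = minutes_per_day / 60 * 365           # 456.25 hours
value_per_person = household_value / household_size   # $5,000
implied_hourly_rate = value_per_person / hours_per_year

print(f"Hours per year: {hours_per_year:.2f}")
print(f"Implied hourly rate for data: ${implied_hourly_rate:.2f}")  # roughly $11 per hour
```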

By any measure, however, the estimate is high. Since I have written extensively on this subject (see this, this, and this), I thought it might be helpful to explain the four general methods used to value an intangible like data: income methods, market rates, cost methods, and finally, shadow prices.

Most data valuations are accomplished through income derivations, often by simply dividing the total market capitalization or revenue of a firm by the total number of users. For those in finance, this method seems most logical since it is akin to an estimate of future cash flows. In its 2018 annual report, Facebook calculated that the average revenue per user was around $112 in the United States and Canada. Antonio Garcia-Martinez recently used this data point in Wired magazine to place an upper limit on the digital dividend idea from California Governor Gavin Newsom. Similarly, when Microsoft bought LinkedIn, reports suggested that it was buying monthly active users at a rate of $260. A. Douglas Melamed argued in a recent Senate hearing that the upper-bound value on data should at least be cognizant of the acquisition cost for advertisements, putting the total user value at around $16.

Income-based valuations, however, are crude estimates because they do not capture a user’s ability to marginally earn revenue for the platform. The way to understand this problem is by first recognizing how the three classes of data interact online. Volunteered data includes both information innate to an individual’s profile, such as age and gender, and content they share, such as pictures, videos, news articles, and commentary. Observed data comes as a result of user interactions with the volunteered data; it is this class of data that platforms tend to collect in data centers. Last, inferred data is the information that comes from analysis of the first two classes, which explains how groups of individuals are interacting with different sets of digital objects.

Inferred data is the key, as it both drives advertising decisions and helps determine what content is presented to users. Thus, the value of a user’s data would combine:

  1. The value of that user’s data in increasing all of their friends’ demand for content; and
  2. The value of that user’s data to contribute to increases in advertising demand.

I’ve seen work suggesting that Shapley values might be used to figure out these numbers. Needless to say, income-based valuations are difficult.
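To make the Shapley idea concrete, here is a minimal sketch with a made-up three-user example. The coalition values are hypothetical numbers standing in for the ad revenue a platform could earn from each combination of users' data; in practice, estimating that value function is the hard part.

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all orderings of arrival."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: total / len(orderings) for p, total in totals.items()}

# Hypothetical revenue (in dollars) from each coalition of users' data.
# The numbers make data complementary: users together are worth more than apart.
revenue = {
    frozenset(): 0,
    frozenset({"a"}): 10, frozenset({"b"}): 10, frozenset({"c"}): 4,
    frozenset({"a", "b"}): 30, frozenset({"a", "c"}): 18, frozenset({"b", "c"}): 18,
    frozenset({"a", "b", "c"}): 45,
}

print(shapley_values(["a", "b", "c"], lambda coalition: revenue[coalition]))
```

In a realistic setting the number of users makes exact enumeration impossible, so the values would have to be approximated by sampling, which is part of why this remains more of a research idea than an accounting standard.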

Market prices are another method of valuing data, and they tend to place the lowest premium on data.

As with any market, it is important to pay attention to the clearing price, because not all markets clear. The bankruptcy proceedings for Caesars Entertainment, a subsidiary of the larger casino company, offer a unique example of this problem. As the assets were being priced in the selloff, the Total Rewards customer loyalty program was valued at nearly $1 billion, making it “the most valuable asset in the bitter bankruptcy feud at Caesars Entertainment Corp.” But the ombudsman’s report recognized that it would be a tough sell because of the difficulty of incorporating it into another company’s loyalty program. Although it was Caesars’ priciest asset, its value to an outside party was an open question.

As I detailed earlier this year, data is often valued within a relationship but practically valueless outside of it. There is a term of art for this phenomenon, as economist Benjamin Klein explained: “Specific assets are assets that have a significantly higher value within a particular transacting relationship than outside the relationship.” Asset specificity goes a long way toward explaining why there isn’t the thick market for data that Lanier would like.

Third, data might be valued using cost-based methods. But Chloe Mawer cautioned against the cost-based route: “This method is highly imprecise for data, because data is often created as an intermediate product of other business processes.” In practice, I assume cost-based methods would end up looking like Shapley values anyway.

Lastly, data can be valued through shadow prices. For those items that are rarely exchanged in a market, prices are often difficult to calculate and so other methods are used to appraise what is known as the shadow price. For example, a lake’s value might be determined by the total amount of time in lost wages and money spent by recreational users to get there. For each person, there is a shadow price for that lake. 

Similarly, the value of social media can be calculated by tallying all of the wages forgone in using the site. A conservative estimate from a couple of years back suggests that users spend about 20 hours a month on Facebook. Since the current average wage is about $28 an hour, this calculation indicates that people value the site at roughly $6,700 over the entire year. A study of 2016 data using similar methods found that American adults consumed 437 billion hours of content on ad-supported media, worth at least $7.1 trillion in terms of forgone wages.
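The arithmetic behind that Facebook figure is simple enough to write down; a minimal sketch using the two estimates cited above:

```python
hours_per_month = 20   # conservative estimate of monthly time spent on Facebook
hourly_wage = 28       # approximate average U.S. hourly wage cited above

annual_shadow_price = hours_per_month * 12 * hourly_wage
print(f"Implied annual value per user: ${annual_shadow_price:,}")  # $6,720
```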

Shadow prices can also be calculated through surveys, which is where they get controversial. Depending on how the question is worded, users’ willingness to pay for privacy can be wildly variable. The trade association NetChoice worked with Zogby Analytics and found that only 16 percent of people are willing to pay for online platform services. Strahilevitz and Kugler found that 65 percent of email users, even knowing that their email service scans emails to serve ads, wouldn’t pay for an alternative. As one seminal study noted, “most subjects happily accepted to sell their personal information even for just 25 cents.” Using differentiated smartphone apps, economists were able to estimate that consumers were willing to pay a one-time fee of $2.28 to conceal their browser history, $4.05 to conceal their list of contacts, $1.19 to conceal their location, $1.75 to conceal their phone’s identification number, and $3.58 to conceal the contents of their text messages. The average consumer was also willing to pay $2.12 to eliminate advertising.

All of this is to say that there is no one single way to estimate the value of data.

As for the Lanier piece, here are some other things to consider:

  • A market for data already exists. It just doesn’t include the set of participants that Lanier wants to include: platform users.
  • Will users want to be data entrepreneurs, looking for the best value for their data? Probably not. At best, they will hire an intermediary to do this, which is basically the job of the platforms already.
  • An underlying assumption is that the value of data is greater than the value advertisers are willing to pay for a slice of your attention. I’m not sure I agree with that.
  • Finally, how exactly do you write these kinds of laws?
There are good reasons to be skeptical that automation will unravel the labor market
July 8, 2019 | https://techliberation.com/2019/07/08/there-are-good-reasons-to-be-skeptical-that-automation-will-unravel-the-labor-market/

When it comes to the threat of automation, I agree with Ryan Khurana: “From self-driving car crashes to failed workplace algorithms, many AI tools fail to perform simple tasks humans excel at, let alone far surpass us in every way.” Like me, he is skeptical that automation will unravel the labor market, pointing out that “[The] conflation of what AI ‘may one day do’ with the much more mundane ‘what software can do today’ creates a powerful narrative around automation that accepts no refutation.”

Khurana marshals a number of examples to make this point:

Google needs to use human callers to impersonate its Duplex system on up to a quarter of calls, and Uber needs crowd-sourced labor to ensure its automated identification system remains fast, but admitting this makes them look less automated…

London-based investment firm MMC Ventures found that out of the 2,830 startups they identified as being “AI-focused” in Europe, 40% used no machine learning tools, whatsoever.

I’ve been collecting examples of the AI hype machine as well. Here are some of my favorites.

From Rodney Brooks comes this corrective:

Chris Urmson, the former leader of Google’s self-driving car project, once hoped that his son wouldn’t need a driver’s license because driverless cars would be so plentiful by 2020. Now the CEO of the self-driving startup Aurora, Urmson says that driverless cars will be slowly integrated onto our roads “over the next 30 to 50 years.”

Judea Pearl, a pioneer of Bayesian networks and causal inference, said last year that “All the impressive achievements of deep learning amount to just curve fitting,” a technique that was developed decades ago.

Earlier this year, IBM shut down its Watson AI tool for drug discovery.

Mike Mallazzo said it this way: “The investors know it’s bullshit. When venture capitalists say they are looking to add ‘A.I. companies’ to their portfolio, what they really want is a technological moat built around access to uniquely valuable data. If it’s beneficial for companies to sprinkle in a little sex appeal and brand this as ‘A.I.,’ there’s no incentive to stop them from doing so.”

And there is the problem of cost:

As I explained before, the large pecuniary costs of big data technologies don’t speak to the equally expensive task of overhauling management techniques to make the new systems work. New technologies can’t be seamlessly adopted within firms; they need management and process innovations to make the new data-driven methods profitable. And to be honest, we just aren’t there yet.

Reviving the Office of Technology Assessment would require a set of specific conditions in Congress. We have simply not arrived at that time.
June 15, 2019 | https://techliberation.com/2019/06/15/reviving-the-office-of-technology-assessment-would-require-a-set-of-specific-conditions-in-congress-we-have-simply-not-arrived-at-that-time/

Cato Unbound is taking on the issue of tech expertise this month, and the lead essay comes from Kevin Kosar, who argues for the revival of the Office of Technology Assessment. As he explains,

[N]o one wants Congress enacting policies that make us worse off, or that delay or stifle technologies that improve our lives. And yet this kind of bad policy happens with lamentable frequency. Pluralistic politics inevitably features some self-serving interests that are more powerful and politically persuasive than others. This is why government often undertakes bailouts and other actions that are odious to the public writ large.  

He continues, “Congress’s ineptitude in [science and technology policy] has been richly displayed.” To help embed expertise in science and technology policy, Kosar argues for the revival of the Office of Technology Assessment, which was established in 1972 and defunded in 1995.

I have been on the OTA beat for a little while now, so I offered some criticism of Kosar’s proposal, which you can find here. I’ll lay my cards out: I’ve been skeptical of reviving the OTA in the past and I remain so. Here is my key graf on that:

Elsewhere, I have argued that the OTA should be seen as a last resort; there are other ways of embedding expertise in Congress, like boosting staff and reforming hiring practices. The following essay makes a slightly different argument, namely, that the history of the OTA shows the razor wire on which a revived version of the agency will have to balance. In its early years, the OTA was dogged by accusations of partiality. Having established itself as a neutral party throughout the 1980s, the OTA was abolished because it failed to distinguish itself among competing agencies. There is an underlying political economy to expertise that makes the revival of the OTA difficult, undercutting it as an option for expanding tech expertise. In a modern political environment where scientific knowledge is politicized and budgets are tight, the OTA would likely face the hatchet once again.

The OTA wasn’t supposed to be just a tech assessment office; it was also meant to give Congress an independent check on the Executive. The legislative history underscores that goal:

While members wanted the OTA to help understand an increasingly complex world, congressional architects also thought it would redress an imbalance of federal power that favored the White House. Speaking in favor of the creation of the OTA in May 1970, Missouri Democrat James Symington emphasized that , “We have tended simply to accede to administration initiatives, which themselves from time to time may have been hastily or inaccurately promoted.” When the bill came to the floor in 1972, Republican Representative Charles Mosher noted , “Let us face it, Mr. Chairman, we in the Congress are constantly outmanned and outgunned by the expertise of the executive agencies.” Writing on the eve of its demise, a historian of the agency explained that, “the most important factor in establishing the OTA was a desire on the part of Congress for technical advice independent of the executive branch.”

The demand for the OTA at its genesis was thus twofold: for knowledge, as Kosar explains, but also for power. I doubt Democrats or Republicans today would simply “accede to administration initiatives,” as both had a tendency to do in the 1960s. The agency was created at a time when both sides of Congress wanted to check the president. It was a solution to an intertwined set of problems. While the tenor in Congress could swiftly change, a groundswell of bipartisan support for an OTA would be needed before any efforts to revive it could be effective. A revival would need to be a compromise that both parties and both chambers of Congress could agree to. We have simply not arrived at that time.

But there are solutions that sidestep the reinvigoration of a standalone agency. In line with my suggestions from last year, the Government Accountability Office is expanding its tech assessment program. Congress also needs to reform its staffing processes to encourage stability and reduce turnover. None of these proposals, however, will make headway in reorienting congressional offices back toward where they were in the early 1990s. In a follow-up post over at Cato, I will explore the challenges that any reform will face when trying to solve the problem of tech expertise.

Rural Broadband Isn’t Like Rural Electrification, It Is So Much Harder
April 8, 2019 | https://techliberation.com/2019/04/08/rural-broadband-isnt-like-rural-electrification-it-is-so-much-harder/

Many have likened efforts to build out rural broadband today to the accomplishments of rural electrification in the 1930s. But the two couldn’t be further apart. From the structure of the programs and underlying costs to the impact on productivity, rural electrification is drastically different from current efforts to get broadband into rural regions. My recent piece at RealClearPolicy explores some of those differences, but there is one area I wasn’t able to explore: the question of cost. If a government agency, any government agency for that matter, were able to repeat electrification’s dramatic reduction in costs for broadband, the US wouldn’t have a deployment problem.

It is helpful to set the stage. 

By the 1930s, electricity had become common within cities, but the service was largely unavailable in rural areas where farmers and ranchers worked and lived. So, in 1936, President Franklin Delano Roosevelt worked with Congress to pass the Rural Electrification Act, which in four years helped to expand the share of farms with electricity from 11.6 percent to 25 percent.

Fast forward to today, and rural broadband access lags behind urban access. While the gap is narrowing, broadband is available to only 65 percent of rural regions, compared to 98 percent of urban areas. Moreover, rural areas have fallen behind urban regions in job creation since the recovery. As such, leaders across the country see connections between today and the 1930s.

A couple of months back, Washington State Representative Mike Chapman said, “we kind of equate rural broadband with the rural electrification in the 20th century.” In November, Senator Tina Smith said that Internet connectivity should be approached like rural electrification. Federal Communications Commissioner Jessica Rosenworcel has also made the connection. As she recently explained, “We were able to get electrification to happen in rural, hard-to-reach parts of this nation. We need to be able to do the same with broadband.” But there are some key differences.

Importantly, the Rural Electrification Administration was actively involved in reducing the price of electricity to achieve economies of scale in rural areas. As administrators noted in 1937, “Sometimes a difference of a fraction of a cent per kilowatt hour in the wholesale rate will represent the difference between a sound and unsound project.” By 1939, the Rural Electrification Administration reported that the cost of building rural electricity lines had fallen by more than a third in just a few years.

This kind of price drop isn’t happening in broadband, and that difference explains why electricity expanded so quickly while broadband lags. Indeed, broadband projects aren’t lost or made on pennies, but on thousands of dollars per household. Getting fiber broadband into the ground or onto poles is a fixed cost that can range from $20,000 to $60,000 per mile. As households become less dense, the cost to hook up a new household increases. One project in rural Wisconsin was projected to cost roughly $8,000 per household, while in rural Tennessee that number hovers around $5,000 per household. In contrast, Verizon was able to build FiOS at around $900 per household by deploying in urban cores. If broadband deployment costs were reduced by a third, a lot of projects would become viable.
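To see how density drives these figures, here is a minimal sketch of the per-household math. The cost-per-mile range is the one cited above; the household densities are hypothetical illustrations, not data from any particular project.

```python
def cost_per_household(cost_per_mile, households_per_mile):
    """Rough per-household cost of running fiber along a route."""
    return cost_per_mile / households_per_mile

# Cited range for fiber construction, in dollars per mile.
for cost_per_mile in (20_000, 60_000):
    # Hypothetical densities: a town street versus progressively sparser rural roads.
    for density in (40, 5, 2):  # households passed per mile
        per_home = cost_per_household(cost_per_mile, density)
        print(f"${cost_per_mile:,}/mile at {density} homes/mile -> ${per_home:,.0f} per home")
```

Under these assumptions, the figures cited above fall out naturally: dense routes land in the neighborhood of Verizon’s roughly $900 per home, while sparse routes quickly climb into the $5,000 to $10,000 range and beyond.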

It makes sense, then, that the Rural Electrification Act was strictly a loan program. The Act tasked the Rural Electrification Administration, now known as the Rural Utilities Service, with giving out loans, most often to electric co-ops, typically with 20-year terms at 2.88 percent interest.

In contrast, most broadband programs today are grant programs that help to offset the high initial costs. The American Recovery and Reinvestment Act of 2009 directed nearly $4 billion to the Broadband Technology Opportunities Program (BTOP), which simply doled out money to needy projects. Minnesota’s Border-to-Border Broadband Development Grant Program , among the most successful state programs, has been useful in defraying project development costs but isn’t in the business of loan repayments. Moreover, the Trump Administration’s proposal for boosting rural broadband would have achieved it through grants.

When leaders and activists call for a national effort to get broadband to everyone, we all recognize the reference. The Rural Electrification Act seems to be an unmitigated success. But aspirations need to meet reality. Rural broadband development will be pursued, but that doesn’t mean it will be as easy as electrification was in the 1930s.

An Esoteric Reading of LM Sacasas
February 26, 2019 | https://techliberation.com/2019/02/26/an-esoteric-reading-of-lm-sacasas/

After reading LM Sacasas’ recent piece on moral communities , I couldn’t help but wonder if the piece was written in the esoteric mode .

Let me explain by some meandering.

Now, I am surely going to butcher his argument, so take a read of it yourself, but there is a bit of an interesting call-and-response structure to the piece. He begins with commentary on the “frequent deployment of the rhetorical we” in discussions over the morality of technology. Then, channeling Langdon Winner, he notes approvingly that “What matters here is that this lovely ‘we’ suggests the presence of a moral community that may not, in fact, exist at all, at least not in any coherent, self-conscious form.”

He is right: the use of the rhetorical we helps to construct a community, which he then deploys later in the piece. To see this in action:

…The idea that technical forms are merely neutral has proven hard to shake. For a very long time, it has been a cornerstone principle of our thinking about technology and society. Or, more to the point, we have taken it for granted and have consequently done very little thinking about technology with regards to society.

I’ll note in passing that the liberal democratic structures of modern political culture and the development of technology are deeply intertwined, and they have both depended upon the presumption of their ostensible neutrality. I’m tempted to think that our present crisis is a function of a growing realization that neither our political structures nor our technologies are, in fact, merely neutral instruments.

Before becoming a policy analyst, I went to graduate school at the University of Illinois at Chicago and studied communication, at a time when the department was transitioning away from the influence of former dean Stanley Fish and becoming a new media studies program. The staff was, and still is, excellent, but at the time it was deeply heterodox, including old-school rhetoricians and literary scholars as well as communication historians and communication sociologists.

All of this background is to say that Sacasas’ charge that “we have taken it for granted and have consequently done very little thinking about technology with regards to society” depends a lot on the kind of community you call your own and how you understand community.

My former community, communication scholars, has a long history of exploring these questions. Indeed, one of my favorite classes was an introductory survey course on democracy and technology. But Sacasas knows that community all too well. I don’t think he was intending to suggest those kinds of counterpublics when invoking community. As he notes, “There is no moral community or public space in which technological issues are topics for deliberation, debate, and shared action.” Here, he means moral community as it comes to us from Durkheim. Just as a reminder, moral community in this tradition generally refers to “those beings that you need to think ‘but is this right’ before you do something that could affect them.” In other words, questions over the morality of technology are not attended by the kinds of questions that constitute a moral community. I want to come back to this point later.

Where does this leave us? He further explains,

We are, at present, stuck in an unhelpful tendency to imagine that our only options with regard to how we govern technology are, on the one hand, individual choices and, on the other, regulation by the state. What’s worse, we’ve also tended to oppose these to one another. But this way of conceptualizing our situation is both a symptom of the deepest consequences of modern technology and part of the reason why it is so difficult to make any progress.

Technology operates at different scales, and effective mechanisms of governance need to correspond to the challenges that arise at each scale. Mechanisms of governance that make sense at one end of the spectrum will be ineffective at the other end and vice versa.

Our problem is basically this: technologies that operate at the macro-level cannot be effectively governed by micro-level mechanisms, which basically amount to individual choices. At the macro-level, however, governance is limited by the degree to which we can arrive at public consensus, and the available tools of governance at the macro-level cannot address all of the ways technologies impact individuals. What is required is a cocktail of strategies that address the consequences of technology as they manifest themselves across the spectrum of scale.

In other words, Sacasas sets up a governance gap problem. There are micro-level solutions and macro-level solutions, but nothing in the middle that might emanate from a moral community. But, again, the fundamental criticism of this entire argument hinges on accepting the rhetorical we and the notion of a community. Or, to say it another way, a community must first be constructed for a governance gap to exist. If we don’t agree to the rhetorical construction of community, if there is no we, then there is no gap to fill. That is no small hurdle. Even Durkheim’s original understanding of moral community was a subjective understanding of the ethics of an imagined community.

But even separate from the construction problem, it is not clear to me that there isn’t already “a cocktail of strategies that address the consequences of technology as they manifest themselves across the spectrum of scale.” For example, Facebook changed its policy on breastfeeding photos after a group of mothers organized and pushed the #FreeTheNipple campaign . I cannot help but wonder if that is the kind of community driven strategy that Sacasas would want to promote.

That notoriously nebulous concept of civil society is worth invoking here. Organizations like EFF and EPIC and FreePress sue platforms and local governments, and help enact change. And what about all of the reports from journalists in the last decade? They have impacted both Facebook and Google, forcing them to change. Same with Apple and AT&T and Verizon. All of this is to say, I’m not exactly convinced this vision of the world is the appropriate yardstick of critique.   

The Kids Are Going To Be Alright
January 17, 2019 | https://techliberation.com/2019/01/17/the-kids-are-going-to-be-alright/

Catchy headlines like “Heavy Social Media Use Linked With Mental Health Issues In Teens” and “Have Smartphones Destroyed a Generation?” advance a common trope of generational decline. But a new paper in Nature Human Behaviour uses a new and rigorous analytical method to understand the relationship between adolescent well-being and digital technology, finding a “negative but small [link], explaining at most 0.4% of the variation in well-being.”

What really sets the new paper from Amy Orben and Andy Przybylski apart is that it aims to capture a more complete picture of how variables interact. The problem that Orben and Przybylski tackle is an endemic one in social science. Sussing out the causal relationship between two variables will always be confounded by other related variables in the dataset. So how do you choose the right combination of variables to test?

An analytical approach first developed by Simonsohn, Simmons, and Nelson outlines a method for tackling this problem. As Orben and Przybylski wrote, “Instead of reporting a handful of analyses in their paper, [researchers] report all results of all theoretically defensible analyses.” The result is a range of possible coefficients that can be plotted along a curve, a specification curve; their paper plots one for each of the datasets they analyzed.
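For intuition, here is a minimal sketch of the procedure on simulated data. It is not the authors' code; the dataset, outcome measures, and controls are all made up, but the structure is the same: run every defensible combination of analytic choices, collect the coefficient of interest from each, and sort the results into a curve.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000

# Simulated stand-in for a large adolescent survey (purely illustrative).
tech_use = rng.normal(size=n)
controls = {"sex": rng.integers(0, 2, size=n), "ses": rng.normal(size=n)}
wellbeing = {f"item_{i}": -0.02 * tech_use + rng.normal(size=n) for i in range(3)}

# Each specification is one defensible combination of analytic choices:
# which well-being measure to use and which controls to include.
specs = itertools.product(wellbeing, [(), ("sex",), ("ses",), ("sex", "ses")])

coefs = []
for outcome, ctrl in specs:
    X = sm.add_constant(np.column_stack([tech_use] + [controls[c] for c in ctrl]))
    fit = sm.OLS(wellbeing[outcome], X).fit()
    coefs.append(fit.params[1])  # coefficient on technology use

# Sorting the coefficients gives the specification curve.
for i, b in enumerate(sorted(coefs), start=1):
    print(f"specification {i:2d}: b = {b:+.3f}")
```

Plotted with confidence intervals, the sorted coefficients form the specification curve; the point of the exercise is to show the whole distribution of defensible estimates rather than a single cherry-picked one.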

Amy Orben and Andrew Przybylski explain why this method is important to policy makers who are interested in the tech use question:

Although statistical significance is often used as an indicator that findings are practically significant, the paper moves beyond this surrogate to put its findings in a real-world context.  In one dataset, for example, the negative effect of wearing glasses on adolescent well-being is significantly higher than that of social media use. Yet policymakers are currently not contemplating pumping billions into interventions that aim to decrease the use of glasses.

Truthfully, this is the first time I have encountered specification curve analysis, and I am quite impressed with its power. For more details, check out the OSF page, the write-up in Nature, and the full paper. Also, Michael Scharkow details how to apply SCA to variance and includes some R code.

If you’re worried about net neutrality, put your reputation on the line and make a prediction about the future
December 17, 2018 | https://techliberation.com/2018/12/17/if-youre-worried-about-net-neutrality-put-your-reputation-on-the-line-and-make-a-prediction-about-the-future/

It has now been a year since the network neutrality rules grounded in Title II were officially repealed, marking the end of the Obama-era regulations. Writing in Wired, Klint Finley noted that, “The good news is that the internet isn’t drastically different than it was before. But that’s also the bad news: The net wasn’t always so neutral to begin with.”

At the time, many worried about what would happen. Apple co-founder Steve Wozniak and former FCC Commissioner Michael Copps suggested that two worlds were possible: “Will consumers and citizens control their online experiences, or will a few gigantic gatekeepers take this dynamic technology down the road of centralized control, toll booths and constantly rising prices for consumers?”

Katrina Vanden Heuvel, editor and publisher of The Nation, warned that “A broadband carrier like AT&T, if it wanted, might even practice internet censorship akin to that of the Chinese state, blocking its critics and promoting its own agenda.”

Senator Ed Markey even addressed the issue of apocalyptic messaging: “Don’t be fooled by the voices that say this is all doom and gloom & that the ISPs would NEVER block or throttle content. Mark my words, without #NetNeutrality, these are not alarmist & hypothetical harms. They are real, & without #NetNeutrality they may become the new normal.”

Each of these statements is a testable prediction. And those who deeply care about the issue should be willing to make predictions that can be tested at some near point in the future. What bothers me the most is that very few people are willing to bear a reputational cost if they fail to correctly predict the future. To borrow a phrase from Nassim Taleb, more people should have skin in the policy game.

Here is a set of questions to get the ball rolling. Three years from this week, we should be willing to come back, settle up, and see who was right.

  • A large ISP, as defined by more than 1 million subscribers, will explicitly block political speech.  
  • A large ISP will explicitly throttle an upstream content site.
  • A large ISP will demand additional payment from an upstream content site, separate from transit negotiations.
  • Beginning in January 2019, the Consumer Price Index for “Internet services and electronic information providers” (SEEE03) will begin to rise faster than the total CPI.
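The last item is the most mechanical to score. Here is a minimal sketch of the settling-up, with placeholder index values standing in for the actual BLS series; a real check would pull SEEE03 and the all-items CPI over the prediction window.

```python
# Placeholder index values, not real data; substitute the cited BLS series (SEEE03)
# and the all-items CPI for the start and end of the prediction window.
internet_cpi_start, internet_cpi_end = 100.0, 104.0  # hypothetical
total_cpi_start, total_cpi_end = 100.0, 107.0        # hypothetical

internet_growth = internet_cpi_end / internet_cpi_start - 1
total_growth = total_cpi_end / total_cpi_start - 1

print(f"Internet services CPI growth: {internet_growth:.1%}")
print(f"All-items CPI growth:         {total_growth:.1%}")
print("Prediction scored true?", internet_growth > total_growth)
```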

Why does this matter? Making nuanced predictions seems to diminish extreme views. A new paper from Barbara Mellers, Philip Tetlock, and Hal R. Arkes gives some context:  

People often express political opinions in starkly dichotomous terms, such as “Trump will either trigger a ruinous trade war or save U.S. factory workers from disaster.” This mode of communication promotes polarization into ideological in-groups and out-groups. We explore the power of an emerging methodology, forecasting tournaments, to encourage clashing factions to do something odd: to translate their beliefs into nuanced probability judgments and track accuracy over time and questions. In theory, tournaments advance the goals of “deliberative democracy” by incentivizing people to be flexible belief updaters whose views converge in response to facts, thus depolarizing unnecessarily polarized debates. We examine the hypothesis that, in the process of thinking critically about their beliefs, tournament participants become more moderate in their own political attitudes and those they attribute to the other side. We view tournaments as belonging to a broader class of psychological inductions that increase epistemic humility and that include asking people to explore alternative perspectives, probing the depth of their cause-effect understanding and holding them accountable to audiences with difficult-to-guess views.

The issue of network neutrality has become polarized. One way to mitigate that bifurcation is to put your reputation on the line and make a prediction about the future.

Three Short Responses To The Pacing Problem
November 27, 2018 | https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/

Contemporary tech criticism displays an anti-nostalgia. Instead of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained that , “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption , Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered , “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”

Here are three short responses.

Technological Determinism

Part of what drives the worry about a pacing problem is a belief in technological determinism. Determinism aligns human actors and technological objects in a causal relationship: technology acts on society as an outside force. In this view of the world, technology is separate from society and thus can advance by leaps and bounds before society and regulation catch up. In other words, technology is made an independent variable that acts upon us all.

Yet, that doesn’t describe the world in which technological objects are created and sustained. The iPhone was created by Apple following the success of the iPod in melding the hardware platform with the content of the mobile web, ultimately for the purpose of boosting sales. And people became enamored with it, lining up days before its release to grab one. Technologies aren’t alien objects. They are molded by particular interests and institutional goals, and rooted in society, especially the bourgeois virtues.

Technologies exist within human ecology, just as economic systems do. To make technology an outside force misplaces the role of human values in the creation and adoption of innovation. As separated from society, determinism allows for technology to be both mythologized and demonized. Technologies cannot outpace our ability to adapt. Rather, the speed of change, of innovation, is rate limited by society’s ability to adapt. As Robin Hanson explained , “society’s ability to adapt is the primary constraint on how fast we adopt new technologies.”

The Technological Accident

The pacing problem also gains purchase because new technologies create the possibility for new accidents. As philosopher Paul Virilio wrote ,

To invent the sailing ship or the steamer is to invent the shipwreck. To invent the train is to invent the rail accident of derailment. To invent the family automobile is to produce the pile-up on the highway.

Every newly created technology comes with the potential for problems, so the possibility set for accidents increases dramatically when a new technology comes onto the scene. But it isn’t the case that all of those risks will be realized; only a subset of potential problems ever materializes. As such, it isn’t that social and regulatory systems need to have all the answers in advance. Rather, there need to be flexible systems in place to deal with the issues that actually arise.

Regulation as a Real Option

Perhaps, however, we have been thinking about the pacing problem incorrectly. Maybe the pacing problem isn’t a problem so much as a reflection of uncertainty. Again, Vivek Wadhwa pithily explained this problem, saying, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” Consider the phrase I have highlighted. There is little agreement as to how we should regulate social media. In other words, there is regulatory uncertainty. The concept of a real option might help make sense of this.

Real options are the investment choices that a company’s management makes in order “to expand, change or curtail projects based on changing economic, technological or market conditions.” While originally used in strictly financial terms, the concept has been adapted by economists Avinash Dixit and Robert Pindyck to understand how firms invest, or not, in the face of uncertainty. As you read this paragraph from the first chapter of their book on the subject, replace the term investment with regulation and see what you think:

Most investment decisions share three important characteristics in varying degrees. First, the investment is partially or completely irreversible. In other words, the initial cost of investment is at least partially sunk; you cannot recover it all should you change your mind. Second, there is uncertainty over the future rewards from the investment. The best you can do is to assess the probabilities of the alternative outcomes that can mean greater or smaller profit (or loss) for your venture. Third, you have some leeway about the timing of your investment. You can postpone action to get more information (but never, of course, complete certainty) about the future.

There are strong corollaries. First, most regulatory decisions are difficult to reverse. It is rare for regulations to be stricken from the books, and even when they are, the affected industries often remain shaped by them in subtler ways. Second, the potential benefits from a regulatory action are uncertain, as Wadhwa pointed out. And finally, government bodies do have some leeway about the timing of their regulatory actions. Putting all of this together, then, regulation might be thought of as a real option.

As economists Bronwyn H. Hall and Beethika Khan explained ,  

The most important thing to observe about this kind of [investment] decision is that at any point in time the choice being made is not a choice between adopting and not adopting but a choice between adopting now or deferring the decision until later.

In the same way, government regulation isn’t about regulating now or not regulating at all, but about regulating now or deferring the decision until later. That sounds a lot to me like the pacing problem.  

Is There a Kill Zone in Tech?
November 7, 2018 | https://techliberation.com/2018/11/07/is-there-a-kill-zone-in-tech/

Recently, Noah Smith explored an emerging question in tech: is there a kill zone where new and innovative upstarts are being throttled by the biggest players? He explains,

Facebook commissioned a study by consultant Oliver Wyman that concluded that venture investment in the technology sector wasn’t lower than in other sectors, which led Wyman to conclude that there was no kill zone.

But economist Ian Hathaway noted that looking at the overall technology industry was too broad. Examining three specific industry categories — internet retail, internet software and social/platform software, corresponding to the industries dominated by Amazon, Google and Facebook, respectively — Hathaway found that initial venture-capital financings have declined by much more in the past few years than in comparable industries. That suggests the kill zone is real.

A recent paper by economists Wen Wen and Feng Zhu reaches a similar conclusion. Observing that Google has tended to follow Apple in deciding which mobile-app markets to enter, they assessed whether the threat of potential entry by Google (as measured by Apple’s actions) deters innovation by startups making apps for Google’s Android platform. They conclude that when the threat of the platform owner’s entry is higher, fewer app makers will be interested in offering a product for that particular niche. A 2014 paper by the same authors found similar results for Amazon and third-party merchants using its platform.

So, are American tech companies making it difficult for startups? Perhaps, but there are some other reasons to be skeptical.

First off, the nature of the venture capital market has changed due to the declining costs of computing. Not too long ago, much of a tech company’s Series A and B would be dedicated to buying  server racks and computing power. But with the advent of Amazon Web Services (AWS) and other cloud computing technologies, this need has dried up.

What does this mean for the ecosystem? Ben Thompson explained the impact back in 2015 :

In fact, angels have nearly completely replaced venture capital at the seed stage, which means they are the first to form critical relationships with founders. True, this has led to an explosion in new companies far beyond the levels seen previously, which is entirely expected — lower barriers to entry to any market means more total entries — but this has actually made it even more difficult for venture capitalists to invest in seed rounds: most aren’t capable of writing massive numbers of seed checks; the amounts are just too small to justify the effort.

Instead, venture capitalists have gone up-market: firms may claim they invest in Series’ A and B, but those come well after one or possibly two rounds of seed investment; in other words, today’s Series A is yesteryear’s Series C. This, by the way, is the key to understanding the so-called “Series A crunch”: it used to be that Series C was the make-or-break funding round, and in fact it still is — it just has a different name now. Moreover, the fact more companies can get started doesn’t mean that more companies will succeed; venture capitalists just have more companies to choose from.

Research is only now catching up with Thompson’s hunch. In a newly released NBER working paper, economists David Byrne, Carol Corrado, and Daniel E. Sichel find that prices for computing, database, and storage services offered by AWS dropped dramatically from 2009 to 2016. As they concluded, “cloud service providers are undertaking large amounts of own-account investment in IT equipment and that some of this investment may not be captured in GDP.”

Second, a decline in startups was predicted by Nobel-winning economist Robert Lucas back in 1978. Over time, Lucas surmised, productivity increases will yield wage increases, which in turn will incentivize marginal entrepreneurs to become employees. This raises productivity at the company but also increases the size of the firm. Over time, as productivity and wages inch upward, working at a firm becomes more attractive than starting a company. Entrepreneurs as a portion of the economy will thus decline, and industries with higher productivity rates will see bigger firms.
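A stylized sketch of the Lucas logic follows. The functional form and the numbers are purely illustrative, not Lucas's calibration: each person draws a managerial talent level and chooses whichever pays more, running a firm or taking the market wage, so a rising wage raises the talent cutoff for entrepreneurship.

```python
import numpy as np

rng = np.random.default_rng(1)
talent = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # managerial ability draws

def entrepreneur_share(wage):
    # Stylized occupational choice: run a firm if the payoff to your talent
    # beats the going wage; otherwise work as an employee.
    firm_payoff = talent - wage   # illustrative net payoff, not Lucas's exact model
    return float(np.mean(firm_payoff > wage))

# As productivity growth pushes the equilibrium wage up, fewer people find
# entrepreneurship worthwhile, and the remaining firms are run by the most able.
for wage in (0.4, 0.6, 0.8, 1.0):
    print(f"wage = {wage:.1f} -> entrepreneur share {entrepreneur_share(wage):.1%}")
```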

Recent analysis of 50 separate national economies confirmed the inverse relationship between entrepreneurship rates and Gross Domestic Product (GDP), which has also been confirmed by the World Bank Group Entrepreneurship Survey as well. Time series analysis also hints at this relationship. Employment within large firms tends to grow over time as a country gets wealthier. Analysis of the Census Business Dynamics Statistics (BDS) illustrates this, as does groundwork conducted in American manufacturing from 1850 to 1880. But the United States isn’t the only country where this relationship can be found. The same trend exists for Canada , Germany , Indonesia , Japan , South Korea , and Thailand .

Moreover, the distribution of firms tends to change as a country becomes wealthier. As economist Markus Poschke noted, “richer countries thus feature fewer, larger firms, with a firm size distribution that is more dispersed and more skewed.” So it is not just the United States that has large firms. Sweden, the Netherlands, and Ireland all have large firms, but they too are relatively wealthy by international standards. Productivity goes a long way toward explaining the distributional changes.

Nicholas Kozeniauskas, a recently minted economist from NYU, has also been working on research showing the skewed nature of entrepreneurship, which adds some depth to this conversation. As he found, the decline in entrepreneurship has been more pronounced at higher education levels. Overall, “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.”

As of right now, I think we should be unsatisfied with the evidence of a kill zone. The research doesn’t point in the same direction. But as new insight comes in, we will need to update, as always.  

Book Review: Cathy O’Neil’s “Weapons of Math Destruction”
November 7, 2018 | https://techliberation.com/2018/11/07/book-review-cathy-oneils-weapons-of-math-destruction/

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to experience another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMDs).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit about laying the blame at the feet of the WMDs: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is that it ascribes too much agency to algorithms in places, and in doing so it misses the broader politics behind algorithmic decision making.

For example, O’Neil begins her book with a story about Sarah Wysocki, a teacher who got fired from the D.C. public school system because of how the teacher evaluation system ranked her abilities. O’Neil writes,

Yet at the end of the 2010-11 school year, Wysocki received a miserable score on her IMPACT evaluation. Her problem was a new scoring system known as value-added modeling, which purported to measure her effectiveness in teaching math and language skills. That score, generated by an algorithm, represented half of her overall evaluation, and it outweighed the positive reviews from school administrators and the community. This left the district with no choice but to fire her, along with 205 other teachers who had IMPACT scores below the minimal threshold.

In the ensuing pages, O’Neil describes the scoring system, how it was designed, and how it affected Wysocki. But the broader politics behind the scoring system that ousted Wysocki are just as important.

Why, for example, was the value-added score such a prominent feature in the teacher evaluation as compared to administrative and parent input? Well, research from the Bill & Melinda Gates Foundation found that a teacher’s value-added track record is among the strongest predictors of student achievement gains. So, the school district changed around their evaluations to make it a central feature. As Jason Kamras, chief of human capital for D.C. schools, told the Washington Post , “We put a lot of stock in it.” But that decision wasn’t without its critics, including Washington Teachers’ Union President Nathan Saunders who said, “You can get me to walk down the road with you to say value-added is relevant, but 50 percent is too weighted.”

Moreover, the weights changed in 2009 because the Chancellor of D.C. public schools, Michelle Rhee, had negotiated a new deal with the teachers union. In exchange for 20 percent pay raises and bonuses of $20,000 to $30,000 for effective teachers, the district was given more leeway to fire teachers for poor performance, which it did using the IMPACT system. In part, this fight was spurred on because Obama-era Education Secretary Arne Duncan was doling out $3.4 billion in Race to the Top grants that focused on teacher effectiveness measures. Moreover, Rhee was Chancellor because D.C. Mayor Adrian Fenty had won legislation that bypassed the Board of Education and gave him control of the schools.

Yes, Wysocki might have been a false positive, but what about all of the poor-performing teachers that the previous system hadn’t let go? By focusing on the teachers, O’Neil steers the conversation away from what should be the central concern: did the change actually help students learn and achieve?

Truth be told, my quibbles with Weapons of Math Destruction fall into two types. The first class relates to questions of emphasis and scope, which become important when the reader tallies up the costs and benefits of algorithms. Perhaps it is the case that “The U.S. News college ranking has great scale, inflicts widespread damage, and generates an almost endless spiral of destructive feedback loops.” But on the other hand, lower-ranked colleges have decreased their net tuition and accepted a larger share of applicants. Yes, credit scores “open doors for some of us, while slamming them in the face of others,” but in what proportion? In Chile, for example, credit bureaus were forced to stop reporting defaults in 2012. The change was found to reduce the costs for most of the poorer defaulters, but it raised the costs for non-defaulters, leading to a 3.5 percent decrease in lending and a reduction in aggregate welfare. It could be the case that “the payday loan industry operates WMDs,” but it is unclear where low-income Americans will find short-term loans if they are outlawed.

Second, Weapons of Math Destruction continuously toys with important questions regarding the moral agency of technologies but never explicitly lays them out. How much value should be ascribed to technologies? To what degree are technologies value-neutral or value-laden? All technologies, including the algorithms that O’Neil describes, are designed and implemented for certain kinds of instrumental outcomes by companies and government agencies. An institution has to take on the task of adopting an algorithm for decision-making purposes, and thus the algorithm reflects the institution’s goals.

Should the algorithm be blamed, or the institutional structures that put it into place, or some combination of the two? Reading with a careful eye, one will easily see that this is the fundamental question of the book, especially since O’Neil wonders whether “we’ve eliminated human bias or simply camouflaged it with technology.” But the real answer isn’t found in this binary. Algorithmic problems are pluralist.

Should The New NPR Poll On Rural America Make You Reconsider Your View of Rural Broadband Development? https://techliberation.com/2018/10/17/should-the-new-npr-poll-on-rural-america-make-you-reconsider-your-view-of-rural-broadband-development/ https://techliberation.com/2018/10/17/should-the-new-npr-poll-on-rural-america-make-you-reconsider-your-view-of-rural-broadband-development/#respond Wed, 17 Oct 2018 15:42:54 +0000 https://techliberation.com/?p=76393

National Public Radio, the Robert Wood Johnson Foundation, and the Harvard T.H. Chan School of Public Health just published a new report on “Life in Rural America.” This survey of 1,300 adults living in the rural United States has a lot to say about health issues, population change, the strengths and challenges of rural communities, as well as discrimination and drug use. But I wanted to highlight two questions related to rural broadband development that might make you update your beliefs about massive rural broadband investment.

To begin with, it should be noted that this survey didn’t feature rural broadband development that much, but the topic did arise in the context of improving rural economies. Specifically, question 44 asked, “Recently, a number of leadership groups have recommended different approaches for improving the economy of communities like yours. For each of the following, please tell me how helpful you think this approach would be for improving the economy of your local community…[insert item]. Do you think this would be very helpful, somewhat helpful, not too helpful, or not at all helpful?”

Here is a breakdown of how people responded, saying that these changes would be very helpful:

  1. Creating better long-term job opportunities 64%
  2. Improving the quality of local public schools 61%
  3. Improving access to health care 55%
  4. Improving access to advanced job training or skills development 51%
  5. Improving local infrastructure like roads, bridges, and public buildings 48%
  6. Improving the use of advanced technology in local industry and farming 44%
  7. Improving access to small business loans and investments 44%
  8. Improving access to high-speed internet 43%

Notice where improving access to the Internet ranks. It is at the bottom. And improving the use of advanced technology in local industry and farming doesn’t do much better. What ranks higher than both of these, however, is improving access to advanced job training or skills development. As I have been saying for some time, digital literacy efforts are seriously underrated.

The poll also asked an open-ended question (Question 2): what is the biggest problem facing your community? Again, access to high-speed Internet was near the bottom of the list. In fact, racism, access to good doctors and hospitals, access to public transportation, law enforcement, and access to grocery stores all ranked as more pressing concerns. What topped the list were opioid addiction and jobs.

Rural broadband development should be pursued. But this polling suggests that some might want to revise their estimates about a big economic bump from rural broadband.

In Defense of Techno-optimism https://techliberation.com/2018/10/10/in-defense-of-techno-optimism/ https://techliberation.com/2018/10/10/in-defense-of-techno-optimism/#comments Wed, 10 Oct 2018 18:05:15 +0000 https://techliberation.com/?p=76391

Many are understandably pessimistic about platforms and technology. This year has been a tough one, from Cambridge Analytica and Russian trolls to the implementation of GDPR and data breaches galore.

Those who think about the world, about the problems that we see every day, and about their own place in it, will quickly realize the immense frailty of humankind. Fear and worry make sense. We are flawed, each one of us. And technology only seems to exacerbate those problems.

But life is getting better. Poverty continues nose-diving; adult literacy is at an all-time high; people around the world are living longer, living in democracies, and are better educated than at any other time in history. Meanwhile, the digital revolution has resulted in a glut of informational abundance, helping to correct the informational asymmetries that have long plagued humankind. The problem we now face is not how to address informational constraints, but how to provide the means for people to sort through and make sense of this abundant trove of data. These macro trends don’t make headlines.  Psychologists know that people love to read negative articles. Our brains are wired for  pessimism .

In the shadow of a year of bad news, it is helpful to remember that Facebook and Google and Reddit and Twitter also support humane conversations. Most people aren’t going online to talk about politics, and if you are, you are rare. These sites are places where families and friends can connect. They offer a space of solace – like when chronic pain sufferers find others on Facebook, or when widows vent, rage, laugh and cry without judgement through the Hot Young Widows Club. Let’s also not forget that Reddit, while sometimes a place of rage and spite, is also where a weight lifter with cerebral palsy can become a hero and where those with addiction can find healing. And in the hardest to reach places in Canada, in Iqaluit, people say that “Amazon Prime has done more toward elevating the standard of living of my family than any territorial or federal program. Full stop. Period.”

Three-fourths of Americans say major technology companies’ products and services have been more good than bad for them personally. But when it comes to the whole of society, they are more skeptical about technology bringing benefits. Here is how I read that disparity: most of us think that we have benefited from technology, but we worry about where it is taking the human collective. That is an understandable worry, but one that shouldn’t hobble us into inaction.

Nor is technology making us stupid. Indeed, quite the opposite is happening. Technology use among those aged 50 and above seems to have made them cognitively younger than their parents to the tune of 4 to 8 years. While the use of Google does seem to reduce our ability to recall information, studies find that it has boosted other kinds of memory, like remembering where to retrieve information. Why remember a fact when you can remember where it is located? Concerned about how audiobooks might be affecting people, Beth Rogowsky, an associate professor of education, compared them to physical reading and was surprised to find “no significant differences in comprehension between reading, listening, or reading and listening simultaneously.” Cyberbullying and excessive use might make parents worry, but NIH-supported work found that “Heavy use of the Internet and video gaming may be more a symptom of mental health problems than a cause. Moderate use of the Internet, especially for acquiring information, is most supportive of healthy development.” Don’t worry. The kids are going to be alright.

And yes, there is a lot we still need to fix. There is cruelty, racism, sexism, and poverty of all kinds embedded in our technological systems. But the best way to handle these issues is through the application of human ingenuity. Human ingenuity begets technology in all of its varieties.

When Scott Alexander over at Slate Star Codex recently looked at 52 startups being groomed by startup incubator Y Combinator, he rightly pointed out that many of them were working for the betterment of all:

Thirteen of them had an altruistic or international development focus, including Neema , an app to help poor people without access to banks gain financial services; Kangpe , online health services for people in Africa without access to doctors; Credy , a peer-to-peer lending service in India; Clear Genetics , an automated genetic counseling tool for at-risk parents; and Dost Education , helping to teach literacy skills in India via a $1/month course.

Twelve of them seemed like really exciting cutting-edge technology, including CBAS , which describes itself as “human bionics plug-and-play”; Solugen , which has a way to manufacture hydrogen peroxide from plant sugars; AON3D , which makes 3D printers for industrial uses; Indee , a new genetic engineering system; Alem Health , applying AI to radiology, and of course the obligatory drone delivery startup .

Eighteen of them seemed like boring meat-and-potatoes companies aimed at businesses that need enterprise data solution software application package analytics targeting management something something something “the cloud”.

As for the other companies, they were the kind of niche products that Silicon Valley has come to be criticized for supporting. Perhaps the Valley deserves some criticism, but perhaps it deserves more credit than it has been receiving as of late.

Contemporary tech criticism displays a kind of anti-nostalgia. Instead of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today. We need to work diligently to piece together a better world. But if we constantly live in fear of what comes next, that future won’t be built. Optimism needn’t be pollyannaish. It only needs to be hopeful of a better world.

Should the US Adopt the GDPR? https://techliberation.com/2018/10/01/should-the-us-adopt-the-gdpr/ https://techliberation.com/2018/10/01/should-the-us-adopt-the-gdpr/#comments Mon, 01 Oct 2018 16:50:16 +0000 https://techliberation.com/?p=76389

Last week, I had the honor of being a panelist at the  Information Technology and Innovation Foundation’s event on the future of privacy regulation. The debate question was simple enough: Should the US copy the EU’s new privacy law?

When we started planning the event, California’s Consumer Privacy Act (CCPA) wasn’t a done deal. But now that it has passed and presents a deadline of 2020 for implementation, the terms of the privacy conversation have changed. Next year, 2019, Congress will have the opportunity to pass a law that could supersede the CCPA and some are looking to the EU’s General Data Protection Regulation (GDPR) for guidance. Here are some reasons for not taking that path.

GDPR imposes three kinds of costs on firms. First, the regulation forces firms to retool data processes to realign with the new demands. This is generally a one-time fixed cost that raises costs for every information-using entity. Second, the regime adds risk compliance costs, causing companies to staff up to ensure compliance. Finally, the law will change the dynamics of the industry, as companies adapt to the new requirements.

Right now, the retooling costs and the risk compliance costs are going hand in hand, so it is difficult to suss out the costs of each. Still, they are substantial. A McDermott-Ponemon survey on GDPR preparedness found that almost two-thirds of all companies say the regulation will “significantly change” their informational workflows. For the just over 50 percent of companies expecting to be ready for the changes, the average budget for getting to compliance tops $13 million, by this estimate. Among all the new requirements, this survey found that companies were struggling with the data-breach notification the most. The inability to comply with the notification requirement was cited by 68 percent of companies as posing the greatest risk because of the size of levied fines.

The International Association of Privacy Professionals (IAPP) estimated the regulation will cost Fortune 500 companies around $7.8 billion to get up to speed with the law. And these won’t be one-time costs since, “Global 500 companies will be hiring on average five full-time privacy employees and filling five other roles with staff members handling compliance rules.” A PwC survey on the rule change found that 88% of companies surveyed spent more than $1 million on GDPR preparations, and 40% more than $10 million.

It might take some time to truly understand the impact of GDPR, but the law will surely change the dynamics of countless industries. For example, when the EU adopted the e-Privacy Directive in 2002, Goldfarb and Tucker found that advertising became far less effective. The impact seems to have reverberated throughout the ecosystem as venture capital investment in online news, online advertising, and cloud computing dropped by between 58 and 75 percent. Information restrictions shift consumer choices. In Chile, for example, credit bureaus were forced to stop reporting defaults in 2012, which was found to reduce costs for most of the poorer defaulters but raise costs for non-defaulters. Overall, the law led to a 3.5 percent decrease in lending and reduced aggregate welfare.

As the Chilean example suggests, some might benefit from a GDPR-like privacy regime. But as Daniel Castro, my co-panelist, pointed out, strong privacy laws haven’t done much to sway public opinion. As he wrote with Alan McQuinn,

The biannual Eurobarometer survey, which interviews 100 individuals from each EU country on a variety of topics, has been tracking European trust in the Internet since 2009. Interestingly, European trust in the Internet remained flat from 2009 through 2017, despite the European Union strengthening its ePrivacy regulations in 2009 (implementation of which occurred over the subsequent few years) and significantly changing its privacy rules, such as the court decision that established the right to be forgotten in 2014. Similarly, European trust in social networks, which the Eurobarometer started measuring in 2014, has also remained flat, albeit low

In other words, it doesn’t seem as though strong regulations have done anything to make people feel as though they are getting a better deal with Internet companies.   

One of my top concerns with the GDPR that wasn’t really discussed relates to the consent requirement in the law. Now, people must affirmatively say that data processors can use their data. As I explained at the American Action Forum ,

Affirmative consent is also known as an opt-in privacy regime. Opt-in is frequently described as giving consumers more privacy protection, but opt-out regimes give an individual the same option to exit data processing without the added burdens. Indeed, most of the large companies already provide a method of opting out of certain data processing and collection. Setting the default by regulation simply biases consumer choices in a particular direction.

Overall, I think there was general agreement among the panelists that the US should not adopt the GDPR. But both Amie Stepanovich of Access Now and Justin Brookman of Consumers Union were generally in favor of implementing a couple of the fundamental elements of the GDPR, assuming they were adapted to the US legal system. Indeed, Access Now released a paper on exactly this topic.

The big question is whether the GDPR or something similar is a set of optimal rules. For countless reasons, I’m skeptical they will really improve consumer experience without imposing substantial costs. 

For more on this topic, check out:

Is Facebook Now Over-moderating Content? https://techliberation.com/2018/09/10/is-facebook-now-over-moderating-content/ https://techliberation.com/2018/09/10/is-facebook-now-over-moderating-content/#comments Mon, 10 Sep 2018 14:30:32 +0000 https://techliberation.com/?p=76376

Reading professor Siva Vaidhyanathan’s recent op-ed in the New York Times, one could reasonably assume that Facebook is now seriously tackling the enormous problem of dangerous information. In detailing his takeaways from a recent hearing with Facebook’s COO Sheryl Sandberg and Twitter CEO Jack Dorsey, Vaidhyanathan explained,

Ms. Sandberg wants us to see this as success. A number so large must mean Facebook is doing something right. Facebook’s machines are determining patterns of origin and content among these pages and quickly quashing them.

Still, we judge exterminators not by the number of roaches they kill, but by the number that survive. If 3 percent of 2.2 billion active users are fake at any time, that’s still 66 million sources of potentially false or dangerous information.

One thing is clear about this arms race: It is an absurd battle of machine against machine. One set of machines create the fake accounts. Another deletes them. This happens millions of times every month. No group of human beings has the time to create millions, let alone billions, of accounts on Facebook by hand. People have been running computer scripts to automate the registration process. That means Facebook’s machines detect the fakes rather easily. (Facebook says that fewer than 1.5 percent of the fakes were identified by users.)

But it could be that, in its zeal to tamp down criticism from all sides, Facebook has overcorrected and is now over-moderating. The fundamental problem is that it is nearly impossible to know the true amount of disinformation on a platform. For one, there is little agreement on what kind of content needs to be policed. It is doubtful everyone would agree on what constitutes fake news, what separates it from disinformation or propaganda, and how all of that differs from hate speech. But more fundamentally, even if everyone agreed on what should be taken down, it is still not clear that algorithmic filtering methods would be able to perfectly approximate that.

Detecting content that violates a hate speech code or a disinformation standard leads to a massive operationalization problem. A company like Facebook isn’t going to be perfect. It could produce a detection regime that is either underbroad or overbroad. It is of course only anecdotal evidence, but I have seen many of my friends on Facebook post about how their own clearly non-political posts have been taken down.
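
A toy filter makes the underbroad/overbroad problem concrete. The posts, classifier scores, and thresholds below are all invented for illustration; the takeaway is that moving a single detection threshold trades missed violations for wrongly removed posts, and no threshold eliminates both.

```python
# Toy illustration of the under- vs over-moderation tradeoff. The posts,
# "classifier" scores, and thresholds are invented for illustration.

posts = [
    ("coordinated disinformation", 0.92, True),
    ("sharp political satire", 0.55, False),
    ("non-political family update", 0.48, False),
    ("genuine hate speech", 0.40, True),   # the model underestimates it
]

def moderate(posts, threshold):
    removed = [(text, bad) for text, score, bad in posts if score >= threshold]
    wrongly_removed = sum(1 for _, bad in removed if not bad)
    missed_violations = sum(1 for _, score, bad in posts if bad and score < threshold)
    return len(removed), wrongly_removed, missed_violations

print(moderate(posts, threshold=0.60))  # (1, 0, 1): underbroad, misses real abuse
print(moderate(posts, threshold=0.45))  # (3, 2, 1): overbroad, sweeps up benign posts
```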

Over-moderation could explain why many conservatives have been worried about Twitter and Facebook engaging in soft censorship. Paula Bolyard made a convincing case in the Washington Post ,

There have been plenty of credible reports over the past two years claiming anti-conservative bias at the Big Three Internet platforms, including the 2016 revelation that Facebook had routinely suppressed conservative outlets in the network’s “trending” news section. Further, when Alphabet-owned YouTube pulls down and demonetizes mainstream conservative content from sites such as PragerU, it certainly gives the impression that the company has its thumb on the scale.

Bolyard hints at one of the biggest problems in the conversation today. Users cannot peer behind the veil and are thus forced to impute intentions about how the network operates in practice. Here is how Sarah Myers West, a postdoctoral researcher at the AI Now Institute, described the process,

Many social network users develop “folk theories” about how platforms work: in the absence of authoritative explanations, they strive to make sense of content moderation processes by drawing connections between related phenomena, developing non-authoritative conceptions of why and how their content was removed

West goes on to cite a study of moderation efforts, which found that users thought Facebook was “powerful, perceptive, and ultimately unknowable.” Both Vaidhyanathan and Bolyard could be pushing similar folk theories. They are both astute in their comments and offer a lot to consider, but everyone in this discussion, including the operators at Facebook and Twitter, is hobbled by a fundamental knowledge problem.

Still, each platform has to create its own means of detecting this content, which will need to conform to the specifics of the platform. Evelyn Douek’s report on the Senate Hearing , which you should absolutely go read, helps to fill out some of the details on this point,

[Twitter CEO Jack] Dorsey stated that Twitter does not focus on whether political content originates abroad in determining how to treat it. Because Twitter, unlike Facebook, has no “real name” policy, Twitter cannot prioritize authenticity. Dorsey instead described Twitter as focusing on the use of artificial intelligence and machine learning to detect “behavioural patterns” that suggest coordination between accounts or gaming the system. In a sense, this is also a proxy for a lack of authenticity, but on a systematic rather than an individual scale. Twitter’s focus, according to Dorsey, is on how people game the system in the “shared spaces [on Twitter] where anyone can interject themselves,” rather than the characteristics of profiles that users choose to follow.

Dorsey seems to set up a comparison between the two companies. Facebook’s method of detecting nefarious content deals with the profile, as an authenticated person, in relation to the content that is shared. Twitter, on the other hand, is looking for people to game the system in the “shared spaces [on Twitter] where anyone can interject themselves.” It might be a misread, but Dorsey suggests that Twitter is emphasizing the actions of users, which would lead to a more structural approach.

It goes without saying that Facebook’s social network is different from Twitter’s, leading to different approaches to moderation. Facebook creates dyadic connections. The relationships on Facebook run both ways. Becoming friends means we are in a mutual relationship. Twitter, however, allows people to follow others without reciprocity. The result is a set of distinct network structures. Pew, for example, was able to distinguish between six different broad structures, including polarized crowds, tight crowds, brand clusters, community clusters, broadcast networks, and support networks. Combined, these features make it difficult for both researchers and operators to understand the scope of the problem and whether solutions are working.

So what are the broad incentives pushing platforms to either over-moderate or under-moderate content? Here is what I could come up with:  

  • If content moderation is too broad, it will spark the ire of content creators who might get inadvertently caught up in a filter.
  • More content going over the network means more users and more engagement, and thus more advertising dollars, making the platform sensitive to over-moderation.
  • Content has both an extensive margin and an intensive margin. Facebook will want to expand the overall amount of content to attract people, but it will also want to keep the content on the network high quality. Low quality will drive people and advertisers to exit, so the platform might have an incentive to over-moderate.
  • Given the current political environment and the California privacy bill, it might make better long term sense to over-moderate or at least engage in the perception of over-moderation to reduce the chance of legal or regulatory pressures in the future.
  • The technical filtering solutions could have ambiguous effects on moderation. It could be that a platform simply is not that good at content moderation and has been under providing it.
  • Or, the filtering system could be providing an expansive program that has swept up too many people and too much content.
  • Given that people think these platforms are “powerful, perceptive, and ultimately unknowable,” the platforms might err on the side of under-moderation simply to reduce how often users experience content moderation.

Content moderation at scale is difficult. And messy. In creating a technical regime to deal with this problem, we shouldn’t expect platforms to get it perfect. While many have criticized platforms for under-moderation, they might now be over-moderating. Still, there is a massive knowledge problem in trying to understand whether the current level of moderation is optimal.

How Should Privacy Be Defined? A Roadmap https://techliberation.com/2018/08/06/how-should-privacy-be-defined-a-roadmap/ https://techliberation.com/2018/08/06/how-should-privacy-be-defined-a-roadmap/#comments Mon, 06 Aug 2018 12:00:45 +0000 https://techliberation.com/?p=76335

Privacy is an essentially contested concept. It evades a clear definition, and when it is defined, scholars do so inconsistently. So, what are we to do now with this fractured term? Ryan Hagemann suggests a bottom-up approach. Instead of beginning from definitions, we should be building a folksonomy of privacy harms:

By recognizing those areas in which we have an interest in privacy, we can better formalize an understanding of when and how it should be prioritized in relation to other values. By differentiating the harms that can materialize when it is violated by government as opposed to private actors, we can more appropriately understand the costs and benefits in different situations.

Hagemann aims to route around definitional problems by exploring the spaces where our interests intersect with the concept of privacy, in our relations to government, to private firms, and to other people. It is a subtle but important shift in outlook that is worth exploring.

Hagemann’s colleague Will Wilkinson laid out the benefits of this kind of philosophical exercise, which comes to me via Paul Crider. Wilkinson traces it back to the very beginnings of liberal thought, and it takes a bit to wind up:

Thomas Reid, the Scottish Enlightenment philosopher, pointed out that there are two ways to construct an account of what it means to really know something, rather than just believing it to be true. The first way is to develop an abstract theory of knowledge—a general criterion that separates the wheat of knowledge from the chaff of mere opinion—and then see which of our opinions qualify as true knowledge. Reid noted that this method tends to lead to skepticism, because it’s hard, if not impossible, to definitively show that any of our opinions check off all the boxes these sort of general criteria tend to set out.

That’s why Descartes ends up in a pickle and Hume leaves us in a haze of uncertainty. It’s all a big mistake, Reid said, because the belief that I have hands, for example, is on much firmer ground than any abstract notions about the nature of true knowledge that I might dream up. If my theory implies that I don’t really know that I have hands, that’s a reason to reject the theory, not a reason to be skeptical about the existence of my appendages.

According to Reid, a better way to come up with a theory of knowledge is to make a list of the things we’re very sure that we really know. Then, we see if we can devise a coherent theory that explains how we know them.

The 20th century philosopher Roderick Chisholm called these two ways of theorizing about knowledge “methodism”—start with a general theory, apply it, and see what, if anything, counts as knowledge according to the theory—and “particularism”—start with an inventory of things that we’re sure we know and then build a theory of knowledge on top of it.

Hagemann is right to build privacy on the particularism of Wilkinson, Reid and Chisholm. Given the changing nature of technology, we should take a regular “inventory of things that we’re sure we know” about privacy and then build theories on top of it.

Indeed, privacy scholarship finds its genesis in this method. While many have gotten hung up on the rights talk in the “Right to Privacy”, Warren and Brandeis actually aim “to consider whether the existing law affords a principle which can properly be invoked to protect the privacy of the individual; and, if it does, what the nature and extent of such protection is.” The article looks to previous law to construct a principle for “recent inventions and business methods.” This is particularism applied to privacy.

Only a handful of court cases are actually reviewed in the article, the most important of which is Marian Manola v. Stevens & Myers. Marian Manola was a classically trained comic opera prima donna who had a string of altercations with her company, where Stevens was the manager. About a year before the case, the New York Times carried a story describing a dispute between Manola and another actor in the McCaull Opera Company. She refused to go on stage after the actor pushed her, and Benjamin Stevens apparently “ignored her until she returned to her duty.” About a year later, Stevens set up the photographer Myers in a box as a stunt to boost sales. Manola sued both of them. Today, the case would be cited in the right to publicity literature.

Still, Warren and Brandeis were trying to survey the land of privacy harms and then build a principle on top of it.

Be it particularism or methodism, these ways of constructing knowledge frame the moral ground, creating a field where privacy advocates and privacy scholars can converse. What unites these two groups, then, is their common rhetoric about the contours of privacy harms. And so, what constitutes a harm is still the central question in privacy policy.

The Definition of Technology Matters For Tech Policy And Growth https://techliberation.com/2018/08/02/the-definition-of-technology-matters-for-tech-policy-and-growth/ https://techliberation.com/2018/08/02/the-definition-of-technology-matters-for-tech-policy-and-growth/#comments Thu, 02 Aug 2018 19:22:52 +0000 https://techliberation.com/?p=76331

Dan Wang has a new post titled “How Technology Grows (a restatement of definite optimism)” and it is characteristically good. For tech policy wonks and policymakers, put it in your queue. The essay clocks in at 7500 words, but there’s a lot to glean from the piece. Indeed, he puts into words a number of ideas I’ve been wanting to write about. To set the stage, he begins first by defining what we mean by technology:

Technology should be understood in three distinct forms: as processes embedded into tools (like pots, pans, and stoves); explicit instructions (like recipes); and as process knowledge, or what we can also refer to as tacit knowledge, know-how, and technical experience. Process knowledge is the kind of knowledge that’s hard to write down as an instruction. You can give someone a well-equipped kitchen and an extraordinarily detailed recipe, but unless he already has some cooking experience, we shouldn’t expect him to prepare a great dish.

As he rightly points out, the United States has, for various reasons, set aside its focus on process knowledge. This is especially evident in our manufacturing base:

When firms and factories go away, the accumulated process knowledge disappears as well. Industrial experience, scaling expertise, and all the things that come with learning-by-doing will decay. I visited Germany earlier this year to talk to people in industry. One point Germans kept bringing up was that the US has de-industrialized itself and scattered its production networks. While Germany responded to globalization by moving up the value chain, the US manufacturing base mostly responded by abandoning production.

The US is an outlier among rich countries when it comes to manufacturing exports. It needs improvement.

Two comments on this.

First off, I couldn’t agree more with Dan’s emphasis on the localization of knowledge. Local knowledge networks made Silicon Valley what it is. By far the best dive into this topic is still AnnaLee Saxenian’s “Regional Advantage,” which charts the computer industry’s genesis in both Silicon Valley and along Boston’s Route 128. As she details throughout the book, the culture of work and the resulting firm structures in Silicon Valley differed significantly from those in Boston, giving it critical advantages to become the preeminent region of technology development.

When I read it a couple of years back, I highlighted the importance of regional knowledge hubs:

As a side comment, Saxenian mentions that many Silicon Valley workers are far more rooted in the region than others. While the company man of the 1950s might move among the various arms of the firm to gain experience, which could be in different states, in the Valley, you would just move down the street. To me, that speaks volumes to the importance of regional knowledge hubs.

Without them, an industry can lose dominance.

Green tech is the best and most recent example. Some have lamented that the US isn’t in the lead in producing photovoltaic tech, and that we import too much of the stuff from China. Yet China doesn’t have a labor or productivity advantage here. It comes down to scale and supply-chain management, according to research:

We find that the historical price advantage of a China-based factory relative to a U.S.-based factory is not driven by country-specific advantages, but instead by scale and supply-chain development. Looking forward, we calculate that technology innovations may result in effectively equivalent minimum sustainable manufacturing prices for the two locations. In this long-run scenario, the relative share of module shipping costs, as well as other factors, may promote regionalization of module-manufacturing operations to cost-effectively address local market demand. Our findings highlight the role of innovation, importance of manufacturing scale, and opportunity for global collaboration to increase the installed capacity of PV worldwide.

Second, Dan looks toward Germany as a model of high tech manufacturing, but there are some caveats. Most of Germany’s manufacturing prowess comes from small to medium-sized firms called the Mittelstand. And the reason the Mittelstand dominate seems to be the cozy relationship German manufacturing has with the Fraunhofer Society for the Advancement of Applied Research, often just called the Fraunhofer Institutes. Sixty-nine of these research institutes are scattered throughout Germany and work on applied optics, chemicals, high-speed dynamics, materials, and wind energy, just to name a few.

I haven’t done a deep dive yet into Dan’s writings to see if he has looked at this important link between research and output, but I hope he does. There is a lot to be learned from the German model and I am still hopeful that the lessons could be applied to US policy.

Why Did The Facebook Stock Drop Last Week? Some Economics Of Decision-making https://techliberation.com/2018/07/31/why-did-the-facebook-stock-drop-last-week-some-economics-of-decision-making/ https://techliberation.com/2018/07/31/why-did-the-facebook-stock-drop-last-week-some-economics-of-decision-making/#respond Tue, 31 Jul 2018 22:15:44 +0000 https://techliberation.com/?p=76329

A curious thing happened last week. Facebook’s stock, which had seemed to have weathered the 2018 controversies, took a beating.

In the Washington Post, Craig Timberg and Elizabeth Dwoskin explained that the stock market drop was representative of a larger wave:

The cost of years of privacy missteps finally caught up with Facebook this week, sending its market value down more than $100 billion Thursday in the largest single-day drop in value in Wall Street history.

Jeff Chester of the Center for Digital Democracy piled on, describing the drop as “a privacy wake-up call that the markets are delivering to Mark Zuckerberg.”

But the downward pressure was driven by more fundamental changes. Simply put, Facebook missed its earnings targets. But it is important to peer into why the company didn’t meet those targets.

As Zuckerberg noted on the earnings call,

Now, perhaps one of the most important things we’ve done this year to bring people closer together is to shift News Feed to encourage connection with friends and family over passive consumption of content. We’ve launched multiple changes over the last half to News Feed that encourage more interaction and engagement between people, and we plan to keep launching more like this.

Later in the call, Facebook CFO David Wehner signaled that the total revenue growth rate would decelerate due to the choices made by Zuckerberg,

We plan to grow and promote certain engaging experiences like Stories that currently have lower levels of monetization, and we are also giving people who use our services more choices around data privacy, which may have an impact on our revenue growth.

Moreover, costs would continue to rise as the company embedded more privacy and security features into the platform:

Turning now to expenses; we continue to expect that full-year 2018 total expenses will grow in the range of 50% to 60% compared to last year. In addition to increases in core product development and infrastructure, this growth is driven by increasing investment in areas like safety and security, AR/VR, marketing, and content acquisition. Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019.

So, Facebook got hammered because it invested more in privacy and security while also transitioning to a less revenue-generating source of content. At first glance, this might seem like a signal from the market not to invest in these sorts of changes. Indeed, as Blake Reid noted,

They got punished by the market for investing in less-monetized content and spending more on privacy and security. Doesn’t that send a signal to not do that?

Yes and no.

It has been widely accepted that corporations often adopt short-term strategies that attempt to maximize earnings. As one well-cited survey of financial executives explained, “Because of the severe market reaction to missing an earnings target, we find that firms are willing to sacrifice economic value in order to meet a short-run earnings target.”

This preference for the near term, especially for payoffs in the near term, seems to be a common feature among humans. People tend to prefer small rewards that occur now over much larger rewards that come later. This is known as hyperbolic discounting, and it helps to explain why households under-save, why smokers find it tough to quit, and why firms prefer near-term earnings.
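
A back-of-the-envelope example shows the mechanics. Using the standard hyperbolic form V = A / (1 + kD), with a made-up impatience parameter k and illustrative reward amounts, a smaller-sooner reward beats a larger-later one when both are close at hand, but the ranking flips once both are pushed into the future. That preference reversal is what makes near-term earnings so tempting.

```python
# Minimal sketch of hyperbolic discounting, V = A / (1 + k * D).
# The impatience parameter k and the reward amounts/delays are
# illustrative assumptions, not estimates from the literature.

def hyperbolic_value(amount, delay, k=1.0):
    """Present value of a reward of `amount` received after `delay` periods."""
    return amount / (1 + k * delay)

# Choice today: $100 now versus $150 one period from now.
print(hyperbolic_value(100, 0), hyperbolic_value(150, 1))    # 100.0 vs 75.0 -> take the $100
# Same gap pushed out: $100 in 10 periods versus $150 in 11.
print(hyperbolic_value(100, 10), hyperbolic_value(150, 11))  # ~9.1 vs 12.5 -> wait for the $150
```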

Pulling together the insights from finance and behavioral psychology, two economists pointed out “that a firm exhibiting hyperbolic discounting preferences faces an underinvestment problem, i.e. there exists another feasible investment plan that improves all periods’ present values.” Conversely, a firm exhibiting time-invariant preferences would invest, even if it meant a short-term hit.

Facebook is probably playing the long game. Zuckerberg has an overwhelming controlling stake in the company and wants to build value in the long term. And if these changes lead to more durability, that is, if users stay on the site longer over the next 5 or 10 years, then it makes sense to take the short-term hit. It would be better to do this than to have a massive exodus at some point down the road.

In the same kind of way, Amazon has been criticized for years for spending too much money on company investments to the detriment of returns. But, Amazon’s Q2 2018 numbers came in this week and they were double expectations . Bezos’ 1997 shareholder letter laid out the strategy, “We believe that a fundamental measure of our success will be the shareholder value we create over the long term.” Bezos is also more concerned with building for the long term.

I’m working on a more formal model of this, but I think there are reasons to believe that Facebook would be especially sensitive to privacy concerns. And Facebook’s earnings miss also points to a real concern about the long-term viability of the platform.

The Online Public Sphere or: Facebook, Google, Reddit, and Twitter also support positive communities https://techliberation.com/2018/07/11/the-online-public-sphere-or-facebook-google-reddit-and-twitter-also-support-positive-communities/ https://techliberation.com/2018/07/11/the-online-public-sphere-or-facebook-google-reddit-and-twitter-also-support-positive-communities/#comments Wed, 11 Jul 2018 15:40:46 +0000 https://techliberation.com/?p=76314

In cleaning up my desk this weekend, I chanced upon an old notebook and, as I have many times before, began to transcribe the notes. It was short, so I got to the end within a couple of minutes. The last page was scribbled with the German term Öffentlichkeit (public sphere), a couple of sentences on Hannah Arendt, and a paragraph about Norberto Bobbio’s view of public and private.

Then I remembered. Yep. This is the missing notebook from a class on democracy in the digital age .   

Serendipitously, a couple of hours later, William Freeland alerted me to Franklin Foer’s newest piece in The Atlantic titled “ The Death of the Public Square .” Foer is the author of “World Without Mind: The Existential Threat of Big Tech,” and if you want a good take on that book, check out Adam Thierer’s review in Reason .

Much like the book, this Atlantic piece wades into techno ruin porn but focuses instead on the public sphere:

Nobody designed the public sphere from a dorm room or a Silicon Valley garage. It just started to organically accrete, as printed volumes began to pile up, as liberal ideas gained currency and made space for even more liberal ideas. Institutions grew, and then over the centuries acquired prestige and authority. Newspapers and journals evolved into what we call media. Book publishing emerged from the printing guilds, and eventually became taste-making, discourse-shaping enterprises.

In recent years, this has been eviscerated by Facebook and Google, Foer continues,  

It took centuries for the public sphere to develop—and the technology companies have eviscerated it in a flash. By radically remaking the advertising business and commandeering news distribution, Google and Facebook have damaged the economics of journalism. Amazon has thrashed the bookselling business in the U.S. They have shredded old ideas about intellectual property—which had provided the economic and philosophical basis for authorship.

Philosopher Jurgen Habermas, who is cited throughout the piece, coined the term Öffentlichkeit, which has been translated into English as public sphere. However, Habermas used the term to describe not only the “process by which people articulate the needs of society with the state” but also the “public opinion needed to legitimate authority in any functioning democracy.” So, the public sphere bridges the practices of democracy with mass communication methods like broadcast television, newspapers, and magazines.

While Foer doesn’t explore it fully, the public sphere forms a basis for legitimate authority, which in turn implicates political power.

Nancy Fraser provided the classic critique of the public sphere: even in Habermas’ own conception of the term, countless voices were excluded. “This network of clubs and associations – philanthropic, civic, professional, and cultural – was anything but accessible to everyone,” Fraser explained. “On the contrary, it was the arena, the training ground and eventually the power base of a stratum of bourgeois men who were coming to see themselves as a ‘universal class’ and preparing to assert their fitness to govern.”

In parallel to the public sphere, Fraser observed that numerous counterpublics formed “where members of subordinated social groups invent and circulate counter discourses to formulate oppositional interpretations of their identities, interests, and needs.” And it is through these oppositional interpretations that the public conversation around politics changed. Think about civil rights and the environmental movement, and even deregulation as examples.

Foer might be right to focus on the public sphere, but I’m not sure his analysis goes far enough. He explains:

This assault on the public sphere is an assault on free expression. In the West, free expression is a transcendent right only in theory—in practice its survival is contingent and tenuous. We’re witnessing the way in which public conversation is subverted by name-calling and harassment. We can convince ourselves that these are fringe characteristics of social media, but social media has implanted such tendencies at the core of the culture. They are in fact practiced by mainstream journalists, mobs of the well meaning, and the president of the United States. The toxicity of the environment shreds the quality of conversation and deters meaningful participation in it. In such an environment, it becomes harder and harder to cling to the idea of the rational individual, formulating opinions on the basis of conscience. And as we lose faith in that principle, the public will lose faith in the necessity of preserving the protections of free speech.

But Foer’s lament, if it is about the public sphere, is ultimately about the old friction between the public sphere and counterpublics, in new form. Foer’s worries about theological zealots, demagogic populists, avowed racists, trollish misogynists, filter bubbles, the false prophets of disruption, and invisible manipulation, just to name a couple of techno-golems, echo the “counter discourses [that] formulate oppositional interpretations” of Fraser.

It is all quite inhumane, yes.

But let’s also remember that Facebook and Google and Reddit and Twitter also support humane counterpublics. Like when chronic pain sufferers find solace on Facebook. Or when widows vent, rage, laugh and cry without judgement through the Hot Young Widows Club. Let’s also not forget that Reddit, while sometimes a place of rage and spite, is also where a weight lifter with cerebral palsy became a hero and where those with addiction can find healing.

Let’s also not forget that most Americans think these companies have on the whole been beneficial in their lives. And that most of us don’t post political content on either Facebook or Twitter. And that people are less likely to get their news from social networking sites than from any other source.

Focusing on democracy and on politics tightens the critical vision, causing us to miss the multiplicities of experiences online. Yet those experiences, those counterpublics, are just as representative. They constitute a reality far more real than those constructed by critics.

Did The Supreme Court Get The Market Definition Correct In The Amex Case? https://techliberation.com/2018/07/06/did-the-amex-case-get-the-market-definition-correct/ https://techliberation.com/2018/07/06/did-the-amex-case-get-the-market-definition-correct/#comments Fri, 06 Jul 2018 20:22:22 +0000 https://techliberation.com/?p=76308

The Supreme Court is winding down for the year and last week put out a much-awaited decision in Ohio v. American Express. Some have sounded the alarm over this case, but I think caution is worthwhile. In short, the Court’s analysis wasn’t expansive, as some have claimed, but incomplete. There are a lot of important details to this case, and the guideposts it has provided will likely be fought over in future litigation over platform regulation. To narrow the scope of this post, I am going to focus on the market definition question and the issue of two-sided platforms in light of developments in the industrial organization (IO) literature over the past two decades.

Just to review, Amex centers on what are known as anti-steering provisions. These provisions bar merchants who accept the card from implying a preference for non-Amex cards; dissuading customers from using Amex cards; persuading customers to use other cards; imposing any special restrictions, conditions, disadvantages, or fees on Amex cards; or promoting other cards more than Amex. Importantly, these provisions never limited merchants from steering customers toward debit cards, checks, or cash.

In October 2010, the Department of Justice (DoJ) and several states sued Amex, Visa, and Mastercard over these contract provisions, and Amex was the only one among the three to take it to court. Initially, the District Court ruled in favor of the DoJ and the states, explaining that the credit card platforms should be treated as two separate markets, one for merchants and one for cardholders. In that analysis, the court cleaved off the merchant side and declared the anti-steering provisions anticompetitive under Section 1 of the Sherman Act.

On appeal, the Court of Appeals for the Second Circuit reversed that decision because “without evidence of the [anti-steering provisions’] net effect on both merchants and cardholders, the District Court could not have properly concluded that the [provisions] unreasonably restrain trade in violation” of Section 1 of the Sherman Act. The Department of Justice petitioned the Appeals Court to reconsider the case en banc, but that petition was rejected, and the case then headed to the Supreme Court.

The Supreme Court agreed with this two-sided theory as “credit-card networks are best understood as supplying only one product—the transaction—that is jointly consumed by a cardholder and a merchant.” Even though the DoJ was able to show that the provisions did increase merchant fees, “evidence of a price increase on one side of a two-sided transaction platform cannot, by itself, demonstrate an anticompetitive exercise of market power.” To prove its case, the DoJ would have to show that Amex increased the cost of credit-card transactions above a competitive level, reduced the number of credit-card transactions, or otherwise stifled competition in the two-sided credit-card market.

The decision only briefly mentions why this is important, so consider a platform with two sides, users and advertisers. If users experience an increase in price or a reduction in quality, then they are likely to exit or use the platform less. Yet advertisers are on the other side because they can reach users. So in response to the decline in user quality, advertiser demand will drop even if ad prices stay constant. The result echoes back. When advertisers drop out, the total amount of content also recedes, and user demand falls because the platform is less valuable to them. Demand is tightly integrated between the two sides of the platform. Changes in user and advertiser preferences have far outsized effects on platforms because each side responds to the other. In other words, small changes in price or quality tend to be far more impactful in chasing off both groups from the platform as compared to one-sided goods. In the economic parlance, these are called demand interdependencies. The demand on one side of the market is interdependent with demand on the other. Research on magazine price changes confirms this theory.
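
A stylized sketch, with made-up response coefficients, shows how the feedback compounds. Each side’s loss provokes a proportional loss on the other side, so a modest initial shock to users ends up noticeably larger once the echo works through the advertiser side; in a one-sided market, the initial shock would be the whole story. The coefficients and starting shock below are assumptions chosen only to illustrate the loop.

```python
# Stylized illustration of demand interdependence on a two-sided platform.
# The response coefficients and the initial shock are assumptions chosen
# only to show the feedback loop, not estimates for any real platform.

def total_user_loss(initial_drop, user_to_ad=0.6, ad_to_user=0.5, rounds=6):
    """Accumulate user losses as each side's exit feeds the other's."""
    total, shock = 0.0, initial_drop
    for _ in range(rounds):
        total += shock
        ad_loss = user_to_ad * shock   # advertisers leave as users leave
        shock = ad_to_user * ad_loss   # thinner ad/content side pushes more users out
    return total

# A 5% initial drop in users compounds to roughly a 7% total drop.
print(round(total_user_loss(0.05), 4))  # ~0.0714
```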

In the last two decades, economics has been adapting to the insights and the challenges of two-sided markets. In the case of a one-sided business, like a laundromat or a mining company, there is one downstream or upstream consumer, so demand is fairly straightforward. But platforms are more complex since value must be balanced across the different participants in a platform, which leads to demand interdependencies.

In an article cited in the decision, economists David Evans and Richard Schmalensee explained the importance of their integration into competition analysis, “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. If they are ignored, then the typical analytical tools will yield incorrect assessments.

While it didn’t employ the language of demand interdependencies, the Court did agree with that general assessment:

To be sure, it is not always necessary to consider both sides of a two-sided platform. A market should be treated as one sided when the impacts of indirect network effects and relative pricing in that market are minor. Newspapers that sell advertisements, for example, arguably operate a two-sided platform because the value of an advertisement increases as more people read the newspaper. But in the newspaper-advertisement market, the indirect networks effects operate in only one direction; newspaper readers are largely indifferent to the amount of advertising that a newspaper contains. Because of these weak indirect network effects, the market for newspaper advertising behaves much like a one-sided market and should be analyzed as such.

Why does this bit matter?

In a piece in the New York Times in April, law scholar Lina Khan worried that this case would “effectively [shield] big tech platforms from serious antitrust scrutiny.” Law professor Tim Wu followed up with an op-ed just this past week in the Times expressing similar concern,

To reach this strained conclusion, the court deployed some advanced economics that it seemed not to fully understand, nor did it apply the economics in a manner consistent with the goals of the antitrust laws. Justice Stephen Breyer’s dissent mocks the majority’s economic reasoning, as will most economists, including the creators of the “two-sided markets” theory on which the court relied. The court used academic citations in the worst way possible — to take a pass on reality.

Respectfully, I have to disagree with Wu’s assessment and Khan’s worries. Both Google and Facebook more evidently fall into the newspaper category than the payments category under the majority’s opinion. Moreover, the opinion didn’t define what “weak indirect network effects” actually means in practice, so this case doesn’t leave Google and Facebook off the hook by any means.

How the Court reached that conclusion is worth exploring, however.

In contrast to newspapers, credit card payment platforms “cannot make a sale unless both sides of the platform simultaneously agree to use their services,” so, “two-sided transaction platforms exhibit more pronounced indirect network effects and interconnected pricing and demand.” The Court seems to connect two-sidedness with the simultaneity requirement. On this front, Wu is correct. They didn’t seem to fully understand the economic reasoning. It isn’t the simultaneous nature of credit cards that makes them two-sided markets, but their demand interdependencies. Newspapers also have strong demand interdependencies even though they may not feature the simultaneity of credit cards. Yet, the Court was correct in defining the market as a transactional one, where cardholders and merchants are intimately connected.  

That being said, Breyer’s economic reasoning isn’t any sharper than the majority’s:

But while the market includes substitutes, it does not include what economists call complements: goods or services that are used together with the restrained product, but that cannot be substituted for that product. See id., ¶565a, at 429; Eastman Kodak Co. v. Image Technical Services, Inc., 504 U. S. 451, 463 (1992). An example of complements is gasoline and tires. A driver needs both gasoline and tires to drive, but they are not substitutes for each other, and so the sale price of tires does not check the ability of a gasoline firm (say a gasoline monopolist) to raise the price of gasoline above competitive levels. As a treatise on the subject states: “Grouping complementary goods into the same market” is “economic nonsense,” and would “undermin[e] the rationale for the policy against monopolization or collusion in the first place.” 2B Areeda & Hovenkamp ¶565a, at 431.

Here, the relationship between merchant-related card services and shopper-related card services is primarily that of complements, not substitutes. Like gasoline and tires, both must be purchased for either to have value. Merchants upset about a price increase for merchant related services cannot avoid that price increase by becoming cardholders, in the way that, say, a buyer of newspaper advertising can switch to television advertising or direct mail in response to a newspaper’s advertising price increase.

Breyer makes a bit of a mess of demand complementarity. It isn’t the case that “both must be purchased for either to have value.” That is perfect complementarity, which is rare. Rather, when the price of gasoline increases, the demand for tires is likely to decrease as well. But the relationship doesn’t need to run the other way: when the price of tires decreases, the demand for gasoline doesn’t typically inch up. This kind of asymmetric demand relationship is quite different from the relationship on a platform, where demand is linked on both sides.

Still, Breyer buries the lede. Attributing a price increase to firms in the tire market might be wrong if demand fluctuations in the adjacent gasoline market partially caused those price changes. In other words, the reason complementary demand matters in the first place is to ensure that the court’s analysis is correct. Going back to Evans and Schmalensee, “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. Ignore those interdependencies and you get the assessments wrong.
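
To make the distinction concrete, here is a minimal sketch of the two demand structures at issue. Everything in it is an illustrative assumption: the functional forms, coefficients, and prices are mine, not the Court’s, Breyer’s, or Evans and Schmalensee’s. It simply shows why one-directional complementary demand (gasoline and tires) behaves differently from the linked, two-sided demand of a transaction platform.

# A hypothetical demand sketch, not drawn from the opinion or the cited literature.
# Coefficients and prices are arbitrary illustrative choices.

def complement_demand(p_gas, p_tires):
    # One-directional complements: a higher gasoline price drags down tire
    # demand, but the tire price barely registers in gasoline demand.
    q_gas = 100 - 2.0 * p_gas                    # gasoline demand ignores tire prices
    q_tires = 50 - 1.0 * p_tires - 0.8 * p_gas   # tire demand falls when gas gets pricey
    return q_gas, q_tires

def platform_demand(p_merchants, p_cardholders, rounds=100):
    # Two-sided transaction platform: each side's participation feeds back
    # into the other side's willingness to join, so demand is linked both ways.
    merchants, cardholders = 50.0, 50.0
    for _ in range(rounds):
        merchants = max(0.0, 100 - 2.0 * p_merchants + 0.5 * cardholders)
        cardholders = max(0.0, 100 - 2.0 * p_cardholders + 0.5 * merchants)
    return merchants, cardholders

print(complement_demand(p_gas=10, p_tires=20))             # baseline complements
print(complement_demand(p_gas=10, p_tires=5))              # cheaper tires: gasoline demand unchanged
print(platform_demand(p_merchants=10, p_cardholders=10))   # baseline platform
print(platform_demand(p_merchants=10, p_cardholders=5))    # cheaper cards: merchant-side participation rises too

In the first pair of calls, cutting the tire price leaves gasoline demand untouched; in the second, cutting the cardholder-side price raises participation on the merchant side as well. That mutual feedback is the interdependence a single, transaction-market analysis is meant to capture.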

To his credit, Breyer does rightly point out the thin definition offered by the majority:

I take from that definition that there are four relevant features of such businesses on the majority’s account: they (1) offer different products or services, (2) to different groups of customers, (3) whom the “platform” connects, (4) in simultaneous transactions.

Having simultaneous transactions isn’t the defining feature of two-sidedness, and if the lower courts come to rely on this feature to define platforms, then some assessments of competitive effects are likely to be wrong.

Amex offers up a lot for the antitrust community to consider, but in key ways the decision is incomplete. Importantly, the Court didn’t address the validity of many new analytical tools that have popped up in the past decade to understand platform market power. Take a quick glance at the papers cited in the majority opinion and you will notice how many of the references date from after 2010, when this case was first brought. In other words, Amex hardly shuts the door on future litigation.

]]>
https://techliberation.com/2018/07/06/did-the-amex-case-get-the-market-definition-correct/feed/ 2 76308
What We Learn From Past Government-Imposed Corporate Breakups Is That They Don’t Work https://techliberation.com/2018/06/28/what-we-learn-from-past-government-imposed-corporate-breakups-is-that-they-dont-work/ https://techliberation.com/2018/06/28/what-we-learn-from-past-government-imposed-corporate-breakups-is-that-they-dont-work/#respond Thu, 28 Jun 2018 18:42:13 +0000 https://techliberation.com/?p=76306

Voices from all over the political and professional spectrum have been clamoring for tech companies to be broken up. Tech investor Roger McNamee, machine learning pioneer Yoshua Bengio, NYU professor Scott Galloway, and even Marco Rubio’s 2016 presidential digital director have all suggested that tech companies should be forcibly separated. So, I took a look at some of the past efforts in a new survey of corporate breakups and found that they really weren’t all that effective at creating competitive markets.

Although many consider Standard Oil and AT&T the classic cases, I think United States v. American Tobacco Company is far more instructive.

Like Standard Oil, the American Tobacco Company was organized as a trust and came to control nearly 75 percent of the total market by buying both the Union Tobacco Company and the Continental Tobacco Company. But unlike Standard Oil, as soon as these companies were bought, they were integrated within American Tobacco. In 1908 the federal government filed, and eventually won, a lawsuit under the Sherman Act that dissolved the trust into three companies, which in theory matched the original three.

Yet the breakup wasn’t as easy as simply splitting the larger company back into its original three pieces, since the successor companies had intertwined processes. A single purchasing department managed all leaf buying. Processing plants had been assigned to specific products without any concern for their previous ownership. For eight months of tense negotiations, the government pulled apart factories, distribution and storage facilities, and brand names. Office by office, the company was pulled apart by government fiat.

Historian Allan M. Brandt had this to say in The Cigarette Century:

It was one thing to identify monopolistic practices and activities in restraint of trade, and quite another to figure out how to return the tobacco industry to some form of regulated competition. Even those who applauded the breakup of American Tobacco soon found themselves critics of the negotiated decree restructuring the industry. This would not be the last time that the tobacco industry would successfully turn a regulatory intervention to its own advantage.

While some might think that breaking up companies would be a clean operation, American Tobacco suggests the opposite. And I’m not alone in this assessment. Here is what Robert Crandall had to say a couple of years back in a piece for the Brookings Institution:

[W]ith one exception, the breakup of AT&T in 1984, there is very little evidence that such relief is successful in increasing competition, raising industry output, and reducing prices to consumers. The exception turns out to be a case of overkill because the same results could have been obtained through a simple regulatory rule, obviating the need for vertical divestiture of AT&T.

In other words, this method simply does not achieve competitive markets.

If you’re interested in the longer piece, you can find it over at American Action Forum.

]]>
https://techliberation.com/2018/06/28/what-we-learn-from-past-government-imposed-corporate-breakups-is-that-they-dont-work/feed/ 0 76306
A Roundup of Commentary on the Supreme Court’s Carpenter v. United States Decision https://techliberation.com/2018/06/25/a-roundup-of-commentary-on-the-supreme-courts-carpenter-v-united-states-decision/ https://techliberation.com/2018/06/25/a-roundup-of-commentary-on-the-supreme-courts-carpenter-v-united-states-decision/#comments Mon, 25 Jun 2018 13:08:42 +0000 https://techliberation.com/?p=76289

On Friday, the Supreme Court ruled on Carpenter v. United States, a case involving cell-site location information. In the 5 to 4 decision, the Court declared that “The Government’s acquisition of Carpenter’s cell-site records was a Fourth Amendment search.” What follows below is a roundup of reactions and comments to the decision.

Ashkhen Kazaryan, Legal Fellow at TechFreedom, had this to say about the ruling:

This ruling recognizes the immensely sensitive nature of cell phone location data, and rightly requires a showing of probable cause before law enforcement can obtain location information from mobile carriers. Our country’s Founders would have expected no lesser safeguards to apply to non-stop surveillance. Indeed, the American Revolution was first instigated over surveillance that was far less invasive.

Ryan Radia at Competitive Enterprise Institute commended the decision:

Although the court’s opinion was narrowly crafted to address the particular facts in this case, its decision underscores the court’s willingness to apply rigorous scrutiny to governmental surveillance involving new technologies. In the United States, the Constitution protects people from unreasonable searches and seizures, and Fourth Amendment protection should apply to private information held on or collected through our personal devices.

Curt Levy, president of the Committee for Justice, penned an op-ed for Fox News:

Rapid technological change inevitably outpaces the glacial evolution of the law and the Carpenter case is a perfect example. The location data in question was obtained under the Stored Communications Act (SCA), which did not require prosecutors to meet the “probable cause” standard of a warrant.

So Timothy Carpenter turned to the Constitution. But the Justice Department argued that the Fourth Amendment didn’t apply because of the Supreme Court’s Third-Party Doctrine. That doctrine holds that no search or seizure occurs when the government obtains data that the accused has voluntarily conveyed to a third party – in this case, one’s wireless provider.

The Third-Party Doctrine made some sense when it was invented 40 years ago. However, when applied to today’s modern technology, the doctrine results in a gaping hole in the Fourth Amendment…

The good news is that the Supreme Court took a big step towards repairing that hole Friday. In an opinion by Chief Justice John Roberts, the court acknowledged that Fourth Amendment doctrines must evolve to account for “seismic shifts in digital technology.”

Orin Kerr runs through nine questions you might have on the decision over at the Volokh Conspiracy:

(9) Does This Reasoning Apply Just For Physical Location Tracking, Or Does It Apply More Broadly?

That’s the big question. On one hand, the reasoning of the opinion is largely about tracking a person’s physical location. The opinion takes as a given that you have a reasonable expectation of privacy in the “whole” of your “physical movements.” The Court has never held that, so it’s sort of an unusual thing to just assume! But the Court seems to be getting it mostly from Justice Alito’s Jones concurrence, and the idea, as Alito wrote in Jones, that “society’s expectation has been that law enforcement agents and others would not— and indeed, in the main, simply could not—secretly monitor and catalogue every single movement of an individual’s car for a very long period.” …

On the other hand, there’s lots of language in the opinion that cuts the other way. Although the Court “decides no more than the case before us,” it also recasts a lot of doctrine in ways that could be used to argue for lots of other changes. Its use of equilibrium-adjustment will open the door to lots of new arguments about other records that are also protected. For example, what is the scope of this reasonable expectation of privacy in the “whole” of physical movements? Why is it there? The Jones concurrences were really light on that, and Carpenter doesn’t do much beyond citing them for it: What is this doctrine and where did it come from? (And what other reasonable expectations of privacy in things do people have that we didn’t know about, and what will violate them?)

Cato’s Ilya Shapiro and Julian Sanchez comment on the Supreme Court’s decision in this Cato Daily podcast.

Columbia Law Professor Eben Moglen of the Software Freedom Law Center also opined on the decision:

The decision in Carpenter v. United States is a groundbreaking change in the application of the Fourth Amendment in digital society. By stating that the pervasive geographic location data assembled by cellular providers is not insulated from the warrant requirement even though it is information collected by third parties, the Court has fundamentally changed the principles underlying the application of the Amendment before today. The Court has stated that its present decision is narrow and factual, but a flood of further cases will seek to widen the meaning of today’s opinion.

]]>
https://techliberation.com/2018/06/25/a-roundup-of-commentary-on-the-supreme-courts-carpenter-v-united-states-decision/feed/ 1 76289
A Roundup of Reactions to the Supreme Court’s Decision for Online Sales Tax https://techliberation.com/2018/06/22/a-roundup-of-reactions-to-the-supreme-courts-decision-for-online-sales-tax/ https://techliberation.com/2018/06/22/a-roundup-of-reactions-to-the-supreme-courts-decision-for-online-sales-tax/#comments Fri, 22 Jun 2018 17:12:57 +0000 https://techliberation.com/?p=76286

Yesterday, the Supreme Court dropped a decision in South Dakota v. Wayfair, a case on online sales taxes. As always, the holding is key: “Because the physical presence rule of Quill is unsound and incorrect, Quill Corp. v. North Dakota, 504 U. S. 298, and National Bellas Hess, Inc. v. Department of Revenue of Ill., 386 U. S. 753, are overruled.” What follows below is a roundup of reactions and comments to the decision.

Joseph Bishop-Henchman at the Tax Foundation thinks this decision sets up a new political fight in Congress and in the states:

All eyes will now turn to Congress and the states. Congress has been stymied between alternate versions of federal solutions: the Remote Transactions Parity Act (RTPA) or Marketplace Fairness Act (MFA), which lets states collect if they agree to simplify their sales taxes, and a proposal from retiring Rep. Bob Goodlatte (R-VA) that would make the sales tax a business obligation rather than a consumer obligation, and have it collected based on the tax rate where the company is located but send the revenue to the jurisdiction where the customer is located. RTPA and MFA are more workable and more likely to pass, but Goodlatte controls what makes it to the House floor, so nothing has happened. Maybe today’s decision will change that.

Berin Szoka at TechFreedom noted:

For the last twenty-six years, the Internet has flourished because of the legal certainty created by Quill. Now, no retailer can know whether it must collect taxes, and smaller retailers face huge challenges. As Chief Justice Roberts notes, the majority ‘breezily disregards the costs that its decision will impose on retailers.’ The majority insists that software will fix the problem of calculating the correct state and local sales tax for every transaction, but with over 10,000 jurisdictions taxing similar products differently, the problem is nightmarishly complicated.

My colleague Doug Holtz-Eakin explains the tension:

What is the economic upshot of this decision? Certainly, it puts in-state and brick-and-mortar retailers on a level playing field with online sellers. In isolation, that is an improvement in the efficiency of the economy because people will shop based on the product and experience and not the tax consequences. Recall, however, that in many states a resident is liable for the “use tax” on her out-of-state purchases. If the sales tax is now being collected, it will be important for states either to drop the use tax or to make sure that there is no double taxation in some other way. If not, then the result of this decision will be less efficiency.

Another aspect of the decision is the impact on federalism and the notion of representation. The decision means that South Dakota can now dictate some of the business operations of firms that have no representation in the South Dakota legislature. Is that fair? Moreover, firms can no longer shop among states to find the sales tax regime that they like best — they will be subject to the same sales taxes across the country regardless of where they operate.

Grover Norquist at Americans for Tax Reform had this to say:

Today the Supreme Court said ‘yes—you can be taxed by politicians you do not elect and who act knowing you are powerless to object.’ This power can now be used to export sales taxes, personal and corporate income taxes, and opens the door for the European Union to export its tax burden onto American businesses—as they have been demanding…

We fought the American Revolution in large part to oppose the very idea of taxation without representation. Today, the Supreme Court announced, ‘oops’ governments can now tax those outside their borders—those who have no political power, no vote, no voice.

Adam Michel of the Heritage Foundation also focused on federalism at The Daily Signal:

The new status quo under Wayfair is untenable, creating a Wild West for state sales taxes. Some will point to seemingly easy solutions that have been promoted for decades. One example is the Remote Transactions Parity Act, sponsored by Rep. Kristi Noem, R-S.D.

Noem’s bill would maintain the new expanded power of state tax collectors, while imposing nominal limits and simplifications on states’ tax rules.

Such proposals that force sellers to track their sales to the consumer’s destination and comply with laws in other jurisdictions are fundamentally at odds with the principles of local government and American federalism.

Rob Port is concerned about the interstate commerce implications:

The purpose of the interstate commerce clause is to prevent the nightmare of fifty states squabbling with one another over trade wars between their constituent industries, or trying to exert political influence on one another. Congress, and not the states, is to regulate interstate commerce.

I feel like the Supreme Court, by overturning Quill and giving the states new powers to tax beyond their borders, has weakened interstate commerce protections and cracked open the lid to a real can of worms.

All Things SCOTUS has a list of reactions from conservatives.

]]>
https://techliberation.com/2018/06/22/a-roundup-of-reactions-to-the-supreme-courts-decision-for-online-sales-tax/feed/ 2 76286
Mandating AI Fairness May Come At The Expense Of Other Types of Fairness https://techliberation.com/2018/06/21/mandating-ai-fairness-may-come-at-the-expense-of-other-types-of-fairness/ https://techliberation.com/2018/06/21/mandating-ai-fairness-may-come-at-the-expense-of-other-types-of-fairness/#respond Thu, 21 Jun 2018 18:12:25 +0000 https://techliberation.com/?p=76285

Two years ago, ProPublica initiated a conversation over the use of risk assessment algorithms when they concluded that a widely used “score proved remarkably unreliable in forecasting violent crime” in Florida. Their examination of the racial disparities in scoring has been cited countless times, often as a proxy for the power of automation and algorithms in daily life. Indeed, as the authors concluded, these scores are “part of a larger examination of the powerful, largely hidden effect of algorithms in American life.”

As this examination continues, two precepts are worth keeping in mind. First, the social significance of algorithms needs to be considered, not just their internal model significance. While the accuracy of algorithms is important, more emphasis should be placed on how they are used within institutional settings. And second, fairness is not a single idea. Mandates for certain kinds of fairness could come at the expense of other forms of fairness. As always, policymakers need to be cognizant of the trade offs.

Statistical significance versus social significance

The ProPublica study arrived at a critical juncture in the conversation over algorithms. In the tech space, TensorFlow, Google’s artificial intelligence (AI) engine, had been released in 2015, sparking interest in algorithms and the application of AI to commercial problems. At the same time, in the political arena, sentencing reform was gaining steam. Senators Rand Paul and Cory Booker helped bring wider attention to the need for reforms to the criminal justice system through their efforts to pass the REDEEM Act. Indeed, when the Koch Brothers’ political network announced more than $5 million in spending for criminal justice reform, the Washington Post noted that it underscored “prison and sentencing reform’s unique position as one of the nation’s most widely discussed policy proposals as well as one with some of the most broad political backing.”

Model selection is a critical component of any study, so it is no wonder that criticism of risk assessment algorithms has focused on this aspect of the process. Error bars might reflect precision, but they tell us little about a model’s applicability. More importantly, however, implementation isn’t frictionless. People have to use these tools to make decisions. Algorithms must be integrated within a set of processes that involve the messiness of human relations. Because of the variety of institutional settings, there is sure to be significant variability in how they come to be used. The impact of real decision-making processes isn’t constrained only by the accuracy of the models, but also by the purposes to which they are applied.

In other words, the social significance of these models, how they come to be used in practice, is just as pertinent a question for policy makers as their statistical significance.

Angèle Christin, a professor at Stanford who studies these topics, made the issue abundantly clear when she noted:

Yet it is unclear whether these risk scores always have the meaningful effect on criminal proceedings that their designers intended. During my observations, I realized that risk scores were often ignored. The scores were printed out and added to the heavy paper files about defendants, but prosecutors, attorneys, and judges never discussed them. The scores were not part of the plea bargaining and negotiation process. In fact, most of judges and prosecutors told me that they did not trust the risk scores at all. Why should they follow the recommendations of a model built by a for-profit company that they knew nothing about, using data they didn’t control? They didn’t see the point. For better or worse, they trusted their own expertise and experience instead. (emphasis added)

Christin’s on-the-ground experience urges scholars to consider how these algorithms have come to be implemented in practice. As she points out, institutions engage in various kinds of rituals to appear modern, chief among them the acquisition of new technological tools. Changing practices within workplaces is a much more difficult task than reformers would like to imagine. Instead, a typical reaction by those who have long worked within a system is to manipulate the tool to look compliant.

The implementation of pretrial risk assessment instruments highlights the potential variability when algorithms are deployed. These instruments can help guide judges in deciding what will happen to a defendant before trial. Will the defendant be released on bail, and at what cost? The most popular of these instruments is known as the Public Safety Assessment, or simply the PSA, which was developed by the Laura and John Arnold Foundation and has been adopted in over 30 jurisdictions in the last five years.

The adoption of the PSA across regions helps to demonstrate just how disparate implementation can be. In New Jersey, the adoption of the PSA seems to have correlated with a dramatic decline in the pretrial detention rate. In Lucas County, Ohio, the pretrial detention rate increased after the PSA was put into place. In Chicago, judges seem to be simply ignoring the PSA. Indeed, there appears to be little agreement on how well the PSA’s high-risk classification corresponds to reality, as re-arrest rates can be as low as 10 percent or as high as 42 percent, depending on how the PSA is integrated in a region.

And in the most comprehensive study of its kind, George Mason University law professor Megan Stevenson looked at Kentucky after it implemented the PSA and found significant changes in bail-setting practices, but only a small increase in pretrial release. Over time these changes eroded as judges returned to their previous habits. If this tendency to revert to old habits is widespread, then why even implement these pretrial risk instruments?

Although it was focused on pretrial risk assessments, Stevenson’s call for a broader understanding of these tools applies to the entirety of algorithm research:

Risk assessment in practice is different from risk assessment in the abstract, and its impacts depend on context and details of implementation. If indeed risk assessment is capable of producing large benefits, it will take research and experimentation to learn how to achieve them. Such a process would be evidence-based criminal justice at its best: not a flocking towards methods that bear the glossy veneer of science, but a careful and iterative evaluation of what works and what does not.

Algorithms are tools. While it is important to understand how well calibrated a tool is, researchers need to focus on how that tool impacts real people working with and within institutions that have embedded cultural and historical practices.

Trade offs in fairness determinations

Julia Angwin and her team at ProPublica helped to spark a new interest in algorithmic decision-making when they dove deeper into a commonly used post-trial sentencing tool known as COMPAS. Instead of predicting behavior before a trial takes place, COMPAS purports to predict a defendant’s risk of committing another crime during the sentencing phase, after a defendant has been found guilty. As they discovered, the risk system was biased against African-American defendants, who were more likely to be incorrectly labeled as higher-risk than they actually were. At the same time, white defendants were labeled as lower-risk than was actually the case.

Superficially, that seems like a simple problem to solve. Just add features to the algorithm that account for race and rerun the tool. If only the algorithm paid attention to this bias, the outcome could be corrected. Or so goes the thinking.

But let’s take a step back and consider what these tools really represent. The task of the COMPAS tool is to estimate the degree to which people pose a future risk. In this sense, the algorithm aims for calibration, one of at least three distinct ways we might understand fairness. Aiming for fairness through calibration means that people identified as having some probability of committing an act do in fact commit it at about that rate. Indeed, as subsequent research has found, the people who committed crimes were correctly distributed across risk categories within each group. In other words, the algorithm did correctly identify a set of people as having a given probability of committing a crime.

Angwin’s criticism is of another kind, as Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan explain in “Inherent Trade-Offs in the Fair Determination of Risk Scores.” The kind of fairness that Angwin aligns with might be understood as balance for the positive class. This notion is violated when the people who later turn out to be in the positive class were, on average, initially assigned lower scores by the algorithm in one group than in another. For example, as the ProPublica study found, white defendants who did commit crimes in the future were assigned lower risk scores.

Similarly, balance for the negative class is the mirror image. It is violated when the people who later turn out not to be in the class were, on average, initially assigned higher scores in one group than in another. Both of these conditions try to capture the idea that groups should face comparable false negative and false positive rates.
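
For readers who want the three notions in concrete terms, here is a minimal sketch of how one might compute them from a validation set. It is my own illustration, not code from ProPublica or from Kleinberg, Mullainathan, and Raghavan; the array names and the binning choice are assumptions made for the example.

import numpy as np

def fairness_report(scores, outcomes, groups, n_bins=10):
    # scores: predicted risk in [0, 1]; outcomes: 1 if the person re-offended;
    # groups: group label for each person. All three are NumPy arrays of equal length.
    report = {}
    for g in np.unique(groups):
        s, y = scores[groups == g], outcomes[groups == g]
        # Calibration: within each score bin, the observed re-offense rate
        # should roughly match the average predicted score.
        bins = np.clip((s * n_bins).astype(int), 0, n_bins - 1)
        calibration = {int(b): (float(s[bins == b].mean()), float(y[bins == b].mean()))
                       for b in range(n_bins) if np.any(bins == b)}
        # Balance for the positive class: the average score assigned to people
        # who actually re-offended should be similar across groups.
        pos_balance = float(s[y == 1].mean()) if np.any(y == 1) else float("nan")
        # Balance for the negative class: the average score assigned to people
        # who did not re-offend should be similar across groups.
        neg_balance = float(s[y == 0].mean()) if np.any(y == 0) else float("nan")
        report[g] = {"calibration": calibration,
                     "avg_score_positive_class": pos_balance,
                     "avg_score_negative_class": neg_balance}
    return report

Comparing the positive- and negative-class averages across groups corresponds to the kind of disparity ProPublica highlighted, while the calibration check is the one COMPAS largely passed in follow-up research.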

After formalizing these three conditions for fairness, Kleinberg, Mullainathan, and Raghavan proved that it isn’t possible to satisfy all constraints simultaneously except in highly constrained special cases. These results hold regardless of how the risk assignment is computed, since “it is simply a fact about risk estimates when the base rates differ between two groups.”
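
A deliberately crude toy case, of my own construction rather than the paper’s, shows why the conditions pull against each other when base rates differ: give every member of a group a score equal to that group’s base rate. The numbers below are hypothetical.

# Hypothetical base rates for two groups. The constant-score predictor below is
# perfectly calibrated by construction: each group's average outcome equals its score.
base_rates = {"group_a": 0.5, "group_b": 0.2}

for group, p in base_rates.items():
    score = p                               # everyone in the group gets the same score
    avg_score_among_reoffenders = score     # re-offenders in the group got this score
    avg_score_among_non_reoffenders = score # and so did non-re-offenders
    print(group, avg_score_among_reoffenders, avg_score_among_non_reoffenders)

# The average score among actual re-offenders is 0.5 in one group and 0.2 in the
# other, so balance for the positive class fails even though calibration holds.
# Adjusting the scores to equalize those averages would, in turn, break calibration,
# which is the tension the impossibility result formalizes (outside the special
# cases of perfect prediction or identical base rates).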

What this means is that some views of fairness might simply be incompatible with each other. Balancing for one notion of fairness is likely to come at the expense of another.

This trade off is really a subclass of a larger problem that is of central focus in data science, econometrics, and statistics. As Pedro Domingos noted:

You should be skeptical of claims that a particular technique “solves” the overfitting problem. It’s easy to avoid overfitting (variance) by falling into the opposite error of underfitting (bias). Simultaneously avoiding both requires learning a perfect classifier, and short of knowing it in advance there is no single technique that will always do best (no free lunch).
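
To see the flavor of Domingos’s point outside the fairness context, here is a small, generic sketch of the bias-variance trade-off he describes. It is not tied to any risk-assessment tool; the polynomial degrees, noise level, and sample sizes are arbitrary choices made for illustration.

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)                          # the "true" signal the model should learn
x_train = np.sort(rng.uniform(-1, 1, 15))
y_train = f(x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.sort(rng.uniform(-1, 1, 200))
y_test = f(x_test) + rng.normal(0, 0.3, x_test.size)

for degree in (1, 4, 12):                            # rigid (underfit), moderate, flexible (overfit)
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# The flexible model tends to drive training error toward zero (low bias) while
# generalizing worse to new data (high variance); the rigid model does the reverse.
# Avoiding both at once is not possible in general, which is the "no free lunch"
# Domingos flags.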

Internalizing these lessons about fairness requires a shift in framing. For those working in the AI field, for those actively deploying algorithms, and especially for policy makers, fairness mandates will likely create trade offs. If most algorithms cannot achieve multiple notions of fairness simultaneously, then every decision to balance for class attributes is likely to take away from efficiency elsewhere. This isn’t to say that we shouldn’t strive to optimize fairness. Rather, it is simply important to recognize that mandating one type of fairness may necessarily come at the expense of a different type of fairness.

Understanding the internal logic of risk assessment tools is not the end of the conversation. Without data on how they are used, these algorithms could entrench bias, uproot it, or have ambiguous effects. To have an honest conversation, we need to understand how they nudge decisions in the real world.

]]>
https://techliberation.com/2018/06/21/mandating-ai-fairness-may-come-at-the-expense-of-other-types-of-fairness/feed/ 0 76285
Is STELA the Vehicle for Video Reform? https://techliberation.com/2014/08/08/is-stela-the-vehicle-for-video-reform/ https://techliberation.com/2014/08/08/is-stela-the-vehicle-for-video-reform/#respond Fri, 08 Aug 2014 18:21:31 +0000 http://techliberation.com/?p=74674

Even though few things are getting passed this Congress, the pressure is on to reauthorize the Satellite Television Extension and Localism Act (STELA) before it expires at the end of this year. Unsurprisingly, many have hoped this “must pass” bill will be the vehicle for broader reform of video law. Getting video law right is important for our content-rich world, but the discussion needs to expand much further than STELA.

Over at the American Action Forum, I explore a bit of what would be needed, and just how far the problems are rooted:

The Federal Communications Commission’s (FCC) efforts to spark localism and diversity of voices in broadcasting stands in stark contrast to relative lack of regulation governing non-broadcast content providers like Netflix and HBO, which have revolutionized delivery and upped the demand for quality content. These amorphous social goals also have limited broadcasters. Without any consideration for the competitive balance in a local market, broadcasters are barred in what they can own, are saddled with various programming restrictions, and are subject to countless limitations in the use of their spectrum. Moreover, the FCC has sought to outlaw deals between broadcasters who negotiate jointly for services and ads.

In the effort to support specific “public interest” goals, the FCC has implemented certain regulations which have cabined both broadcasters and paid TV distributors. In turn, these regulations forced companies to develop in proscribed ways, and in turn prompted further regulatory action when they have tried to innovate. Speaking about this cat-and-mouse game in the financial sector, Professor Edward Kane termed the relationship, the “regulatory dialectic.”

But unwrapping the regulatory dialectic in video law will require a vehicle far more expansive than STELA. Ultimately, I conclude:

Both the quality of programming and the means of accessing it have undergone dramatic changes in the past two decades but the regulations have not. Consumer preferences and choices are shifting, which needs to be met by alterations in the regulatory regime. STELA is one part of the puzzle, but like so many other areas of telecommunication law, a comprehensive look at the body of laws ruling video is needed. It is increasingly clear that the laws governing programming must be updated to meet the 21st century marketplace.

On this site especially, there has been a vigorous debate on just what this framework would entail. For a more comprehensive look, check out:

  • Geoffrey Manne’s testimony on STELA before the House of Representatives’ Energy and Commerce Committee;
  • Adam Thierer’s and Brent Skorup’s paper on video law entitled, “Video Marketplace Regulation: A Primer on the History of Television Regulation and Current Legislative Proposals”;
  • Ryan Radia’s blog post entitled, “A Free Market Defense of Retransmission Consent”;
  • Fred Campbell’s white paper on the “Future of Broadcast Television,” as well as his various posts on the subject;
  • And Hance Haney’s posts on video law.
]]>
https://techliberation.com/2014/08/08/is-stela-the-vehicle-for-video-reform/feed/ 0 74674