Moral Panics – Technology Liberation Front
https://techliberation.com — Keeping politicians' hands off the Net & everything else related to technology

On "Pausing" AI
https://techliberation.com/2023/04/07/on-pausing-ai/
Fri, 07 Apr 2023

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a 6-month "pause" on the deployment and research of "giant" artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the "pause" letter did not go far enough; he proposed that governments consider "airstrikes" against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any other technopanic I've covered in my 31 years in the field of tech policy—and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Meserole of Brookings, we argue that "the 'pause' we are most in need of is one on dystopian AI thinking." The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and by common law standards allowing for development of new regulatory tools that can be developed as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.”

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors go on to rightly note:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In this new R Street Institute study, I explain why "Getting AI Innovation Culture Right" is essential to ensure we can enjoy the many benefits that algorithmic systems offer, while also staying competitive in the global race for advantage in this space.

That report is the first in a trilogy of big studies on decentralized, flexible governance of artificial intelligence. We can achieve AI safety without crushing top-down bans or unworkable “pauses,” I argue. My next two papers are, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” (due out April 20th) and “Existential Risks & Global Governance Issues Surrounding AI & Robotics” (due out late May or early June). I’m also working on a co-authored essay taking a deep dive into the idea of AI impact assessments / auditing (late Spring / early Summer).

Relatedly, on April 7th, DeepLearning.AI held an event on "Why a 6-Month AI Pause is a Bad Idea," featuring leading AI scientists Andrew Ng and Yann LeCun discussing the trade-offs associated with the proposal. A crucial point made in the discussion is that a pause, especially a pause in the form of a governmental ban, would be a misguided innovation policy decision. They stressed that there will be policy interventions to address targeted risks from specific algorithmic applications, but that it would be a serious mistake to stop the overall development of the underlying technological capabilities. It's worth watching.

For more on AI policy, here's a list of some of my latest reports and essays. Much more to come. AI policy will be the biggest tech policy fight of our lifetimes.

Why Isn't Everyone Already Unemployed Due to Automation?
https://techliberation.com/2023/03/11/why-isnt-everyone-already-unemployed-due-to-automation/
Sat, 11 Mar 2023

I have a new R Street Institute policy study out this week doing a deep dive into the question: “Can We Predict the Jobs and Skills Needed for the AI Era?” There’s lots of hand-wringing going on today about AI and the future of employment, but that’s really nothing new. In fact, in light of past automation panics, we might want to step back and ask: Why isn’t everyone already unemployed due to technological innovation?

To get my answers, please read the paper! In the meantime, here’s the executive summary:

To better plan for the economy of the future, many academics and policymakers regularly attempt to forecast the jobs and worker skills that will be needed going forward. Driving these efforts are fears about how technological automation might disrupt workers, skills, professions, firms and entire industrial sectors. The continued growth of artificial intelligence (AI), robotics and other computational technologies exacerbate these anxieties. Yet the limits of both our collective knowledge and our individual imaginations constrain well-intentioned efforts to plan for the workforce of the future. Past attempts to assist workers or industries have often failed for various reasons. However, dystopian predictions about mass technological unemployment persist, as do retraining or reskilling programs that typically fail to produce much of value for workers or society. As public efforts to assist or train workers move from general to more specific, the potential for policy missteps grows greater. While transitional-support mechanisms can help alleviate some of the pain associated with fast-moving technological disruption, the most important thing policymakers can do is clear away barriers to economic dynamism and new opportunities for workers.

I do discuss some things that government can do to address automation fears at the end of the paper, but it's important that policymakers first understand all the mistakes we've made with past retraining and reskilling efforts. The easiest way to help in the short term, I argue, is to clear away barriers to labor mobility and economic dynamism. Again, read the study for details.

For more info on other AI policy developments, check out my running list of research on AI, ML and robotics policy.

Why the Endless Techno-Apocalyptica in Modern Sci-Fi?
https://techliberation.com/2022/09/02/why-the-endless-techno-apocalyptica-in-modern-sci-fi/
Fri, 02 Sep 2022

James Pethokoukis of AEI interviews me about the current miserable state of modern science fiction, which is dripping with dystopian dread in every movie, show and book plot. How does all this techno-apocalyptica affect societal and political attitudes about innovation broadly and emerging technologies in particular? Our discussion builds on a recent Discourse article of mine, "How Science Fiction Dystopianism Shapes the Debate over AI & Robotics." [Pasted down below.] Swing on over to Jim's "Faster, Please" newsletter and hear what Jim and I have to say. And, as a bonus question, Jim asked me if we are doing a good job of inspiring kids to have a sense of wonder and to take risks. I have some serious concerns that we are falling short on that front.

How Science Fiction Dystopianism Shapes the Debate over AI & Robotics

[Originally ran on Discourse on July 26, 2022.]

George Jetson will be born this year. We don’t know the exact date of this fictional cartoon character’s birth, but thanks to some skillful Hanna-Barbera hermeneutics the consensus seems to be sometime in 2022.

In the same episode in which we learn George's approximate age, we're also told the good news that his life expectancy in the future is 150 years. It was one of the many ways The Jetsons, though a cartoon for children, depicted a better future for humanity thanks to exciting innovations. Another was a helpful robot named Rosie, along with a host of other automated technologies—including a flying car—that made George and his family's life easier.


Most fictional portrayals of technology today are not as optimistic as The Jetsons, however. Indeed, public and political conceptions of artificial intelligence (AI) and robotics in particular are being strongly shaped by the relentless dystopianism of modern science fiction novels, movies and television shows. And we are worse off for it.

AI, machine learning, robotics and the power of computational science hold the potential to drive explosive economic growth and profoundly transform a diverse array of sectors, while providing humanity with countless technological improvements in medicine and healthcare, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry and many others. Indeed, these technologies are already deeply embedded in these and other industries and making a huge difference.

But that progress could be slowed and in many cases even halted if public policy is shaped by a precautionary-principle-based mindset that imposes heavy-handed regulation based on hypothetical worst-case scenarios. Unfortunately, the persistent dystopianism found in science fiction portrayals of AI and robotics conditions the ground for public policy debates, while also directing attention away from some of the more real and immediate issues surrounding these technologies.

Incessant Dystopianism Untethered from Reality

In his recent book Robots, Penn State business professor John Jordan observes how over the last century “science fiction set the boundaries of the conceptual playing field before the engineers did.” Pointing to the plethora of literature and film that depicts robots, he notes: “No technology has ever been so widely described and explored before its commercial introduction.” Not the internet, cell phones, atomic energy or any others.

Indeed, public conceptions of these technologies, and even the very vocabulary of the field, have been shaped heavily by sci-fi plots beginning a hundred years ago with the 1920 play R.U.R. (Rossum's Universal Robots), which gave us the term "robot," and Fritz Lang's 1927 silent film Metropolis, with its memorable Maschinenmensch, or "machine-human." There has been a deep and rich imagination surrounding AI and robotics ever since, but it has tended to be mostly negative and has grown more hostile over time.

The result has been a public and policy dialogue about AI and robotics that is focused on an endless parade of horribles about these technologies. Not surprisingly, popular culture also affects journalistic framings of AI and robotics. Headlines breathlessly scream of how “Robots May Shatter the Global Economic Order Within a Decade,” but only if we’re not dead already because… “If Robots Kill Us, It’s Because It’s Their Job.”

Dark depictions of AI and robotics are ever-present in popular modern sci-fi movies and television shows. A short list includes: 2001: A Space Odyssey, Avengers: Age of Ultron, Battlestar Galactica (both the 1978 original and the 2004 reboot), Black Mirror, Blade Runner, Ex Machina, Her, The Matrix, Robocop, The Stepford Wives, Terminator, Transcendence, Tron, WALL-E, Wargames and Westworld, among countless others. The least nefarious plots among these films and television shows rest on the idea that AI and robotics are going to drive us to a life of distraction, addiction or sloth. In more extreme cases, we're warned about a future in which we are either going to be enslaved or destroyed by our new robotic or algorithmic overlords.

Don't get me wrong; the movies and shows on the above list are some of my favorites. 2001 and Blade Runner are both in my top five all-time flicks, and the reboot of Battlestar is one of my favorite TV shows. The plots of all these movies and shows are terrifically entertaining and raise many interesting issues that make for fun discussions.

But they are not representative of reality. In fact, the vast majority of computer scientists and academic experts on AI and robotics agree that claims about machine “superintelligence” are wildly overplayed and that there is no possibility of machines gaining human-equivalent knowledge any time soon—or perhaps ever. “In any ranking of near-term worries about AI, superintelligence should be far down the list,” argues Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans.

Contra the Terminator-esque nightmares envisioned in so many sci-fi plots, MIT roboticist Rodney Brooks says that "fears of runaway AI systems either conquering humans or making them irrelevant aren't even remotely well grounded." John Jordan agrees, noting: "The fear and uncertainty generated by fictional representations far exceed human reactions to real robots, which are often reported to be 'underwhelming.'"

The same is true for AI more generally. "A close inspection of AI reveals an embarrassing gap between actual progress by computer scientists working on AI and the futuristic visions they and others like to describe," says Erik Larson, author of The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do. Larson refers to this extreme thinking about superintelligent AI as "technological kitsch," or exaggerated sentimentality and melodrama that is untethered from reality. Yet the public imagination remains captivated by tales of impending doom.

Seeding the Ground with Misery and Misguided Policy

But isn’t it all just harmless fun? After all, it’s just make believe. Moreover, can’t science fiction—no matter how full of techno-misery—help us think through morally weighty issues and potential ethical conundrums involving AI and robotics?

Yes and no. Titillating fiction has always had a cathartic element to it and helped us cope with the unknown and mysterious. Most historians believe it was Aristotle in his Poetics who first used the term katharsis when discussing how Greek tragedies helped the audience “through pity and fear effecting the proper purgation of these emotions.”

But are modern science fiction depictions of AI and robotics helping us cope with technological change, or instead just stoking a constant fear of it? Modern sci-fi isn’t so much purging negative emotion about the topic at hand as it is endlessly adding to the sense of dread surrounding these technologies. What are the societal and political ramifications of a cultural frame of reference that suggests an entire new class of computational technologies will undermine rather than enrich our human experiences and, possibly, our very existence?

The New Yorker's Jill Lepore says we live in "A Golden Age for Dystopian Fiction," but she worries that this body of work "cannot imagine a better future, and it doesn't ask anyone to bother to make one." She argues this "fiction of helplessness and hopelessness" instead "nurses grievances and indulges resentments" and that "[i]ts only admonition is: Despair more." Lepore goes so far as to claim that, because "the radical pessimism of an unremitting dystopianism" has appeal to many on both the left and right, it "has itself contributed to the unravelling of the liberal state and the weakening of a commitment to political pluralism."

I'm not sure dystopian fiction is driving the unravelling of pluralism, but Lepore is on to something when she notes how a fiction rooted in misery about the future will likely have political consequences at some point.

Techno-panic Thinking Shapes Policy Discussions

The ultimate question is whether public policy toward new AI and robotic technologies will be shaped by this hyperpessimistic thinking in the form of precautionary principle regulation, which essentially treats innovations as “guilty until proven innocent” and seeks to intentionally slow or retard their development.

If the extreme fears surrounding AI and robotics do inspire precautionary controls—as they already have in the European Union—then we need to ask how the preservation of the technological status quo could undermine human well-being by denying society important new life-enriching and life-saving goods and services. Technological stasis does not provide a safer or healthier society, but instead holds back our collective ability to innovate, prosper and better our lives in meaningful ways.

Louis Anslow, curator of the Pessimists Archive, calls this "the Black Mirror fallacy," referencing the British television show that has enjoyed great success peddling tales of impending techno-disasters. Anslow defines the fallacy as follows: "When new technologies are treated as much more threatening and risky than old technologies with proven risks/harms. When technological progress is seen as a bigger threat than technological stagnation."

Anslow’s Pessimists Archive collects real-world case studies of how moral panic and techno-panics have accompanied the introduction of new inventions throughout history. He notes, “Science fiction has conditioned us to be hypervigilant about avoiding dystopias born of technological acceleration and totally indifferent to avoiding dystopias born of technological stagnation.”

Techno-panics can have real-world consequences when they come to influence policymaking. Robert Atkinson, president of the Information Technology & Innovation Foundation (ITIF), has documented the many ways that “the social and political commentary [about AI] has been hype, bordering on urban myth, and even apocalyptic.” The more these attitudes and arguments come to shape policy considerations, the more likely it is precautionary principle-based recommendations will drive AI and robotics policy, preemptively limiting their potential. ITIF has published a report documenting “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” identifying how it will slow algorithmic advances in key sectors.

Similarly, in his important recent book Where Is My Flying Car?, scientist J. Storrs Hall documents how "regulation clobbered the learning curve" for many important technologies in the U.S. over the last half century, especially nuclear, nanotech and advanced aviation. Society lost out on many important innovations due to endless bureaucratic delays, often thanks to opposition from special interests, anti-innovation activists, overzealous trial lawyers and a hostile media. Hall explains how this also sent a powerful signal to talented young people who might have been considering careers in those sectors. Why go into a field demonized by so many and where your creative abilities will be hamstrung by precautionary constraints?

Disincentivizing Talent

Hall argues that in those crucial sectors, this sort of mass talent migration “took our best and brightest away from improving our lives,” and he warns that those who still hope to make a career in such fields should be prepared to be “misconstrued and misrepresented by activists, demonized by ignorant journalists, and strangled by regulation.”

Is this what the future holds for AI and robotics? Hopefully not, and America continues to generate world-class talent on this front today in a diverse array of businesses and university programs. But if the waves of negativism about AI and robotics persist, we shouldn’t be surprised if it results in a talent shift away from building these technologies and toward fields that instead look to restrict them.

For example, Hall documents how, following the sudden shift in public attitudes surrounding nuclear power 50 years ago, "interests, and career prospects, in nuclear physics imploded" and "major discoveries stopped coming." Meanwhile, enrollment in law schools and other soft sciences typically critical of technological innovation enjoyed greater success. Nobody writes any sci-fi stories about what a disaster that development has been for innovation in the energy sphere, even though it is now abundantly clear how precautionary principle policies have undermined environmental goals and human welfare, with major geopolitical consequences for many nations.

If America loses the talent race on the AI front, it has ramifications for global competitive advantage going forward, especially as China races to catch up. In a world of global innovation arbitrage, talent and venture capital will flow to wherever it is treated most hospitably. Demonizing AI and robotics won’t help recruit or retain the next generation of talent and investors America needs to remain on top.

Flipping the Script

Some folks have had enough of the relentless pessimism surrounding technology and progress in modern science fiction and are trying to do something to reverse it. In a 2011 Wired essay decrying the dangers of "Innovation Starvation," the acclaimed novelist Neal Stephenson lamented that "the techno-optimism of the Golden Age of [science fiction] has given way to fiction written in a generally darker, more skeptical and ambiguous tone." While good science fiction "supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place," Stephenson said modern sci-fi was almost entirely focused on innovation's potential downsides.

To help reverse this trend, Stephenson worked with the Center for Science and the Imagination at Arizona State University to launch Project Hieroglyph, an effort to support authors willing to take a more optimistic view of the future. It yielded a 2014 book, Hieroglyph: Stories and Visions for a Better Future, which included almost 20 contributors. Later, in 2018, The Verge launched the "Better Worlds" project to support 10 writers of "stories that inspire hope" about innovation and the future. "Contemporary science fiction often feels fixated on a sort of pessimism that peers into the world of tomorrow and sees the apocalypse looming more often than not," said Verge culture editor Laura Hudson when announcing the project.

Unfortunately, these efforts have not captured much public attention, and that's hardly surprising. "Pessimism has always been big box office," says science writer Matt Ridley, primarily because it really is more entertaining. Even though many of the great sci-fi writers of the past, including Isaac Asimov, Arthur C. Clarke and Robert Heinlein, wrote positively about technology, they ultimately had more success selling stories with darker themes. It's just the nature of things more generally, from the best of Greek tragedy to Shakespeare and on down the line. There's a reason they're still rebooting Beowulf all these years later, after all.

So, There’s Star Trek and What Else?

While technological innovation will never enjoy the respect it deserves for being the driving force behind human progress, one can at least hope that more pop culture treatments of it might give it a fair shake. When I ask crowds of people to name a popular movie or television show that includes mostly positive depictions of technology, Star Trek is usually the first (and sometimes the only) thing people mention. It’s true that, on balance, technology was treated as a positive force in the original series, although “V’Ger”—a defunct space probe that attains a level of consciousness—was the prime antagonist in Star Trek: The Motion Picture. Later, Star Trek: The Next Generation gave us the always helpful android Data, but also created the lasting mental image of the Borg, a terrifying race of cyborgs hell-bent on assimilating everyone into their hive mind.

The Borg provided some of The Next Generation's most thrilling moments, but also created a new cultural meme, with tech critics often worrying about how today's humans are being assimilated into the hive mind of modern information systems. Philosopher Michael Sacasas even coined the term "the Borg Complex" to refer to a supposed tendency "exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile." After years of a friendly back-and-forth with Sacasas, I even felt compelled to wrap up my book Permissionless Innovation with a warning to other techno-optimists not to fall prey to this deterministic trap when defending technological change. Regardless of where one falls on that issue, the fact that Sacasas and I were having a serious philosophical discussion premised on a famous TV plotline serves as another indication of how much science fiction shapes public and intellectual debate over progress and innovation.

And, truth be told, some movies know how to excite the senses without resorting to dystopianism. Interstellar and The Martian are two recent examples that come to mind. Interestingly, space exploration technologies themselves usually get a fair shake in many sci-fi plots, often only to be undermined by onboard AIs or androids, as occurred not only in 2001 with the eerie HAL 9000, but also in Alien.

There are some positive (and sometimes humorous) depictions of robots, as in Robot & Frank, or touching ones, as in Bicentennial Man. Beyond The Jetsons, other cartoons like The Iron Giant and Big Hero 6 offer kinder visions of robots. KITT, a super-intelligent robot car, was Michael Knight's dependable ally in NBC's Knight Rider. And R2-D2 is always a friendly helper throughout the Star Wars franchise. But generally speaking, modern sci-fi continues to churn out far more negativism about AI and robotics.

What If We Took It All Seriously?

So long as the public and political imagination is spellbound by the machine machinations that dystopian sci-fi produces, we'll be at risk of being stuck with absurd debates that have no meaningful solution other than "Stop the clock!" or "Ban it all!" Are we really being assimilated into the Borg hive mind, or just buying time until a coming robopocalypse grinds us into dust (or dinner)?

If there were a kernel of truth to any of this, then we should adopt some of the extreme solutions Nick Bostrom of Oxford suggests in his writing on these issues. Those radical steps include worldwide surveillance and enforcement mechanisms for scientists and researchers developing algorithmic and robotic systems, as well as some sort of global censorship of information about these capabilities to ensure the technology is not used by bad actors.

To Bostrom's great credit, he is at least willing to tell us how far he'd go. Most of today's tech critics prefer to just spread a gospel of gloom and doom and suggest something must be done, without getting into the ugly details about what a global control regime for computational science and robotic engineering would look like. We should reject such extremist hypothesizing and understand that silly sci-fi plots, bombastic headlines and kooky academic writing should not be our baseline for serious discussions about the governance of artificial intelligence and robotics.

At the same time, we absolutely should consider what downsides any technology poses for individuals and society. And, yes, some precautions of a regulatory nature will be needed. But most of the problems envisioned by sci-fi writers are not what we should be concerned with. There are far more specific and nuanced problems that AI and robotics confront us with today, and they deserve more serious consideration and governance steps. How to program safer drones and driverless cars, improve the accuracy of algorithmic medical and financial technologies, and ensure better transparency for government uses of AI are all more mundane but very important issues that require reasoned discussion and balanced solutions today. Dystopian thinking gives us no roadmap to get there other than extreme solutions.

Imagining a Better Future

The way forward here is neither to indulge in apocalyptic fantasies nor pollyannaish techno-optimism, but to approach these technologies with reasoned risk analysis, sensible industry best practices, educational efforts and other agile governance steps. In a forthcoming book on flexible governance strategies for AI and robotics, I outline how these and other strategies are already being formulated to address real-world challenges in fields as diverse as driverless cars, drones, machine learning in medicine and much more.

A wide variety of ethical frameworks, offered by professional associations, academic groups and others, already exists to “bake in” best practices and align AI design with widely shared goals and values while also “keeping humans in the loop” at critical stages of the design process to ensure that they can continue to guide and occasionally realign those values and best practices as needed.

When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less. It is only through constant trial and error that humanity discovers better and safer ways of satisfying important wants and needs.

These are complicated and nuanced issues that demand tailored and iterative governance responses. But this should not be done using inflexible, innovation-limiting mandates. Concerns about AI dangers deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.

So, enjoy your next dopamine hit of sci-fi hysteria—I know I will, too. But don’t let that be your guide to the world that awaits us. Even if most sci-fi writers can’t imagine a better future, the rest of us can.

Should All Kids Under 18 Be Banned from Social Media?
https://techliberation.com/2022/04/18/should-all-kids-under-18-be-banned-from-social-media/
Mon, 18 Apr 2022

This weekend, The Wall Street Journal ran my short letter to the editor entitled, “We Can Protect Children and Keep the Internet Free.” My letter was a response to columnist Peggy Noonan’s April 9 op-ed, “Can Anyone Tame Big Tech?,” in which she proposed banning everyone under 18 from all social-media sites. She specifically singled out TikTok, YouTube, and Instagram and argued: “You’re not allowed to drink at 14 or drive at 12; you can’t vote at 15. Isn’t there a public interest here?”

I briefly explained why Noonan’s proposal is neither practical nor sensible, noting how it:

would turn every kid into an instant criminal for seeking access to information and culture on the dominant medium of their generation. I wonder how she would have felt about adults proposing to ban all kids from listening to TV or radio during her youth. Let’s work to empower parents to help them guide their children’s digital experiences. Better online-safety and media-literacy efforts can prepare kids for a hyperconnected future. We can find workable solutions that wouldn’t usher in unprecedented government control of speech.

Let me elaborate just a bit, because this was the focus of much of my writing a decade ago, including my book, Parental Controls & Online Child Protection: A Survey of Tools & Methods, which spanned several editions. Online child safety is a matter I take seriously, and the concerns that Noonan raised in her op-ed have been heard repeatedly since the earliest days of the Internet. Regulatory efforts followed almost immediately. They focused on restricting underage access to objectionable online content (as well as video games), but they were quickly challenged and struck down as unconstitutionally overbroad restrictions on free speech that violated the First Amendment.

But practically speaking, most of these efforts were never going to work anyway. There was almost no way to bottle up all the content flowing in the modern information ecosystem without highly repressive regulation, and it was going to be nearly impossible to keep kids off the Internet altogether when it was the dominant communications and entertainment medium of their generation. The first instinct of every moral panic wave–from the waltz to comic books to rock or rap music to video games–has often been to take the easy way out by proposing sweeping bans on all access by kids to the content or platforms of their generation. It never works.

Nor should it. Such sweeping bans would deny kids a huge amount of entirely beneficial speech, content, and communications, making them highly counterproductive. And, again, such efforts usually were not practically enforceable, because kids are often better at the cat-and-mouse game than adults give them credit for. Moreover, imposing age limitations on speech or content is far more difficult than imposing age-related bans on dangerous tangible products like tobacco.

Acknowledging these realities, most sensible people quickly move on from extreme proposals like flat bans on all kids using the popular media platforms and systems of the day. Over the past half century in the U.S., this has led to the flowering of a more decentralized governance approach to kids and media that I have referred to as the “3E approach”: empowerment (of parents), education (of youth), and enforcement (of existing laws). The 3E approach encompasses a variety of mechanisms, including self-regulatory codes, private content-rating systems, a wide variety of parental control technologies, and much more.

Over the past two decades, many multistakeholder initiatives and blue-ribbon commissions were created to address online safety issues in a holistic fashion. I summarized their conclusions in my 2009 report, “Five Online Safety Task Forces Agree: Education, Empowerment & Self-Regulation Are the Answer.” The crucial takeaway from all these task forces and commissions is that no silver-bullet solutions exist to hard problems. Child safety demands a vigilant but adaptive approach, rooted in a variety of best practices, educational approaches, and technological empowerment solutions to address various safety concerns. Digital literacy is particularly crucial to building wiser, more resilient kids and adults, who can work together to find constructive approaches to hard problems.

Importantly, our task here is never done. This is an ongoing and evolving process. Issues like underage access to pornography or violent content have been with us for a very long time and will never be completely “solved.” We must constantly work to improve existing online safety mechanisms while also devising new solutions for our rapidly evolving information ecosystem. Nothing should be off the table except the one solution that Noonan suggested in her essay. Outright bans on kids using social media or any other new media platform (VR will be next) are an unworkable and illogical response that we should dismiss fairly quickly. No matter how well-intentioned such proposals may be, moral panic-induced prohibitions on kids and media ultimately are not going to help them learn to live better, safer, and more enriching lives in the new media ecosystems of today or the future. We can do better.


Remembering the ‘Japan Inc.’ Industrial Policy Scare of the 1980s & 1990s (Tue, 29 Jun 2021) https://techliberation.com/2021/06/29/remembering-the-japan-inc-industrial-policy-scare-of-the-1980s-1990s/

Discourse magazine has just published my latest essay, “‘Japan Inc.’ and Other Tales of Industrial Policy Apocalypse.” It is a short history of the hysteria surrounding the growth of Japan in the 1980s and early 1990s and its various industrial policy efforts. I begin by noting that “American pundits and policymakers are today raising a litany of complaints about Chinese industrial policies, trade practices, industrial espionage and military expansion. Some of these concerns have merit. In each case, however, it is easy to find identical fears that were raised about Japan a generation ago.” I then walk through many of the leading books, op-eds, movies, and other artifacts of that past era to show how that was the case.

“Hysteria” is not too strong a word to use in this case. Many pundits and politicians were panicking about Japan’s economic rise and, more specifically, about the way Japan’s Ministry of International Trade and Industry (MITI) was formulating industrial policy schemes for sectors in which it hoped to make advances. This resulted in veritable “MITI mania” here in America. “U.S. officials and market analysts came to view MITI with a combination of reverence and revulsion, believing that it had concocted an industrial policy cocktail that was fueling Japan’s success at the expense of American companies and interests,” I note. Countless books and essays were published with breathless titles and predictions; I go through dozens of them in my essay. Meanwhile, the debate in policy circles and on Capitol Hill even took on an ugly racial tinge, with some lawmakers calling the Japanese “leeches” and suggesting the U.S. should have dropped more atomic bombs on Japan during World War II. At one point, in 1987, several members of Congress gathered on the lawn of the U.S. Capitol to smash Japanese electronics with sledgehammers.

All this hysteria about Japan and MITI bore little resemblance to reality. In fact, as I note in the essay, the MITI industrial planning model fell apart after it made a host of horribly bad bets and the stock market tanked in the late 1980s. Corruption also became a huge problem within many state-led efforts. A 2000 report by the Policy Research Institute within Japan’s Ministry of Finance concluded that “the Japanese model was not the source of Japanese competitiveness but the cause of our failure.” MITI was renamed the Ministry of Economy, Trade and Industry at about the same time, and its mission shifted more toward market-oriented reforms.

Industrial policy came to be viewed as a bit of a joke in America after that, but now it is back with a vengeance, thanks largely to the rise of Chinese economic power. Thus, because “we hear echoes from the Japan Inc. era debates in today’s policy discussions about China and industrial policy planning,” I end my essay with some lessons from the ‘Japan Inc.’ era for today’s industrial policy debates:

This similarity demonstrates the first lesson we can learn from the previous era: It is important to separate serious geopolitical and economic analysis from breathless fear-mongering and borderline xenophobia. The former has a serious place in policy discussions; the latter needs to be called out and shunned. After all, there are many legitimate worries about rising Chinese power, particularly when it involves Chinese Communist Party efforts to squash human rights domestically or to engage in industrial espionage, trade mercantilism and military adventurism abroad. Separating serious matters from trivial or imaginary ones is crucial, especially to help keep peace between nations. Avoiding hysteria is especially pertinent today with a wave of anti-Asian sentiment and attacks on the rise in the U.S.

A second lesson from the Japan Inc. experience relates to today’s renewed interest in industrial policy: Forecasting the future of nations and economies—and trying to plan for it—is a tricky business. A huge range of variables affects global competitiveness and technological advancement. A nonexhaustive list of some of the most important factors would include legal and political stability, physical and intellectual property rights, tax burdens, competition policy, trade and investment laws, monetary policy, research and development efforts, and even demographic factors and access to certain natural resources. Understanding how these and other factors all work together is an inexact science. When targeted industrial policy mechanisms are added to the mix, it becomes even harder to untangle which variables are making the most difference.

Both in the past and today, a less visible group of scholars has suggested that an embrace of entrepreneurialism and free trade was the fundamental factor driving Japanese economic expansion in the past and China’s amazing growth today. Openness to markets, they say, drove the enormous economic expansions—which also happened during times of much-needed catch-up modernization in both countries. But these perspectives have usually been shouted out of the room by louder voices, who either bombastically blast or praise industrial policy mechanisms as the prime mover in the economic rejuvenation of both nations. We need to tamp down on the magical thinking that governments can easily achieve technological innovation and economic growth by simply spinning a few industrial policy gauges. A few big bets may pay off, but that doesn’t justify governments engaging in casino economics regularly. History more often shows that grandiose industrial policy schemes simply result in cost overruns, cronyism and even corruption.

I also conclude by noting that:

Perhaps the most ironic indictment of industrial policy punditry lies in the way all the earlier books and essays about Japanese planning not only failed to forecast the many flops associated with it, but also did not foresee China as a potential future economic juggernaut. Korea, Singapore and Taiwan were mentioned as potential Asian challengers, but no one gave China much consideration. What might that tell us about the ability of experts to predict the future course of countries and economies? It is a reminder of the wisdom of another great Yogi Berra quote: “It’s tough to make predictions, especially about the future.”

You can read the entire piece, as well as several others listed below, over at Discourse.


Recent writing on industrial policy:
The Conservative Crack-Up Over the Fairness Doctrine & FCC Regulation (Sat, 08 Aug 2020) https://techliberation.com/2020/08/08/the-conservative-crack-up-over-the-fairness-doctrine-fcc-regulation/

There is a war going on in the conservative movement over free speech issues, and FCC Commissioner Michael O’Rielly just became a casualty of it. Neil Chilson and I just posted a new essay about this over on the Federalist Society blog. As we note there:

Plenty of people claim to favor freedom of expression, but increasingly the First Amendment has more fair-weather friends than die-hard defenders. Michael O’Rielly, a Commissioner at the Federal Communications Commission (FCC), found that out the hard way this week. Last week, O’Rielly delivered an important speech before the Media Institute highlighting a variety of problematic myths about the First Amendment, as well as “a particularly ominous development in this space.” In a previous political era, O’Rielly’s remarks would have been mainstream conservative fare. But his well-worded warnings are timely with many Democrats and Republicans – including some in the White House – looking to resurrect analog-era speech mandates and let Big Government reassert control over speech decisions in the United States.

Shortly after delivering his remarks, the White House yanked O’Rielly’s nomination to be reappointed to the agency. It was a shocking development that was likely motivated by growing animosities between Republicans on the question of how much control the federal government–and the FCC in particular–should exercise over speech platforms, including platforms that the FCC has no authority to regulate.

For the 30 years that I have been covering media and technology policy, I’ve heard conservatives rail against the Fairness Doctrine, net neutrality, and arbitrary Big Government, only to see many of them now reverse course and become the biggest defenders of these things as they pertain to speech controls and FCC regulation. It will certainly be interesting to see what a potential future Biden administration does with the various new regulations that some in the GOP are seeking to impose.

But all hope is not lost. There are still brave voices in Republican and conservative circles who continue to stand up for the First Amendment, freedom of speech, and limits on federal regulatory meddling with speech platforms and outcomes. Commissioner O’Rielly basically lost his job because he acted as the equivalent of an intellectual whistleblower; he called out the ideological rot seen in recent statements and actions by the White House, Senator Josh Hawley, and many other Republicans.

There is nothing remotely “conservative” about calls for reinvigorating the Fairness Doctrine and FCC speech controls. Such repressive regulation betrays the First Amendment and will ultimately backfire, coming back to haunt conservatives down the road.

Read my new essay with Neil for more details. And down below I have listed all my recent writing on this topic.

Additional Reading:

15 Years of the Tech Liberation Front: The Greatest Hits (Thu, 15 Aug 2019) https://techliberation.com/2019/08/15/15-years-of-the-tech-liberation-front-the-greatest-hits/

The Technology Liberation Front just marked its 15th year in existence. That’s a long time in the blogosphere. (I’ve only been writing at TLF since 2012 so I’m still the new guy.)

Everything from Bitcoin to net neutrality to long-form pieces about technology and society was featured and debated here years before these topics hit the political mainstream.

Thank you to our contributors and our regular readers. Here are the most-read tech policy posts from TLF in the past 15 years (I’ve omitted some popular but non-tech policy posts).

No. 15: Bitcoin is going mainstream. Here is why cypherpunks shouldn’t worry. by Jerry Brito, October 2013

Today is a bit of a banner day for Bitcoin. It was five years ago today that Bitcoin was first described in a paper by Satoshi Nakamoto. And today the New York Times has finally run a profile of the cryptocurrency in its “paper of record” pages. In addition, TIME’s cover story this week is about the “deep web” and how Tor and Bitcoin facilitate it.

The fact is that Bitcoin is inching its way into the mainstream.

No. 14: Is fiber to the home (FTTH) the network of the future, or are there competing technologies? by Roslyn Layton, August 2013

There is no doubt that FTTH is a cool technology, but the love of a particular technology should not blind one to look at the economics.  After some brief background, this blog post will investigate fiber from three perspectives (1) the bandwidth requirements of web applications (2) cost of deployment and (3) substitutes and alternatives. Finally it discusses the notion of fiber as future proof.

No. 13: So You Want to Be an Internet Policy Analyst? by Adam Thierer, December 2012

Each year I am contacted by dozens of people who are looking to break into the field of information technology policy as a think tank analyst, a research fellow at an academic institution, or even as an activist. Some of the people who contact me I already know; most of them I don’t. Some are free-marketeers, but a surprising number of them are independent analysts or even activist-minded Lefties. Some of them are students; others are current professionals looking to change fields (usually because they are stuck in boring job that doesn’t let them channel their intellectual energies in a positive way). Some are lawyers; others are economists, and a growing number are computer science or engineering grads. In sum, it’s a crazy assortment of inquiries I get from people, unified only by their shared desire to move into this exciting field of public policy.

. . . Unfortunately, there’s only so much time in the day and I am sometimes not able to get back to all of them. I always feel bad about that, so, this essay is an effort to gather my thoughts and advice and put it all one place . . . .

No. 12: Violent Video Games & Youth Violence: What Does Real-World Evidence Suggest? by Adam Thierer, February 2010

So, how can we determine whether watching depictions of violence will turn us all into killing machines, rapists, robbers, or just plain ol’ desensitized thugs? Well, how about looking at the real world! Whatever lab experiments might suggest, the evidence of a link between depictions of violence in media and the real-world equivalent just does not show up in the data. The FBI produces ongoing Crime in the United States reports that document violent crimes trends. Here’s what the data tells us about overall violent crime, forcible rape, and juvenile violent crime rates over the past two decades: They have all fallen. Perhaps most impressively, the juvenile crime rate has fallen an astonishing 36% since 1995 (and the juvenile murder rate has plummeted by 62%).

No. 11: Wedding Photography and Copyright Release by Tim Lee, September 2008

I’m getting married next Spring, and I’m currently negotiating the contract with our photographer. The photography business is weird because even though customers typically pay hundreds, if not thousands, of dollars up front to have photos taken at their weddings, the copyright in the photographs is typically retained by the photographer, and customers have to go hat in hand to the photographer and pay still more money for the privilege of getting copies of their photographs.

This seems absurd to us . . . .

No. 10: Why would anyone use Bitcoin when PayPal or Visa work perfectly well? by Jerry Brito, December 2013

A common question among smart Bitcoin skeptics is, “Why would one use Bitcoin when you can use dollars or euros, which are more common and more widely accepted?” It’s a fair question, and one I’ve tried to answer by pointing out that if Bitcoin were just a currency (except new and untested), then yes, there would be little reason why one should prefer it to dollars. The fact, however, is that Bitcoin is more than money, as I recently explained in Reason. Bitcoin is better thought of as a payments system, or as a distributed ledger, that (for technical reasons) happens to use a new currency called the bitcoin as the unit of account. As Tim Lee has pointed out, Bitcoin is therefore a platform for innovation, and it is this potential that makes it so valuable.

No. 9: The Hidden Benefactor: How Advertising Informs, Educates & Benefits Consumers by Adam Thierer & Berin Szoka, February 2010

Advertising is increasingly under attack in Washington. . . . This regulatory tsunami could not come at a worse time, of course, since an attack on advertising is tantamount to an attack on media itself, and media is at a critical point of technological change. As we have pointed out repeatedly, the vast majority of media and content in this country is supported by commercial advertising in one way or another-particularly in the era of “free” content and services.

No. 8: Reverse Engineering and Innovation: Some Examples by Tim Lee, June 2006

Reverse engineering the CSS encryption scheme, by itself, isn’t an especially innovative activity. However, what I think Prof. Picker is missing is how important such reverse engineering can be as a pre-condition for subsequent innovation. To illustrate the point, I’d like to offer three examples of companies or open source projects that have forcibly opened a company’s closed architecture, and trace how these have enabled subsequent innovation . . . .

No. 7: Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society by Adam Thierer, January 2010

The cycle goes something like this. A new technology appears. Those who fear the sweeping changes brought about by this technology see a sky that is about to fall. These “techno-pessimists” predict the death of the old order (which, ironically, is often a previous generation’s hotly-debated technology that others wanted slowed or stopped). Embracing this new technology, they fear, will result in the overthrow of traditions, beliefs, values, institutions, business models, and much else they hold sacred.

The pollyannas, by contrast, look out at the unfolding landscape and see mostly rainbows in the air. Theirs is a rose-colored world in which the technological revolution du jour is seen as improving the general lot of mankind and bringing about a better order. If something has to give, then the old ways be damned! For such “techno-optimists,” progress means some norms and institutions must adapt—perhaps even disappear—for society to continue its march forward.

No. 6: Copyright Duration and the Mickey Mouse Curve by Tom Bell, August 2009

Given the rough-and-tumble of real world lawmaking, does the rhetoric of “delicate balancing” merit any place in copyright jurisprudence? The Copyright Act does reflect compromises struck between the various parties that lobby congress and the administration for changes to federal law. A truce among special interests does not and cannot delicately balance all the interests affected by copyright law, however. Not even poetry can license the metaphor, which aggravates copyright’s public choice affliction by endowing the legislative process with more legitimacy than it deserves. To claim that copyright policy strikes a “delicate balance” commits not only legal fiction; it aids and abets a statutory tragedy.

No. 5: Cyber-Libertarianism: The Case for Real Internet Freedom by Adam Thierer & Berin Szoka, August 2009

Generally speaking, the cyber-libertarian’s motto is “Live & Let Live” and “Hands Off the Internet!” The cyber-libertarian aims to minimize the scope of state coercion in solving social and economic problems and looks instead to voluntary solutions and mutual consent-based arrangements.

Cyber-libertarians believe true “Internet freedom” is freedom from state action; not freedom for the State to reorder our affairs to supposedly make certain people or groups better off or to improve some amorphous “public interest”—an all-too-convenient facade behind which unaccountable elites can impose their will on the rest of us.

No. 4: Here’s why the Obama FCC Internet regulations don’t protect net neutrality by Brent Skorup, July 2017

It’s becoming clearer why, for six years out of eight, Obama’s appointed FCC chairmen resisted regulating the Internet with Title II of the 1934 Communications Act. Chairman Wheeler famously did not want to go that legal route. It was only after President Obama and the White House called on the FCC in late 2014 to use Title II that Chairman Wheeler relented. If anything, the hastily-drafted 2015 Open Internet rules provide a new incentive to ISPs to curate the Internet in ways they didn’t want to before.

No. 3: 10 Years Ago Today… (Thinking About Technological Progress) by Adam Thierer, February 2009

As I am getting ready to watch the Super Bowl tonight on my amazing 100-inch screen via a Sanyo high-def projector that only cost me $1,600 bucks on eBay, I started thinking back about how much things have evolved (technologically-speaking) over just the past decade. I thought to myself, what sort of technology did I have at my disposal exactly 10 years ago today, on February 1st, 1999? Here’s the miserable snapshot I came up with . . . .

No. 2: Regulatory Capture: What the Experts Have Found by Adam Thierer, December 2010

While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity. Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism. . . . Yet, countless studies have shown that regulatory capture has been at work in various arenas: transportation and telecommunications; energy and environmental policy; farming and financial services; and many others.

No. 1: Defining “Technology” by Adam Thierer, April 2014

I spend a lot of time reading books and essays about technology; more specifically, books and essays about technology history and criticism. Yet, I am often struck by how few of the authors of these works even bother defining what they mean by “technology.” . . . Anyway, for what it’s worth, I figured I would create this post to list some of the more interesting definitions of “technology” that I have uncovered in my own research.

Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet (Thu, 01 Aug 2019) https://techliberation.com/2019/08/01/sen-hawleys-radical-paternalistic-plan-to-remake-the-internet/

Sen. Josh Hawley (R-MO) recently delivered remarks at the National Conservatism Conference and a Young America’s Foundation conference in which he railed against political and academic elites, arguing that, “the old era is ending and the old ways will not do.” “It’s time that we stood up to big government, to the people in government who think they know better,” Hawley noted at the YAF event. “[W]e are for free competition… we are for the free market.”

That’s all nice-sounding rhetoric, but it sure doesn’t match up with Hawley’s recent essays and policy proposals, which are straight out of the old era’s elitist and highly paternalistic Washington-Knows-Best playbook. Specifically, Hawley has called for a top-down, technocratic regulatory regime for the Internet and the digital economy more generally. He has repeatedly claimed that digital technology companies have gotten a sweetheart deal from government and that they have censored conservative voices. That’s utter nonsense, but those arguments have driven his increasingly fanatical rhetoric and command-and-control policy proposals. If he succeeds in his plan to empower unelected bureaucrats inside the Beltway to reshape the Internet, it will destroy one of the greatest American success stories in recent memory. It’s hard to understand how that could be labeled “conservative” in any sense of the word.

I’ve been tracking Sen. Hawley’s increasingly radical plans for the digital economy in a series of essays, including:

In these articles, I have documented how Sen. Hawley has been whipping up a panic about digital technology companies and social media platforms to soften the ground for massive intervention by DC elites. Consider his hotly worded USA Today op-ed from May, in which he argued that “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” Sen. Hawley refers to sites like Facebook, Instagram, and Twitter as “parasites” and blames them for a litany of social problems (including an unproven link to increased suicide). He has even suggested that “we’d be better off if Facebook disappeared” and seems to hope the same for other sites.

More insultingly, he has argued that the entire digital economy was basically one giant mistake. He says that America’s recent focus on growing the Internet and information technology sectors has “encouraged a generation of our brightest engineers to enter a field of little productive value,” which he regards as “an opportunity missed for the nation.” “What marvels might these bright minds have produced,” Hawley asks, “had they been oriented toward the common good?”

Again, this isn’t the sort of rhetoric that conservatives are usually known for. This is elitist, paternalistic tripe that we usually hear from market-hating neo-Marxists. It takes a lot of hubris for Sen. Hawley to suggest that he knows best which professions or sectors are in “the common good.” As I responded in one of my essays:

Had some benevolent philosopher kings in Washington stopped the digital economy from developing over the past quarter century, would all those tech workers really have chosen more noble-minded and worthwhile professions? Could he or others in Congress really have had the foresight to steer us in a better direction?

Why would Sen. Hawley think DC elites could do a better job centrally planning the economy? He doesn’t really tell us, instead preferring to fall back on conspiratorial rhetoric about evil “Big Tech” companies “censoring” conservative voices. That’s the same card he played when he joined President Trump at the White House for the surreal, rambling “Social Media Summit” last month. Trump used the same approach that Sen. Hawley and Sen. Ted Cruz (R-TX) have been using during recent Senate Judiciary Committee hearings: browbeat witnesses and make wild claims about the whole digital world being out to muzzle conservative voices. As Andrea O’Sullivan and I noted in our essay about the Social Media Summit:

The President and other conservatives are tapping another approach: indirect censorship through both subtle and direct threats. This is an old playbook that goes by many different names: “jawboning,” “administrative arm-twisting,” “agency threats,” and “regulation by raised eyebrow.” These were the names given to broadcast-era efforts to pressure old radio and TV outlets to bring their programming choices in line with the desires of politicians and bureaucrats.

This is an old DC playbook that elites have used for decades to “work the refs” and try to extract promises from various parties under threat of more far-reaching regulation if they fail to comply with the demands of politicians. Again, there’s nothing remotely “conservative” about it.

Brushing aside such concerns, Sen. Hawley has started sketching out what a comprehensive regulatory regime for the Internet and social media might look like. He does so in two new bills, the “Ending Support for Internet Censorship Act” (co-sponsored by Sen. Cruz) and the “Social Media Addiction Reduction Technology (SMART) Act.” These two measures, if implemented, would radically remake the digital economy and lead to a remarkably intrusive regulatory regime for online speech and commerce.

The ridiculously named “Ending Support for Internet Censorship Act” would actually encourage the exact opposite result than its title suggests. The proposal would mandate that regulators at the Federal Trade Commission evaluate whether platforms have engaged in “politically biased moderation,” which is defined as moderation practices supposedly “designed to negatively affect” or those that “disproportionately [restrict] or promote access to … a political party, political candidate, or political viewpoint.” Social media providers would need to petition the FTC for “immunity certifications” and then submit to regular audits to ensure they are moderating content in a government-approved manner. If they didn’t, they would lose their platform liability protections, which could effectively run them out of business.

This is permission slip-based regulation and it makes the old Federal Communications Commission licensing regime for broadcast radio and television look like child’s play by comparison. Hawley’s “Mother, May I?” licensing scheme for the Net would have unelected FTC bureaucrats make speech decisions for the entire Internet. It’s a massive First Amendment violation, and it would almost certainly face constitutional challenge if implemented.

What makes this all the more shocking, as I noted in response, is that this measure combines core elements of the old Fairness Doctrine with the “net neutrality” mandates that conservatives have traditionally decried. The bill would also empower inside-the-Beltway lawyers, lobbyists, and consultants, who would be needed to navigate the maze of red tape this measure would give rise to. Worst of all, the measure is a massive gift to the trial lawyers Republicans love to hate, because Hawley’s new regulatory regime would empower them to file an endless string of frivolous suits aimed at simply shaking down companies through early settlements. Again, how is this “conservative”?

Then there’s Hawley’s new “SMART Act,” which as Andrea O’Sullivan and I argue in our latest essay is really quite stupid. The highly technocratic measure lists a variety of business practices that would be automatically verboten. As Andrea and I summarize:

On the chopping block are infinite scrolling, video autoplay, and “gamification” features like offering badges or streaks for accomplishing certain feats. The bill would also require that social media companies build default time limits and pop-up notifications telling users how long they’ve been on a platform within six months of the bill passing. Weirdly, the bill specifies a time limit of 30 minutes on all social media platforms on all devices per day, after which point they will be locked out. The user would be able to raise that limit through platform settings, but it would reset to 30 minutes at the beginning of each month.

Who would have ever thought we would now be living in a world where conservatives are calling for paternalistic, Washington-knows-best nannyism that lets agency bureaucrats forcibly shut down your social media access each day after just 30 minutes of use? Hell, why stop there? Perhaps Sen. Hawley could next impose daily limits on how many Netflix shows we stream, how many podcasts we listen to, or how much time we spend playing video games. After all, he clearly thinks he knows what is in our own best interest.

No matter how much Sen. Hawley rails against elites and big government, what he has been saying and proposing represents elitism and regulatory paternalism of the very highest order. He may say that “the old era is ending and the old ways will not do” in his speeches, but through his actions he has wholeheartedly embraced the old order. And while he can mouth lines about how “it’s time that we stood up to big government, to the people in government who think they know better,” and while he might claim that he is “for free competition [and] the free market,” in reality, Sen. Hawley has become the most aggressive Republican booster of Big Government and managed markets that I have seen in my 30 years covering technology policy.

Hopefully, the real conservatives left out there will make a stand against Sen. Hawley’s abominable corruption of their movement and ideals.

]]>
https://techliberation.com/2019/08/01/sen-hawleys-radical-paternalistic-plan-to-remake-the-internet/feed/ 3 76530
An Epic Moral Panic Over Social Media https://techliberation.com/2019/05/30/an-epic-moral-panic-over-social-media/ https://techliberation.com/2019/05/30/an-epic-moral-panic-over-social-media/#comments Thu, 30 May 2019 17:36:14 +0000 https://techliberation.com/?p=76493

[This essay originally appeared on the AIER blog on May 28, 2019. The USA TODAY also ran a shorter version of this essay as a letter to the editor on June 2, 2019.]

In a hotly-worded USA Today op-ed last week, Senator Josh Hawley (R-Missouri) railed against social media sites Facebook, Instagram, and Twitter. He argued that “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” Sen. Hawley refers to these sites as “parasites” and blames them for a litany of social problems (including an unproven link to increased suicide), leading him to declare that “we’d be better off if Facebook disappeared.”

As far as moral panics go, Sen. Hawley’s will go down as one for the ages. Politicians have always castigated new technologies, media platforms, and content for supposedly corrupting the youth of their generation. But Sen. Hawley’s inflammatory rhetoric and proposals are something we haven’t seen in quite some time.

He sounds like those fire-breathing politicians and pundits of the past century who vociferously protested everything from comic books to cable television, the waltz to the Walkman, and rock-and-roll to rap music. In order to save the youth of America, many past critics said, we must destroy the media or media platforms they are supposedly addicted to. That is exactly what Sen. Hawley would have us do to today’s leading media platforms because, in his opinion, they “do our country more harm than good.”

We have to hope that Sen. Hawley is no more successful than past critics and politicians who wanted to take these choices away from the public. Paternalistic politicians should not be dictating content choices for the rest of us or destroying technologies and platforms that millions of people benefit from.

Addiction Panics: We’ve Been Here Before

Ironically, Sen. Hawley isn’t even right about what the youth of America are apparently obsessed with. Most kids view Facebook and Twitter as places where old people hang out. My teenage kids laugh when I ask them about those sites. Pew Research polling finds that many younger users are increasingly deleting Facebook (if they used it at all) or flocking to other platforms, such as Snapchat or YouTube.

But shouldn’t we be concerned with kids overusing social media more generally? Yes, of course we should—but that’s no reason to call for their outright elimination, as Sen. Hawley recommends. Such rhetoric is particularly concerning at a time when critics are proposing a “break up” of tech companies. Sen. Hawley sits on the U.S. Senate Judiciary Committee’s Subcommittee on Antitrust, Competition Policy and Consumer Rights. It is likely he and others will employ these arguments to fan the flames of regulatory intervention or antitrust action against at least Facebook.

Forcing social media sites to “disappear” or be broken up is one of the worst ways to deal with these concerns. It is always wise to mentor our youth and teach them how to achieve a balanced media diet. Many youths—and many adults—are probably overusing certain technologies (smartphones, in particular) and over-consuming some types of media. For those truly suffering from addiction, it is worth considering targeted strategies to address that problem. However, that is not what antitrust law is meant to address.

Moreover, concerns about addiction and distraction have popped up repeatedly during past moral panics and we should take such claims with a big grain of salt. Sociologist Frank Furedi has documented how, “inattention has served as a sublimated focus for apprehensions about moral authority” going back to at least the early 1700s. With each new form of media or means of communication, the older generation taps into the same “kids-these-days!” fears about how the younger generation has apparently lost the ability to concentrate or reason effectively.

For example, in the past century, critics said the same thing about radio and television broadcasting, comparing them to tobacco in terms of addiction and suggesting that media companies were “manipulating” us into listening or watching. Rock-and-roll and rap music got the same treatment, and similar panics about video games are still with us today.

Strangely, many elites, politicians, and parents forget that they, too, were once kids and that their generation was probably also considered hopelessly lost in the “vast wasteland” of whatever the popular technology or content of the day was. The Pessimists Archive podcast has documented dozens of examples of this recurring phenomenon. Each generation makes it through the panic du jour, only to turn around and start lambasting newer media or technologies that they worry might be rotting their kids to the core. While these panics come and go, the real danger is that they sometimes result in concrete policy actions that censor content or eliminate choices that the public enjoys. Such regulatory actions can also discourage the emergence of new choices.

Missed Opportunity, or Marvelous Achievement?

Sen. Hawley makes another audacious assertion in his essay when he suggests that social media has not provided any real benefit to American workers or consumers. He says the rise of the Digital Economy has “encouraged a generation of our brightest engineers to enter a field of little productive value,” which he regards as “an opportunity missed for the nation.”

This is an astonishing statement, made more troubling by Hawley’s claim that all these digital innovators could have done far more good by choosing other professions. “What marvels might these bright minds have produced,” Hawley asks, “had they been oriented toward the common good?”

Why is it that Sen. Hawley gets to decide which professions are in “the common good”? This logic is insulting to all those who make a living in these sectors, but there is a deeper hubris in Sen. Hawley’s argument that social media does not serve “the common good.” Had some benevolent philosopher kings in Washington stopped the digital economy from developing over the past quarter century, would all those tech workers really have chosen more noble-minded and worthwhile professions? Could he or others in Congress really have had the foresight to steer us in a better direction?

In reality, U.S. tech companies produce high-quality jobs and affordable, collaborative communications platforms that are popular across the globe. In response to Sen. Hawley’s screed, the Internet Association, which represents America’s leading digital technology companies, noted that, in Sen. Hawley’s home state of Missouri alone, the Internet supports 63,000 jobs at 3,400 companies and contributed $17 billion in GDP to the state’s economy. Presumably, Sen. Hawley would not want to see those benefits “disappear” along with the social media sites that helped give rise to them.

But the Internet and social media have an equally profound impact on the entire U.S. economy, adding over 9,000 jobs and nearly 570 businesses to each metropolitan statistical area. The Digital Economy is a great American success story that is the envy of the world, not something to be lamented and disparaged as Sen. Hawley has.

For someone who believes that Facebook is a “drug” and a “parasite,” it is curious how active Sen. Hawley is on Facebook, as well as on Twitter. If he really believes that “we’d be better off if Facebook disappeared,” then he should lead by example and get off the sites. But that is a decision he will have to make for himself. He should not, however, make it for the rest of us.

]]>
https://techliberation.com/2019/05/30/an-epic-moral-panic-over-social-media/feed/ 2 76493
The Kids Are Going To Be Alright https://techliberation.com/2019/01/17/the-kids-are-going-to-be-alright/ https://techliberation.com/2019/01/17/the-kids-are-going-to-be-alright/#respond Thu, 17 Jan 2019 19:24:02 +0000 https://techliberation.com/?p=76449

Catchy headlines like “Heavy Social Media Use Linked With Mental Health Issues In Teens” and “Have Smartphones Destroyed a Generation?” advance a common trope of generational decline. But a new paper in Nature uses a new and rigorous analytical method to understand the relationship between adolescent well-being and digital technology, finding a “negative but small [link], explaining at most 0.4% of the variation in well-being.”

What really sets the new paper from Amy Orben and Andy Przybylski apart is that it aims to capture a more complete picture of how variables interact. The problem that Orben and Przybylski tackle is an endemic one in social science. Sussing out the causal relationship between two variables will always be confounded by other related variables in the dataset. So how do you choose the right combination of variables to test?

An analytical approach first developed by Simonsohn, Simmons and Nelson outlines a method for solving this problem. As Orben and Przybylski wrote, “Instead of reporting a handful of analyses in their paper, [researchers] report all results of all theoretically defensible analyses.” The result is a range of possible coefficients, which can then be plotted along a curve, a specification curve. Below is the specification curve from one of the datasets that Orben and Przybylski analyzed.
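The mechanics of a specification curve can be sketched in a few lines: estimate the outcome on the variable of interest under every defensible combination of control variables, then sort the resulting coefficients. The toy dataset, variable names, and effect sizes below are invented for illustration; they are not Orben and Przybylski’s data.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Invented toy data: well-being driven mostly by controls, with a
# small true effect (0.05) of technology use.
tech_use = rng.normal(size=n)
sleep = rng.normal(size=n)
exercise = rng.normal(size=n)
wellbeing = 0.05 * tech_use + 0.4 * sleep + 0.2 * exercise + rng.normal(size=n)

controls = {"sleep": sleep, "exercise": exercise}

def tech_coefficient(control_names):
    """OLS coefficient on tech_use, controlling for the named variables."""
    cols = [np.ones(n), tech_use] + [controls[c] for c in control_names]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
    return beta[1]

# Fit every theoretically defensible specification: each subset of controls.
coefs = [
    tech_coefficient(subset)
    for k in range(len(controls) + 1)
    for subset in itertools.combinations(controls, k)
]

# Sorting the estimates yields the specification curve.
curve = sorted(coefs)
print(f"{len(curve)} specifications; coefficient on tech_use spans "
      f"{curve[0]:.3f} to {curve[-1]:.3f}")
```

With only two candidate controls there are four specifications; real analyses like Orben and Przybylski’s run thousands, which is why plotting the sorted coefficients as a curve is so informative.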

Amy Orben and Andrew Przybylski explain why this method is important to policy makers who are interested in the tech use question:

Although statistical significance is often used as an indicator that findings are practically significant, the paper moves beyond this surrogate to put its findings in a real-world context.  In one dataset, for example, the negative effect of wearing glasses on adolescent well-being is significantly higher than that of social media use. Yet policymakers are currently not contemplating pumping billions into interventions that aim to decrease the use of glasses.

Truthfully, this is the first time I have encountered specification curve analysis, and I am quite impressed with its power. For more details, check out the OSF page, the writeup in Nature, and the full paper. Also, Michael Scharkow details how to apply SCA to variance and includes some R code.

]]>
https://techliberation.com/2019/01/17/the-kids-are-going-to-be-alright/feed/ 0 76449
Three Short Responses To The Pacing Problem https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/ https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/#respond Tue, 27 Nov 2018 17:16:38 +0000 https://techliberation.com/?p=76419

Contemporary tech criticism displays an anti-nostalgia. Instead of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained that, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”

Here are three short responses.

Technological Determinism

Part of what drives the worry about a pacing problem is rooted in a belief in technological determinism. Determinism aligns human actors and technological objects in a causal relationship. Technology acts on society as an outside force. In this view of the world, technology is separate from society and thus can advance by leaps and bounds before society and regulation catch up. In other words, technology is made an independent variable which acts upon us all.

Yet, that doesn’t describe the world in which technological objects are created and sustained. The iPhone was created by Apple following the success of the iPod in melding the hardware platform with the content of the mobile web, ultimately for the purpose of boosting sales. And people became enamored with it, lining up days before its release to grab one. Technologies aren’t alien objects. They are molded by particular interests and institutional goals, and rooted in society, especially the bourgeois virtues.

Technologies exist within human ecology, just as economic systems do. To make technology an outside force misplaces the role of human values in the creation and adoption of innovation. As separated from society, determinism allows for technology to be both mythologized and demonized. Technologies cannot outpace our ability to adapt. Rather, the speed of change, of innovation, is rate limited by society’s ability to adapt. As Robin Hanson explained, “society’s ability to adapt is the primary constraint on how fast we adopt new technologies.”

The Technological Accident

The pacing problem also gains purchase because new technologies create the possibility for new accidents. As philosopher Paul Virilio wrote,

To invent the sailing ship or the steamer is to invent the shipwreck. To invent the train is to invent the rail accident of derailment. To invent the family automobile is to produce the pile-up on the highway.

Every newly created technology comes with the potential for problems. So the possibility set for accidents increases dramatically when a new technology comes onto the scene. But it isn’t the case that all of those risks will be manifested. Only a subset of potential problems will ever be realized. As such, it isn’t that social and regulatory systems need to have all the answers in advance. Rather, there need to be flexible systems in place to deal with issues as they actually arise.

Regulation as a Real Option

Perhaps, however, we have been thinking about the pacing problem incorrectly. Maybe the pacing problem isn’t a problem as much as it is a reflection of uncertainty. Again, Vivek Wadhwa pithily explained this problem, saying, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” Consider the phrase “what the laws should be.” There is little agreement as to how we should regulate social media. In other words, there is regulatory uncertainty. The concept of a real option might help make sense of this.

Real options are the investment choices that a company’s management will make in order “to expand, change or curtail projects based on changing economic, technological or market conditions.” While originally used in strictly financial terms, economists Avinash Dixit and Robert Pindyck have adapted this concept to understand how firms invest, or not, in the face of regulatory uncertainty. As you read this paragraph from the first chapter of their book on the subject, replace the term investment with regulation and see what you think:

Most investment decisions share three important characteristics in varying degrees. First, the investment is partially or completely irreversible. In other words, the initial cost of investment is at least partially sunk; you cannot recover it all should you change your mind. Second, there is uncertainty over the future rewards from the investment. The best you can do is to assess the probabilities of the alternative outcomes that can mean greater or smaller profit (or loss) for your venture. Third, you have some leeway about the timing of your investment. You can postpone action to get more information (but never, of course, complete certainty) about the future.

There are strong corollaries. First, most regulatory decisions are difficult to reverse. It is rare for regulations to be stricken from the books, and even if they are, the affected industries are often impacted in more subtle ways. Second, the potential benefits from a regulatory action are uncertain, as Wadhwa pointed out. And finally, government bodies have some leeway about the timing of their regulatory actions. Putting all of this together, regulation might be thought of as a real option.

As economists Bronwyn H. Hall and Beethika Khan explained,

The most important thing to observe about this kind of [investment] decision is that at any point in time the choice being made is not a choice between adopting and not adopting but a choice between adopting now or deferring the decision until later.

In the same way, government regulation isn’t about regulating now or not regulating at all, but about regulating now or deferring the decision until later. That sounds a lot to me like the pacing problem.  
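The deferral logic can be made concrete with a toy calculation. The numbers below are my own invented assumptions, not anything from Dixit and Pindyck or Hall and Khan: an irreversible action costs 100 up front, and one period later we learn whether its per-period payoff is high or low.

```python
# Invented toy parameters: irreversible up-front cost, 10% discount rate,
# and a 50/50 chance the perpetual per-period payoff is 15 or 4.
cost = 100.0
r = 0.10
p_high, payoff_high, payoff_low = 0.5, 15.0, 4.0

def perpetuity(payoff):
    # Present value of a payoff received every period forever,
    # with the first payment arriving immediately.
    return payoff * (1 + r) / r

# Act now: commit before the uncertainty resolves.
npv_now = (p_high * perpetuity(payoff_high)
           + (1 - p_high) * perpetuity(payoff_low)
           - cost)

# Defer one period: pay the cost only if the payoff turns out high.
npv_wait = p_high * (perpetuity(payoff_high) - cost) / (1 + r)

print(f"act now: {npv_now:.1f}, wait and see: {npv_wait:.1f}")
```

Under these assumptions acting now has a positive expected value (about 4.5), yet waiting is worth far more (about 29.5), because deferral lets the decision-maker avoid the irreversible cost in the bad state. That is the sense in which regulating now versus deferring resembles exercising versus holding an option.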

]]>
https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/feed/ 0 76419
On Isolation & Inattention Panics https://techliberation.com/2018/11/26/on-isolation-inattention-panics/ https://techliberation.com/2018/11/26/on-isolation-inattention-panics/#respond Mon, 26 Nov 2018 21:33:31 +0000 https://techliberation.com/?p=76414

Last week, science writer Michael Shermer tweeted out this old xkcd comic strip that I had somehow missed before. Shermer noted that it represented, “another reply to pessimists bemoaning modern technologies as soul-crushing and isolating.” Similarly, there’s this meme that has been making the rounds on Twitter and which jokes about how newspapers made us as antisocial in the past much as newer technologies supposedly do today.

The sentiments expressed by the comic and that image make it clear how people often tend to romanticize past technologies or fail to remember that many people expressed the same fears about them as critics do today about newer ones. I’ve written dozens of articles about “moral panics” and “techno-panics,” most of which are cataloged here. The common theme of those essays is that, when it comes to fears about innovations, there really is nothing new under the sun. Academics, social critics, religious leaders, politicians and even average parents tend to panic over the same problems time and time again. The only thing that changes is the particular medium or technology that is the object of their collective ire.

Isolation and inattention panics are some of the most common “fear cycles” that we have seen repeatedly play out through the ages. Indeed, sociologist Frank Furedi reminds us that panics over isolation, distraction, or inattention have been quite common. Consistent with that xkcd comic, Furedi has documented how “inattention has served as a sublimated focus for apprehensions about moral authority” going back to at least the early 1700s and continuing on through the next two centuries. During those years, he notes:

Inattention was increasingly perceived as an obstacle to the socialisation of young people. Countering the habit of inattention among children and young people became the central concern of pedagogy in the 18th century […]  During the 19th century, the state of inattention became thoroughly moralised. Inattentiveness was perceived as a threat to industrial progress, scientific advance and prosperity.

Today, however, the panic over inattention has ramped up, Furedi argues:

Unlike in the 18th century when it was perceived as abnormal, today inattention is often presented as the normal state. The current era is frequently characterised as the Age of Distraction, and inattention is no longer depicted as a condition that afflicts a few. Nowadays, the erosion of humanity’s capacity for attention is portrayed as an existential problem, linked with the allegedly corrosive effects of digitally driven streams of information relentlessly flowing our way.

While I generally agree these panics are overblown, one must also admit that there is some degree of truth to all of them in the sense that each new technology presents us with some added level of potential distraction. And today we have more of those potential distractions than ever before. So, something’s gotta give, right?

“What information consumes is rather obvious,” Nobel Prize-winning economist and psychologist Herbert Simon remarked in 1971: “the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.” Almost a half century later, we are confronted with a “wealth of information” that Simon could not have imagined, and that’s what has many critics worried about the potentially socially-destructive consequences of new technologies.

But social critics who write about this supposed “poverty of attention” problem have taken matters to the extreme and concocted some entertaining rhetorical ploys in an attempt to one-up each other on the panic meter. In a 2005 book, I discussed dozens of colorful book and article titles and terms like: “information overload;” “cognitive overload;” “information anxiety;” “information fatigue syndrome;” “information paralysis;” “techno-stress;” “information pollution;” “data smog;” and even “data asphyxiation.”

And that was all pre-Facebook and pre-Twitter! A dozen years later, this isolation-is-killing-us theme is becoming even more prevalent in books and articles. There are far too many books of this ilk to list here, but a quick sampling of the most popular ones would include: Nick Carr (The Shallows), Franklin Foer (World Without Mind), Maggie Jackson (Distracted), Sherry Turkle (Alone Together), Eli Pariser (The Filter Bubble), John Freeman (The Tyranny of E-Mail), and Cass Sunstein (Republic.com), among many others. I have an entire bookshelf in my office filled with nothing but books of this variety, all penned over just the past 20 years.

Perhaps the sheer volume of panicky tracts suggests that there must be something to these fears. Let’s be clear: isolation, distraction, or inattention  are problems. But to some extent, these are problems that have always been with us and are not going away any time soon.

Social critics and cranky intellectuals love to complain about new technologies, and that’s never going to end. The best of that criticism will incorporate practical strategies for living a better life and suggest steps for how we all can find a better balance with the technologies that dominate our lives–today, tomorrow, and on into the future.

Sadly, most critics take a different approach which implicitly suggests we have somehow departed a golden age of living and that only a dystopian hellscape awaits us from here on out (if we’re not already living in it). It’s utter poppycock. As I’ve written before, pastoral myths and public square fantasies about some supposedly glorious but now-lost “good old days” are a lot of fun right up until you realize that the old days were, in fact, eras of abject misery. By almost every meaningful metric, we are better today than we were in the past, and that is probably just as true for things that we don’t have metrics for, including “attentiveness” or “distractibility.”

We’d all like to think that people–especially kids–were somehow more attentive, more social, and more civil in the past than they are in today’s seemingly more cluttered, cacophonous, hurly-burly modern era. But there is absolutely no concrete evidence suggesting that is true and, as Furedi shows, there exists plenty of anecdotal evidence that when it comes to inattention, things really haven’t changed that much at all. We can and should strive to do better and find constructive solutions to problems such as these, but we should not go overboard with rhetorical threat inflation about the nature or severity of this problem. Nor should we pursue impractical or highly destructive solutions that would undermine the many other benefits associated with our new technological capabilities.

Ironically, at their very worst, isolation or inattention panics accomplish the exact opposite of what some social critics suggest that they desire. The critics often claim that they are just looking out for the next generation and trying to chart a better path for them. In reality, however, those critics are often just engaging in the same sort of fear-mongering and youth-shaming that countless other generations have before with their “KIDS THESE DAYS!” complaints. It’s always easy for intellectuals to tap into the worst fears of parents and policymakers by suggesting that the younger generation has lost the ability to reason or communicate effectively. And yet, each generation somehow figures out how to muddle through. We are an imperfect species, but we are also a highly resilient one.

Of course, that won’t stop an entirely new generation of critics from panicking about whatever future technology is apparently distracting the next generation to death. Fear sells and panics get attention. The calmer truths that history teaches us take longer to appreciate.

Bill Mauldin, Life magazine, Jan. 1950


]]>
https://techliberation.com/2018/11/26/on-isolation-inattention-panics/feed/ 0 76414
Smart Device Paranoia https://techliberation.com/2015/10/05/smart-device-paranoia/ https://techliberation.com/2015/10/05/smart-device-paranoia/#comments Mon, 05 Oct 2015 21:16:04 +0000 http://techliberation.com/?p=75822

The idea that the world needs further dumbing down was really the last thing on my mind. Yet this is exactly what Jay Stanley argues for in a recent post on Free Future, the ACLU tech blog.

Specifically, Stanley is concerned by the proliferation of “smart devices,” from smart homes to smart watches, and the enigmatic algorithms that power them. Exhibit A: The Volkswagen “smart control devices” designed to deliberately mis-measure diesel emissions. Far from an isolated case, Stanley extrapolates the Volkswagen scandal into a parable about the dangers of smart devices more generally, and calls for the recognition of “the virtue of dumbness”:

When we flip a coin, its dumbness is crucial. It doesn’t know that the visiting team is the massive underdog, that the captain’s sister just died of cancer, and that the coach is at risk of losing his job. It’s the coin’s very dumbness that makes everyone turn to it as a decider. … But imagine the referee has replaced it with a computer programmed to perform a virtual coin flip. There’s a reason we recoil at that idea. If we were ever to trust a computer with such a task, it would only be after a thorough examination of the computer’s code, mainly to find out whether the computer’s decision is based on “knowledge” of some kind, or whether it is blind as it should be.

While recoiling is a bit melodramatic, it’s clear from this that “dumbness” is not even the key issue at stake. What Stanley is really concerned about is bias or partiality (what he dubs “neutrality anxiety”), which is neither unique to smart devices nor absent from “dumb” ones like coins, and the same goes for opacity. A physical coin can be biased, a programmed coin can be fair, and at first glance the fairness of a physical coin is not really any more obvious.

Yet this is the argument Stanley uses to justify his proposed requirement that all smart device code be open to public scrutiny going forward. Based on a knee-jerk commitment to transparency, he gives zero weight to the social benefit of allowing software creators a measure of trade secrecy, especially as a potential substitute for patent and copyright protections. This is all the more ironic given that Volkswagen used existing copyright law to hide its own malfeasance.

More importantly, the idea that the only way to check a virtual coin is to look at the source code is a serious non sequitur. After all, in-use testing was how Volkswagen was actually caught in the end. What matters, in other words, is how the coin behaves in large and varied samples. In either the virtual or the physical case, the best and least intrusive way to check a coin is simply to do thousands of flips. But what takes hours with a dumb coin takes a fraction of a second with a virtual coin. I know which I prefer.
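The “thousands of flips” test is easy to make concrete. Here is a minimal sketch (hypothetical code, not anything from Stanley’s post): treat the coin, physical or virtual, as a black box and test its output statistically, with no access to the source code. It uses the standard normal approximation to the binomial, and the function and variable names are my own invention for illustration.

```python
import math
import random

def looks_fair(flip, n=100_000):
    """Flip a black-box coin n times and test whether the observed head
    count is consistent with a fair coin. Under fairness the head count
    is approximately normal with mean n/2 and variance n/4, so we flag
    the coin as biased when the count sits too many standard deviations
    from n/2."""
    heads = sum(flip() for _ in range(n))
    z = abs(heads - n / 2) / math.sqrt(n / 4)
    return z < 4.9  # roughly a one-in-a-million false-positive rate

random.seed(0)
fair_coin = lambda: random.random() < 0.5
biased_coin = lambda: random.random() < 0.52  # a subtle 2-point bias

print(looks_fair(fair_coin))    # True: indistinguishable from fair
print(looks_fair(biased_coin))  # False: 100k flips expose the bias
```

This is exactly the in-use testing that caught Volkswagen: no code audit required, just enough observed behavior to make cheating statistically impossible to hide.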

An hour versus a second may seem like a trivial advantage, but as an object or problem becomes more complex, the opacity and limitations of “dumb” things only grow. Tom Brady’s “dumb” football is a case in point. After Deflategate, I have much more confidence in the unbiasedness of the virtual ball in Madden. And to eliminate any doubt, I can once again run simulations, a standard practice among video game designers. This is what allows balance to be achieved in complex, asymmetrical video game maps, for example, while American football is stuck with a rectangle and switching ends at half-time.

In other words, despite Stanley’s repeated assertion that smart devices inevitably sacrifice equity for ruthless efficiency (like a hypothetical traffic light that turns green when it detects surgeons and corporate VPs), embedding algorithms is a demonstrably useful tool for achieving equity in the face of complexity that mirrors the real world. Think, for instance, of the algorithms that draw congressional districts to eliminate gerrymandering.

Yet even if smart devices and algorithms can improve both efficiency and equity, nonetheless they require a dose of human intention and therein lies the danger. Or does it?

Imagine a person, running late for something crucial, sitting at a seemingly interminable red light getting tense and angry. Today he may rail at his bad luck and at the universe, but in the future he will feel he’s the victim of a mind—and of whatever political entities are responsible for the shape of that signal’s logic.

In this future world of omnipresent agency, Stanley essentially imagines a pandemic of paranoid schizophrenia, where conspiracies lurk in every corner and strings of bad luck are interpreted as punishment by the puppet masters. But this seems to get things exactly backwards. Smart devices are useful precisely because they remove agency, both in terms of our personal cognitive effort (as when the lights turn on as you enter a room) and in terms of discretionary influence over our lives.

In this respect, one of Stanley’s own examples directly contradicts his thesis. He points to

an award-winning image of a Gaza City funeral procession, which was challenged due to manual adjustments the photographer made to its tone. I suspect that if the adjustments had been made automatically by his camera (being today little more than a specialized computer), the photo would not have been questioned.

Exactly! The smart focus and light balance of a modern point-and-shoot camera not only make us all better photographers, but also remove the worry of unfair and manipulative human input. After all, before the automated traffic light there was the traffic guard, who let drivers through at his or her discretion. The move to automated lights condensed that human agency to the point of initial creation, dramatically reducing the potential for abuse. If smart devices mean we can automatically detect an ambulance or adjust camera aperture, that is precisely the same sort of improvement.

The fact is that a benign rationality already pervades the world around us, embedded not just in our technology but also in our laws and institutions. Externalizing intelligence into rules and structures is the stuff of civilization, what philosophers call “extended cognition.” In the words of philosopher Andy Clark:

Advanced cognition depends crucially on our ability to dissipate reasoning: to diffuse achieved knowledge and practical wisdom through complex social structures, and to reduce the loads on individual brains by locating those brains in complex webs of linguistic, social, political and institutional constraints.

And yet we go through life without constantly looking over our shoulders. This is because we have adapted to the point where we are happily ignorant of the intelligence surrounding us. The hiddenness is a feature, not a bug, as it allows our attention to move on to more pressing things.

Critics of new technology always fail to appreciate this adaptability of human beings, implicitly answering 21st century thought experiments with 20th century prejudices. The enduring lesson of extended cognition is that smart devices promise to make not just our stuff but us , as living creatures, in a very real way more intelligent, expanding our own capabilities rather than subordinating us to the whim of invisible others.

To that end, I can’t help but be reminded of the tagline at TechLiberation.com: “The problem is not whether machines think, but whether men do.”

]]>
https://techliberation.com/2015/10/05/smart-device-paranoia/feed/ 1 75822
New ITIF Study on “Privacy Panics” https://techliberation.com/2015/09/11/new-itif-study-on-privacy-panics/ https://techliberation.com/2015/09/11/new-itif-study-on-privacy-panics/#comments Sat, 12 Sep 2015 02:02:16 +0000 http://techliberation.com/?p=75718

It was my pleasure this week to be invited to deliver some comments at an event hosted by the Information Technology and Innovation Foundation (ITIF) to coincide with the release of their latest study, “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies.” The goal of the new ITIF report, which was co-authored by Daniel Castro and Alan McQuinn, is to highlight the dangers associated with “the cycle of panic that occurs when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on.” (p. 1)

As Castro and McQuinn describe it, the privacy panic cycle “charts how perceived privacy fears about a technology grow rapidly at the beginning, but eventually decline over time.” They divide this cycle into four phases: Trusting Beginnings, Rising Panic, Deflating Fears, and Moving On. Here’s how they depict it in an image:

Privacy Panic Cycle - 1

 

The report can be seen as an extension of the literature on “moral panics” and “techno-panics.” Some relevant texts in this field include Stanley Cohen’s Folk Devils and Moral Panics, Erich Goode and Nachman Ben-Yehuda’s Moral Panics: The Social Construction of Deviance, Cass Sunstein’s Laws of Fear, and Barry Glassner’s Culture of Fear. But there’s a rich body of academic writing on this topic and I’ve tried to make a small contribution to this literature in recent years, most notably with a lengthy 2013 law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” In that paper, I try to connect the literature on moral panic theory (which mostly focuses on panics about speech and cultural changes) to other scholarship about how panics and threat inflation are used in many other contexts, including the fields of national security policy, cybersecurity, and more.

I define “technopanic” as “intense public, political, and academic responses to the emergence or use of media or technologies, especially by the young.” “Threat inflation” has been defined by national security policy experts Jane K. Cramer and A. Trevor Thrall as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.”

Castro and McQuinn’s new study on privacy panic cycles fits neatly within this analytical framework and makes an important contribution to the literature. They warn of the real dangers associated with these privacy panics, especially in terms of lost opportunities for innovation. “Policymakers should not get caught up in the panics that follow in the wake of new technologies,” they argue, “and they should not allow hypothetical, speculative, or unverified claims to color the policies they put in place. Similarly, they should not allow unsubstantiated claims put forth by privacy fundamentalists to derail legitimate public sector efforts to use technology to improve society,” they say. (p. 28)

I think one of the most important takeaways from the study is that, as Castro and McQuinn note, “history has shown, many of the overinflated claims about loss of privacy have never materialized.” (p. 28) They identify many reasons why that may be the case but, most notably, they explain how societal attitudes often quickly adjust and also that “social norms dissuade many practices that are feasible but undesirable.” (p. 28) I have spent a lot of time thinking through this process of individual and social acclimation to new technologies and, most recently, wrote an essay on this topic entitled, “Muddling Through: How We Learn to Cope with Technological Change.”

Castro and McQuinn highlight several historical case studies that illustrate how privacy panics play out in practice, including photography, the transistor, and RFID tags. They then map out how various new technologies are currently—or might soon be—experiencing a privacy panic, including drones, facial recognition, connected cars, behavioral advertising, the Internet of Things, and wearable tech. Here’s where Castro and McQuinn believe each of those technologies currently falls on the privacy panic curve.

 

Privacy Panic Cycle - 2

One problem with the ITIF report, however, is that it avoids the question of what constitutes a serious enough privacy “harm” to actually be worth panicking over. Certainly there must be something that deserves special concern – perhaps even a little bit of panic. Of course, as I noted in my remarks at the event, this is a problem with a great deal of the literature in this field, owing to the challenge of defining what we even mean by “privacy” or “privacy harm.” Nonetheless, while some privacy fundamentalists are far too aggressive in using amorphous conceptions of privacy harms to fuel privacy panics, it can also be the case that others (like Castro, McQuinn, and myself) don’t do enough to specify when extremely serious privacy problems exist that warrant heightened concern.

The ITIF report rightly singles out the many groups that all too often use fear tactics and threat inflation to advance their own agendas. In the academic literature on moral panics, these people or groups are referred to as “fear entrepreneurs.” They hope to create and then take advantage of a state of fear to demand that “something must be done” about supposed problems that are often either greatly overstated or which will be solved (or just go away) over time. (For more on “fear entrepreneurs,” see Frank Furedi’s outstanding 2009 article on “Precautionary Culture and the Rise of Probabilistic Risk Assessment.”) These individuals and groups often end up having a disproportionate impact on policy debates and, through their vociferous activism, threaten to achieve a sort of “heckler’s veto” over digital innovation.

However, as I stressed in my remarks at ITIF’s launch event for the study, I believe that Castro and McQuinn were wrong to single out the International Association of Privacy Professionals (IAPP) as one of these troublemakers. Castro and McQuinn claim that “there is now a professional class of people whose job is to manage privacy risks and promote the idea that technology is becoming more invasive. These privacy professionals have a vested interest in inflating the perceived privacy risk of new technologies as their livelihood depends on businesses’ willingness to pay them to address these concerns.” (p. 8)

I think that mischaracterizes the role that most IAPP-trained privacy professionals play today. I have done a lot of work with IAPP itself and many of the privacy professionals they have trained. In my experience, these folks aren’t trying to fan the flames of “privacy panics.” To the contrary, many (perhaps most) IAPP professionals are actively involved in putting out those fires or making sure that they do not start raging in the first place. This is particularly true of the huge number of IAPP-trained privacy professionals who work for major technology companies and who work hard every day to find practical solutions to real-world privacy and security-related concerns.

Of course, as with any large membership organization, one can find some IAPP-trained privacy professionals who may indeed be guilty of fueling privacy panics for personal or organizational purposes. After all, some IAPP-trained folks work for privacy advocacy organizations which could be classified as “privacy fundamentalists” in their philosophical orientation. But just because some IAPP-trained people play techno-panic games, it certainly doesn’t mean that most of them do.

Relatedly, another small nitpick I have with the ITIF study is that it groups together a large number of privacy and security-focused tech policy groups and implies that they are all equally guilty of fueling privacy panics. In reality, there is a small core group of individuals and advocacy organizations who are far more vociferous and extreme in their privacy panic rhetoric. Others may be guilty of that at times, but not nearly to the same extent as the most panicky Chicken Littles.

The only other problem I had with the study, and this is really quite a small matter, is that I would have liked to see some discussion of strategies we might employ to counter privacy panics, or to lessen the likelihood that they develop at all. In my own work, I have tried to develop constructive solutions to privacy and security-related concerns that might give rise to panics. Those solutions include things like education and tech literacy efforts, empowerment tools, transparency efforts, and so on. It’s also worth reminding concerned critics that there exists a broad range of existing legal remedies that can help address privacy concerns after the fact. These include torts and common law solutions, contractual remedies, class actions, other targeted legal solutions, and enforcement against “unfair and deceptive practices” by the Federal Trade Commission or state attorneys general. And there are also important industry self-regulatory efforts and best practices that can help alleviate many of these privacy concerns. I would have liked the ITIF study to address these or other potential solutions to privacy panics.

Overall, however, I thought that the ITIF report makes an important contribution to the literature in this field and provides us with a useful analytic framework to help us evaluate and critique privacy-related technopanics in the future.

The video of the launch event is below and the full paper can be found here. Also, for further reading on technopanics, see my compendium of 40 essays I have written on the topic.

]]>
https://techliberation.com/2015/09/11/new-itif-study-on-privacy-panics/feed/ 1 75718
Again, We Humans Are Pretty Good at Adapting to Technological Change https://techliberation.com/2015/01/16/again-we-humans-are-pretty-good-at-adapting-to-technological-change/ https://techliberation.com/2015/01/16/again-we-humans-are-pretty-good-at-adapting-to-technological-change/#respond Fri, 16 Jan 2015 16:58:19 +0000 http://techliberation.com/?p=75292

Claire Cain Miller of The New York Times posted an interesting story yesterday noting how, “Technology Has Made Life Different, but Not Necessarily More Stressful.” Her essay builds on a new study by researchers at the Pew Research Center and Rutgers University on “Social Media and the Cost of Caring.” Miller’s essay and this new Pew/Rutgers study indirectly make a point that I am always discussing in my own work, but that is often ignored or downplayed by many technological critics, namely: We humans have repeatedly proven quite good at adapting to technological change, even when it entails some heartburn along the way.

The major takeaway of the Pew/Rutgers study was that, “social media users are not any more likely to feel stress than others, but there is a subgroup of social media users who are more aware of stressful events in their friends’ lives and this subgroup of social media users does feel more stress.” Commenting on the study, Miller of the Times notes:

Fear of technology is nothing new. Telephones, watches and televisions were similarly believed to interrupt people’s lives and pressure them to be more productive. In some ways they did, but the benefits offset the stressors. New technology is making our lives different, but not necessarily more stressful than they would have been otherwise. “It’s yet another example of how we overestimate the effect these technologies are having in our lives,” said Keith Hampton, a sociologist at Rutgers and an author of the study.  . . .  Just as the telephone made it easier to maintain in-person relationships but neither replaced nor ruined them, this recent research suggests that digital technology can become a tool to augment the relationships humans already have.

I found this of great interest because I have written about how humans assimilate new technologies into their lives and become more resilient in the process as they learn various coping techniques. I elaborated on these issues in a lengthy essay last summer entitled,  “Muddling Through: How We Learn to Cope with Technological Change.” I borrowed the term “muddling through” from Joel Garreau’s terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human.  Garreau argued that history can be viewed “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”

Garreau associated this with what he called the “Prevail” scenario, and he contrasted it with the “Heaven” scenario, in which technology drives history relentlessly, and in almost every way for the better, and the “Hell” scenario, in which “technology is used for extreme evil, threatening humanity with extinction.” Under the “Prevail” scenario, Garreau argued, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he concluded. (p. 154) Or, as John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:

technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.

In my essay last summer, I sketched out the reasons why I think this “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process. Again, it comes down to the fact that people and institutions learn to cope with technological change and become more resilient over time. It’s a learning process, and we humans are good at rolling with the punches and finding new baselines along the way. While “muddling through” can sometimes be quite difficult and messy, we adjust to most of the new technological realities we face and, over time, find constructive solutions to the really hard problems.

So, while it’s always good to reflect on the challenges of life in an age of never-ending, rapid-fire technological change, there’s almost never cause for panic. Read my old essay for more discussion on why I remain so optimistic about the human condition.

]]>
https://techliberation.com/2015/01/16/again-we-humans-are-pretty-good-at-adapting-to-technological-change/feed/ 0 75292
The 10 Most-Read Posts of 2014 https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/ https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/#comments Tue, 30 Dec 2014 16:36:34 +0000 http://techliberation.com/?p=75156

As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy.

  10. New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts.

In July, Jerry Brito wrote about New York’s proposed framework for regulating digital currencies like Bitcoin.

My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.
  9. Google Fiber: The Uber of Broadband

In February, I noted some of the parallels between Google Fiber and ride-sharing, in that new entrants are upending the competitive and regulatory status quo to the benefit of consumers.

The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly-situated competitors and resist the temptation to remedy past errors with more distortions.
  8. The Debate over the Sharing Economy: Talking Points & Recommended Reading

In September, Adam Thierer appeared on Fox Business Network’s Stossel show to talk about the sharing economy. In a TLF post, he expands upon his televised commentary and highlights five main points.

  7. CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?

After attending the 2014 Consumer Electronics Show in January, Adam wrote a prescient post about the promise of the Internet of Things and the regulatory risks ahead.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers…. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.
  6. Defining “Technology”

Earlier this year, Adam compiled examples of how technologists and experts define “technology,” with entries ranging from the Oxford Dictionary to Peter Thiel. It’s a slippery exercise, but

if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”
  5. The Problem with “Pessimism Porn”

Adam highlights the tendency of tech press, academics, and activists to mislead the public about technology policy by sensationalizing technology risks.

The problem with all this, of course, is that it perpetuates societal fears and distrust. It also sometimes leads to misguided policies based on hypothetical worst-case thinking…. [I]f we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon them—it means that best-case scenarios will never come about.
  4. Mark T. Williams predicted Bitcoin’s price would be under $10 by now; it’s over $600

Professor Mark T. Williams predicted in December 2013 that by mid-2014, Bitcoin’s price would fall to below $10. In mid-2014, Jerry commends Prof. Williams for providing, unlike most Bitcoin watchers, a bold and falsifiable prediction about Bitcoin’s value. However, as Jerry points out, that prediction was erroneous: Bitcoin’s 2014 collapse never happened and the digital currency’s value exceeded $600.

  3. What Vox Doesn’t Get About the “Battle for the Future of the Internet”

In May, Tim Lee wrote a Vox piece about net neutrality and the Netflix-Comcast interconnection fight. Eli Dourado posted a widely-read and useful corrective to some of the handwringing in the Vox piece about interconnection, ISP market power, and the future of the Internet.

I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless…. There is nothing unseemly about Netflix making … payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).
  2. Muddling Through: How We Learn to Cope with Technological Change

The second most-read TLF post of 2014 is also the longest and most philosophical in this top-10 list. Adam wrote a popular and in-depth post about the social effects of technological change and notes that technology advances are largely for consumers’ benefit, yet “[m]odern thinking and scholarship on the impact of technological change on societies has been largely dominated by skeptics and critics.” The nature of human resilience, Adam explains, should encourage a cautiously optimistic view of technological change.

  1. Help me answer Senate committee’s questions about Bitcoin

Two days into 2014, Jerry wrote the most-read TLF piece of the past year. Jerry had testified before the Senate Homeland Security and Governmental Affairs Committee in 2013 as an expert on Bitcoin. The Committee requested more information about Bitcoin post-hearing and Jerry solicited comment from our readers.

Thank you to our loyal readers for continuing to visit The Technology Liberation Front. It was a busy year for tech and telecom policy, and 2015 promises to be similarly exciting. Have a happy and safe New Year!

]]>
https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/feed/ 1 75156
video: Cap Hill Briefing on Emerging Tech Policy Issues https://techliberation.com/2014/06/12/video-cap-hill-briefing-on-emerging-tech-policy-issues/ https://techliberation.com/2014/06/12/video-cap-hill-briefing-on-emerging-tech-policy-issues/#comments Thu, 12 Jun 2014 15:53:33 +0000 http://techliberation.com/?p=74611

I recently did a presentation for Capitol Hill staffers about emerging technology policy issues (driverless cars, the “Internet of Things,” wearable tech, private drones, “biohacking,” etc) and the various policy issues they would give rise to (privacy, safety, security, economic disruptions, etc.). The talk is derived from my new little book on “Permissionless Innovation,” but in coming months I will be releasing big papers on each of the topics discussed here.

]]>
https://techliberation.com/2014/06/12/video-cap-hill-briefing-on-emerging-tech-policy-issues/feed/ 1 74611
Patrick Ruffini on the defeat of SOPA https://techliberation.com/2013/07/02/patrick-ruffini-on-the-defeat-of-sopa/ https://techliberation.com/2013/07/02/patrick-ruffini-on-the-defeat-of-sopa/#respond Tue, 02 Jul 2013 10:00:23 +0000 http://techliberation.com/?p=45095

Patrick Ruffini, political strategist, author, and President of Engage, a digital agency in Washington, DC, discusses his latest book with coauthors David Segal and David Moon: Hacking Politics: How Geeks, Progressives, the Tea Party, Gamers, Anarchists, and Suits Teamed Up to Defeat SOPA and Save the Internet. Ruffini covers the history behind SOPA, its implications for Internet freedom, the “Internet blackout” in January of 2012, and how the threat of SOPA united activists, technology companies, and the broader Internet community.

]]>
https://techliberation.com/2013/07/02/patrick-ruffini-on-the-defeat-of-sopa/feed/ 0 45095
The Media’s Sound and Fury Over NSA Surveillance https://techliberation.com/2013/06/10/the-medias-sound-and-fury-over-nsa-surveillance/ https://techliberation.com/2013/06/10/the-medias-sound-and-fury-over-nsa-surveillance/#comments Mon, 10 Jun 2013 13:35:59 +0000 http://techliberation.com/?p=44926

***Cross-posted from Forbes.com***

It was, to paraphrase Yogi Berra, déjà vu all over again.  Fielding calls last week from journalists about reports the NSA had been engaged in massive and secret data mining of phone records and Internet traffic, I couldn’t help but wonder why anyone was surprised by the so-called revelations.

Not only had the surveillance been going on for years, the activity had been reported all along—at least outside the mainstream media.  The programs involved have been the subject of longstanding concern and vocal criticism by advocacy groups on both the right and the left.

For those of us who had been following the story for a decade, this was no “bombshell.” No “leak” was required. There was no need for an “exposé” of what had long since been exposed.

As the Cato Institute’s Julian Sanchez and others reminded us, the NSA’s surveillance activities, and many of the details breathlessly reported last week, weren’t even secret.  They come up regularly in Congress, during hearings, for example, about renewal of the USA Patriot Act and the Foreign Intelligence Surveillance Act, the principal laws that govern the activity.

In those hearings, civil libertarians (Republicans and Democrats) show up to complain about the scope of the law and its secret enforcement, and are shot down as being soft on terrorism.  The laws are renewed and even extended, and the story goes back to sleep.

But for whatever reason, the mainstream media, like the corrupt Captain Renault in “Casablanca,” collectively found itself last week “shocked, shocked” to discover widespread, warrantless electronic surveillance by the U.S. government.  Surveillance they’ve known about for years.

Let me be clear.  As one of the long-standing critics of these programs, and especially their lack of oversight and transparency, I have no objection to renewed interest in the story, even if the drama with which it is being reported smells more than a little sensational with a healthy whiff of opportunism.

In a week in which the media did little to distinguish itself, for example, The Washington Post stood out, and not in a good way.  As Ed Bott detailed in a withering post for ZDNet on Saturday, the Post substantially revised its most incendiary article, a Thursday piece that originally claimed nine major technology companies had provided direct access to their servers as part of the Prism program.

That “scoop” generated more froth than the original “revelation” that Verizon had been complying with government demands for customer call records.

Except that the Post’s sole source for its claims turned out to be a PowerPoint presentation of “dubious provenance.”  A day later, the editors had removed the most thrilling but unsubstantiated revelations about Prism from the article.  Yet in an unfortunate and baffling Orwellian twist, the paper made absolutely no mention of the “correction.”   As Bott points out, that violated not only common journalistic practice but the paper’s own revision and correction policy.

All this and much more, however, would have been in the service of a good cause–if, that is, it had led to the real debate about electronic surveillance that we’ve needed for over a decade.

Unfortunately, it won’t.  The mainstream media will move on to the next story soon enough, whether a natural or a man-made disaster.

And outside the Fourth Estate, few people will care or even notice when the scandal dies.  However they feel this week, most Americans simply aren’t informed or bothered enough about wholesale electronic surveillance to force any real accountability, let alone reform.  Those who are up in arms today might ask themselves where they were for the last decade or so, and whether their righteous indignation now is anything more than just that.

As Politico’s James Hohmann noted on Saturday, “Government snooping gets civil libertarians from both parties exercised, but this week’s revelations are likely to elicit a collective yawn from voters if past polling is any sign.”

Why so pessimistic?  I looked over what I’ve written on this topic in the past, and found the following essay, written in 2008, which appeared in slightly different form in my 2009 book, “The Laws of Disruption.”   It puts the NSA’s programs in historical context, and tries to present both the costs and benefits of how they’ve been implemented.  It points out why at least some aspects of these government activities are likely illegal, and what should be done to rein them in.

What I describe is just as scandalous as anything that came out last week, if not more so.

Yet I present it below with the sad realization that if I were writing it today–five years later–I wouldn’t need to change a single word.  Except maybe the last sentence.  And then, just maybe.

Searching Bits, Seizing Information

U.S. citizens are protected from unreasonable search and seizure of their property by their government.  In the Constitution, that right is enshrined in the Fourth Amendment, which was enacted in response to warrantless searches by British agents in the run-up to the Revolutionary War. Over the past century, the Supreme Court has increasingly seen the Fourth Amendment as a source of protection for personal space—the right to a “zone of privacy” that governments can invade only with probable cause that evidence of a crime will be revealed.

Under U.S. law, Americans have little in the way of protection of their privacy from businesses or from each other. The Fourth Amendment is an exception, albeit one that applies only to government.

But digital life has introduced new and thorny problems for Fourth Amendment law. Since the early part of the twentieth century, courts have struggled to extend the “zone of privacy” to intangible interests—a right to privacy, in other words, in one’s information. But to “search” and “seize” implies real world actions. People and places can be searched; property can be seized.

Information, on the other hand, need not take physical form, and can be reproduced infinitely without damaging the original. Since copies of data may exist, however temporarily, on thousands of random computers, in what sense do netizens have “property” rights to their information? Does intercepting data constitute a search or a seizure or neither?

The law of electronic surveillance avoids these abstract questions by focusing instead on a suspect’s expectations. Courts reviewing challenged investigations ask simply if the suspect believed the information acquired by the government was private data and whether his expectation of privacy was reasonable.

It is not the actual search and seizure that the Fourth Amendment forbids, after all, but unreasonable search and seizure. So the legal analysis asks what, under the circumstances, is reasonable. If you are holding a loud conversation in a public place, it isn’t reasonable for you to expect privacy, and the police can take advantage of whatever information they overhear. Most people assume, on the other hand, that data files stored on the hard drive of a home computer are private and cannot be copied without a warrant.

One problem with the “reasonable expectation” test is that as technology changes, so do user expectations. The faster the Law of Disruption accelerates, the more difficult it is for courts to keep pace. Once private telephones became common, for example, the Supreme Court required law enforcement agencies to follow special procedures for the search and seizure of conversations—that is, for wiretaps. Congress passed the first wiretap law, known as Title III, in 1968. As information technology has revolutionized communications and as user expectations have evolved, the courts and Congress have been forced to revise Title III repeatedly to keep it up to date.

In 1986, the Electronic Communications Privacy Act amended Title III to include new protection for electronic communications, including e-mail and communications over cellular and other wireless technologies. A model of reasonable lawmaking, the ECPA ensured these new forms of communication were generally protected while closing a loophole for criminals who were using them to evade the police. (By 2005, 92 percent of wiretaps targeted cell phones.)

As telephone service providers multiplied and networks moved from analog to digital, a 1994 revision required carriers to build in special access for investigators to get around new features such as call forwarding. Once a Title III warrant is issued, law enforcement agents can now simply log in to the suspect’s network provider and receive real-time streams of network traffic.

Since 1968, Title III has maintained an uneasy truce between the rights of citizens to keep their communications private and the ability of law enforcement to maintain technological parity with criminals. As the digital age progresses, this balance is harder to maintain. With each cycle of Moore’s Law, criminals discover new ways to use digital technology to improve the efficiency and secrecy of their operations, including encryption, anonymous e-mail resenders, and private telephone networks. During the 2008 terrorist attacks in Mumbai, for example, co-conspirators used television reports of police activity to keep the gunmen at various sites informed, using Internet telephones that were hard to trace.

As criminals adopt new technologies, law enforcement agencies predictably call for new surveillance powers. China alone employs more than 30,000 “Internet police” to monitor online traffic, part of what is sometimes known as the “Great Firewall of China.” The government apparently intercepts all Chinese-bound text messages and scans them for restricted words including democracy, earthquake, and milk powder.

The words are removed from the messages, and a copy of the original along with identifying information is stored on the government’s system. When Canadian human rights activists recently hacked into Chinese government networks, they discovered a cluster of message-logging computers that had recorded more than a million censored messages.

Netizens, increasingly fearful that the arms race between law enforcement and criminals will claim their privacy rights as unintended victims, are caught in the middle. Those fears became palpable after the September 11, 2001, terrorist attacks and those that followed in Indonesia, London, and Madrid.  The world is now engaged in a war with no measurable objectives for winning, fought against an anonymous and technologically savvy enemy who recruits, trains, and plans assaults largely through international communication networks. Security and surveillance of all varieties are now global priorities, eroding privacy interests significantly.

The emphasis on security over privacy is likely to be felt for decades to come. Some of the loss has already been felt in the real world. To protect ourselves from future attacks, everyone can now expect more invasive surveillance of their activities, whether through massive networks of closed-circuit TV cameras in large cities or increased screening of people and luggage during air travel.

The erosion of privacy is even more severe online. Intelligence is seen as the most effective weapon in a war against terrorists. With or without authorization, law enforcement agencies around the world have been monitoring large quantities of the world’s Internet data traffic. Title III has been extended to private networks and Internet phone companies, who must now insert government access points into their networks. (The FCC has proposed adding other providers of phone service, including universities and large corporations.)

Because of difficulties in isolating electronic communications associated with a single IP address, investigators now demand the complete traffic of large segments of addresses, that is, of many users. Data mining technology is applied after the fact to search the intercepted information for the relevant evidence.

Passed soon after 9/11, the USA Patriot Act went much further. The Patriot Act abandoned many of the hard-fought controls on electronic surveillance built into Title III. New “enhanced surveillance procedures” allow any judge to authorize electronic surveillance and lower the standard for warrants to seize voice mails.

The FBI was given the power to conduct wiretaps without warrants and to issue so-called national security letters to gag network operators from revealing their forced cooperation. Under a 2006 extension, FBI officials were given the power to issue NSLs that silenced the recipient forever, backed up with a penalty of up to five years in prison.

Gone is even a hint of the Supreme Court’s long-standing admonitions that search and seizure of information should be the investigatory tool of last resort.

Despite the relaxed rules, or perhaps inspired by them, the FBI acknowledged in 2007 that it had violated Title III and the Patriot Act repeatedly, illegally searching the telephone, Internet, and financial records of an unknown number of Americans. A Justice Department investigation found that from 2002 to 2005 the bureau had issued nearly 150,000 NSLs, a number the bureau had grossly under-reported to Congress.

Many of these letters violated even the relaxed requirements of the Patriot Act. The FBI habitually requested not only a suspect’s data but also those of people with whom he maintained regular contact—his “community of interest,” as the agency called it. “How could this happen?” FBI director Robert Mueller asked himself at the 2007 Senate hearings on the report. Mueller didn’t offer an answer.

Ultimately, a federal judge declared the FBI’s use of NSLs unconstitutional on free-speech grounds, a decision that is still on appeal.

The National Security Agency, which gathers foreign intelligence, undertook an even more disturbing expansion of its electronic surveillance powers.

Since the Constitution applies only within the U.S., foreign intelligence agencies are not required to operate within the limits of Title III. Instead, their information-gathering practices are held to a much more relaxed standard specified in the Foreign Intelligence Surveillance Act. FISA allows warrantless wiretaps anytime that intercepted communications do not include a U.S. citizen and when the communications are not conducted through U.S. networks. (The latter restriction was removed in 2008.)

Even these minimal requirements proved too restrictive for the agency. Concerned that U.S. operatives were organizing terrorist attacks electronically with overseas collaborators, President Bush authorized the NSA to bypass FISA and conduct warrantless electronic surveillance at will as long as one of the parties to the information exchange was believed to be outside the United States.

Some of the president’s staunchest allies found the NSA’s plan, dubbed the Terrorist Surveillance Program, of dubious legality. Just before the program became public in 2005, senior officials in the Justice Department refused to reauthorize it.

In a bizarre real-world game of cloak-and-dagger, presidential aides, including future attorney general Alberto Gonzales, rushed to the hospital room of then-attorney general John Ashcroft, who was seriously ill, in hopes of getting him to overrule his staff. Justice Department officials got wind of the end run and managed to get to Ashcroft first. Ashcroft, who was barely able to speak from painkillers, sided with his staff.

Many top officials, including Ashcroft and FBI director Mueller, threatened to resign over the incident. President Bush agreed to stop bypassing the FISA procedure and seek a change in the law to allow the NSA more flexibility. Congress eventually granted his request.

The NSA’s machinations were both clumsy and dangerous. Still, I confess to having considerable sympathy for those trying to obtain actionable intelligence from online activity. Post-9/11 assessments revealed embarrassing holes in the technological capabilities of most intelligence agencies worldwide. (Admittedly, it also revealed repeated failures to act on intelligence that was already collected.) Initially at least, the public demanded tougher measures to avoid future attacks.

Keeping pace with international terror organizations and still following national laws, however, is increasingly difficult. For one thing, communications of all kinds are quickly migrating to the cheaper and more open architecture of the Internet. An unintended consequence of this change is that the nationalities of those involved in intercepted communications are increasingly difficult to determine.

E-mail addresses and instant-message IDs don’t tell you the citizenship or even the location of the sender or receiver. Even telephone numbers don’t necessarily reveal a physical location. Internet telephone services such as Skype give their customers U.S. phone numbers regardless of their actual location. Without knowing the nationality of a suspect, it is hard to know what rights she is entitled to.

The architecture of the Internet raises even more obstacles against effective surveillance. Traditional telephone calls take place over a dedicated circuit connecting the caller and the person being called, making wiretaps relatively easy to establish. Only the cooperation of the suspect’s local exchange is required.

The Internet, however, operates as a single global exchange. E-mails, voice, video, and data files—whatever is being sent is broken into small packets of data. Each packet follows its own path between connected computers, largely determined by data traffic patterns present at the time of the communication.

Data may travel around the world even if its destination is local, crossing dozens of national borders along the way. It is only on the receiving end that the packets are reassembled.

This design, the genius of the Internet, improves network efficiency. It also provides a significant advantage to anyone trying to hide his activities.

On the other hand, NSLs and warrantless wiretapping on the scale apparently conducted by the NSA move us frighteningly close to the “general warrant” American colonists rejected in the Fourth Amendment. They were right to revolt over the unchecked power of an executive to do what it wants, whether in the name of orderly government, tax collection, or antiterrorism.

In trying to protect its citizens against future terror attacks, the secret operations of the U.S. government abandoned core principles of the Constitution. Even with the best intentions, governments that operate in secrecy and without judicial oversight quickly descend into totalitarianism. Only the intervention of corporate whistle-blowers, conscientious government officials, courts, and a free press brought the United States back from the brink of a different kind of terrorism.

Internet businesses may be entirely supportive of government efforts to improve the technology of policing. A society governed by laws is efficient, and efficiency is good for business. At the same time, no one is immune from the pressures of anxious customers who worry that the information they provide will be quietly delivered to whichever regulator asks for it. Secret surveillance raises the level of customer paranoia, leading rational businesses to avoid countries whose practices are not transparent.

Partly in response to the NSA program, companies and network operators are increasingly routing information flow around U.S. networks, fearing that even transient communications might be subject to large-scale collection and mining operations by law enforcement agencies. But aside from using private networks and storing data offshore, routing transmissions to avoid some locations is as hard to do as forcing them through a particular network or node.

The real guarantor of privacy in our digital lives may not be the rule of law. The Fourth Amendment and its counterparts work in the physical world, after all, because tangible property cannot be searched and seized in secret. Information, however, can be intercepted and copied without anyone knowing it. You may never know when or by whom your privacy has been invaded. That is what makes electronic surveillance more dangerous than traditional investigations, as the Supreme Court realized as early as 1967.

In the uneasy balance between the right to privacy and the needs of law enforcement, the scales are increasingly held by the Law of Disruption. More devices, more users, more computing power: the sheer volume of information and the rapid evolution of how it can be exchanged have created an ocean of data. Much of it can be captured, deciphered, and analyzed only with great (that is, expensive) effort. Moore’s Law lowers the costs to communicate, raising the costs for governments interested in the content of those communications.

The kind of electronic surveillance performed by the Chinese government is outrageous in its scope, but only the clumsiness of its technical implementation exposed it. Even if governments want to know everything that happens in our digital lives, and even if the law allows them or is currently powerless to stop them, there isn’t enough technology at their disposal to do it, or at least to do it secretly.

So far.

]]>
https://techliberation.com/2013/06/10/the-medias-sound-and-fury-over-nsa-surveillance/feed/ 1 44926
Making airspace available for ‘permissionless innovation’ https://techliberation.com/2013/04/23/making-airspace-available-for-permissionless-innovation/ https://techliberation.com/2013/04/23/making-airspace-available-for-permissionless-innovation/#comments Tue, 23 Apr 2013 19:32:44 +0000 http://techliberation.com/?p=44576

Today, Jerry Brito, Adam Thierer and I filed comments on the FAA’s proposed privacy rules for “test sites” for the integration of commercial drones into domestic airspace. I’ve been excited about this development ever since I learned that Congress had ordered the FAA to complete the integration by September 2015. Airspace is a vastly underutilized resource, and new technologies are just now becoming available that will enable us to make the most of it.

In our comments, we argue that airspace, like the Internet, could be a revolutionary platform for innovation:

Vint Cerf, one of the “fathers of the Internet,” credits “permissionless innovation” for the economic benefits that the Internet has generated. As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand. Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.

And in Wired today, I argue that preemptive privacy regulation is unnecessary and unwise:

Regulation at this juncture requires our over-speculating about which types of privacy violations might arise. Since many of these harms may never materialize, pre-emptive regulation is likely to overprotect privacy at the expense of innovation. Frankly, it wouldn’t even work. Imagine if we had tried to comprehensively regulate online privacy before allowing commercial use of the internet. We wouldn’t have even known how to. We wouldn’t have had the benefit of understanding how online commerce works, nor could we have anticipated the rise of social networking and related phenomena.

I expect us all to hear more about commercial drones in the near future. See Jerry’s piece in Reason last month or Larry Downes’s great post at the HBR blog for more.

]]>
https://techliberation.com/2013/04/23/making-airspace-available-for-permissionless-innovation/feed/ 1 44576
Toward a Technology “Watchful Waiting” Principle https://techliberation.com/2013/01/17/toward-a-technology-watchful-waiting-principle/ https://techliberation.com/2013/01/17/toward-a-technology-watchful-waiting-principle/#comments Thu, 17 Jan 2013 14:55:07 +0000 http://techliberation.com/?p=43462

When the smoke cleared and I found myself half caught-up on sleep, the information and sensory overload that was CES 2013 had ended.

There was a kind of split-personality to how I approached the event this year. Monday through Wednesday was spent in conference tracks, most of all the excellent Innovation Policy Summit put together by the Consumer Electronics Association. (Kudos again to Gary Shapiro, Michael Petricone and their team of logistics judo masters.)

The Summit has become an important annual event bringing together legislators, regulators, industry and advocates to help solidify the technology policy agenda for the coming year and, in this case, a new Congress.

I spent Thursday and Friday on the show floor, looking in particular for technologies that satisfy what I coined the Law of Disruption: social, political, and economic systems change incrementally, but technology changes exponentially.

What I found, as I wrote in a long post-mortem for Forbes, is that such technologies are well-represented at CES, but are mostly found at the edges of the show–literally.

In small booths away from the mega-displays of the TV, automotive, smartphone, and computer vendors, in hospitality suites in nearby hotels, or even in sponsored and spontaneous hackathons going on around town, I found ample evidence of a new breed of innovation and innovators, whose efforts may yield nothing today or even in a year, but which could become sudden, overnight market disrupters.

Increasingly, it’s one or the other, which is saying something all by itself. For one thing, how do incumbents compete with such all or nothing innovations?

That, however, is a subject for another day.

For now, consider again the policy implications of such dramatic transformations. As those of us sitting in room N254 debated the finer points of software patents, IP transition, copyright reform, and the misapplication of antitrust law to fast-changing technology industries (increasingly, that means ALL industries), just a few feet away the real world was changing under our feet.

The policy conference was notably tranquil this year, without such previous hot-button topics as net neutrality, SOPA, or the lack of progress on spectrum reform to generate antagonism among the participants. But as I wrote at the conclusion of last year’s Summit, at CES, the only law that really matters is Moore’s Law. Technology gets faster, smaller, and cheaper, not just predictably but exponentially.

As a result, the contrast between what the regulators talk about and what the innovators do gets more dramatic every year, accentuating the figurative if not the literal distance between the policy Summit and the show floor. I felt as if I had moved between two worlds, one that follows a dainty 19th century wind-up clock and the other that marks time using the Pebble watch, a fully-connected new timepiece funded entirely by Kickstarter.

The lesson for policymakers is sobering, and largely ignored. Humility, caution, and a Hippocratic-like oath of first-do-no-harm are, ironically, the most useful postures regulators can adopt if, as they repeat at ever-shorter intervals, their true goal is to spur innovation, create jobs, and rescue American entrepreneurialism.

The new wisdom is simple, deceptively so. Don’t intervene unless and until it’s clear that there is demonstrable harm to consumers (not competitors), that there’s a remedy for the harm that doesn’t make things, if only unintentionally, worse, and that the next batch of innovations won’t solve the problem more quickly and cheaply.

Or, as they say to new interns in the Emergency Room, “Don’t just do something. Stand there.”

That’s a hard lesson to learn for those of us who think we’re actually surgical policy geniuses, only to find increasingly we’re working with blood-letting and leeches.  And no anesthesia.

In some ways, it’s the opposite of an approach that Adam Thierer calls the Technology Precautionary Principle. Instead of panicking when new technologies raise new (but likely transient) issues, first let Moore’s Law sort them out, unless and until it becomes crystal clear that it can’t. Instead of a hasty response, opt for a delayed one. Call it the Watchful Waiting Principle.

Not as much fun as fuming, ranting, and regulating at the first sign of chaos, of course, but far more helpful.

That, in any case, is the thread of my dispatches from Vegas:

  1. “Telcos Race Toward an All-IP Future,” CNET
  2. “At CES, Companies Large and Small Bash Broken Patent System,” Forbes
  3. “FCC, Stakeholders Align on Communications Policy—For Now,” CNET
  4. “The Five Most Disruptive Technologies at CES 2013,” Forbes
]]>
https://techliberation.com/2013/01/17/toward-a-technology-watchful-waiting-principle/feed/ 6 43462