September 2010

Caren Myers Morrison, assistant professor at Georgia State University College of Law, discusses how internet tools are affecting our jury system, which she details in her new paper, Jury 2.0. She cites examples of jurors using the internet to seek information about cases, Facebook-friending witnesses and defendants, and even blogging about trials on which they are deliberating. She also expounds upon jury tradition in America, the evolving definition of impartiality, jury secrecy and integrity, the ramifications of jurors’ internet activities, and the future of the jury — Jury 2.0.

Related Readings

Listen to other episodes and remember to subscribe to the podcast using RSS or iTunes.

J. C. R. Licklider (1915-1990) was early to expound on the potential of computing. His papers “Man-Computer Symbiosis” and “The Computer as a Communications Device” (both collected here) foresaw many of the uses we make of computers and the Internet today.

In Where Wizards Stay Up Late: The Origins of the Internet, Katie Hafner and Matthew Lyon write about “Lick’s” vision for computing’s influence on society:

In a McLuhanesque view of the power of electronic media, Lick saw a future in which, thanks in large part to the reach of computers, most citizens would be “informed about, and interested in, and involved in, the process of government.” He imagined what he called a “home computer console” and television sets linked together in a massive network. “The political process,” he wrote, “would essentially be a giant teleconference, and a campaign would be a months-long series of communications among candidates, propagandists, commentators, political action groups, and voters.”

My project WashingtonWatch.com is one of several efforts, however rudimentary, seeking to realize this vision. We’re working on it, Lick.

Up until I began doing my reading for this fall’s Criminal Procedure: Investigation course, I largely bought the heroic Warren Court story of privacy and the Fourth Amendment.

The story is simple: The Supreme Court, concerned only with helping businesses through decisions like Lochner, had left people unprotected from warrantless searches and seizures. In decisions like Olmstead v. United States (holding that a warrantless wiretap did not violate the Fourth Amendment), the Court threw privacy under the bus. But, as with the First Amendment, Brandeis and Holmes dissented, presaging the arrival of the glorious Warren Court, which overturned Olmstead in Katz v. United States.

Though, unlike many FedSocers, I love the Warren Court and its expansion and constitutionalization of personal liberties both procedural and substantive, the heroic story just isn’t quite right.

Continue reading →

Washington Times reporter Shaun Waterman has a characteristically excellent article out today about U.S. cybersecurity authorities failing to secure their own systems.

According to a new report by government auditors, systems at the U.S. Computer Emergency Readiness Team (US-CERT), part of the Department of Homeland Security, were not maintained with updates and security patches in a timely fashion and as a result were riddled with vulnerabilities that hackers could exploit.

Time and again, people look to government intervention based on what they imagine government might do under ideal conditions. Real conditions produce far weaker results.

We’re better off distributing the problem of data, network, and computer security among all the self-interested actors in the country—fallible as they are. We should not abandon the problem to a central authority whose failure fails us all.

[Over at the Concurring Opinions blog, I’ve posted my latest installment in the excellent online symposium they’ve been running on the themes set forth in Jonathan Zittrain’s Future of the Internet and How to Stop It. My previous post is here. My latest, “On Defining Generativity, Openness, and Code Failure,” is pasted in its entirety below.]

_______

I’ve really enjoyed the back-and-forth in this symposium about the many issues raised in Jonathan Zittrain’s Future of the Net, and I appreciate that several of the contributors have been willing to address some of my concerns and criticisms in a serious way. I recognize I’m a bit of a skunk at the garden party here, so I really do appreciate being invited by the folks at Concurring Opinions to play a part in this. I don’t have much more to add beyond my previous essay, but I wanted to stress a few points and offer a challenge to those scholars and students who are currently researching these interesting issues.

As I noted in my earlier contribution, I’m very much hung up on this whole “open vs. closed” and “generative vs. sterile/tethered” definitional question. Much of the discussion about these concepts takes place at such a high level of abstraction that I get very frustrated and want instead to shift the discussion to real-world applications of these concepts, because when we do, I believe we find that things are not so clear-cut. Again, “open” devices and platforms are rarely perfectly open, and “closed” systems aren’t usually completely clamped down. The same goes for the “generative vs. sterile/tethered” dichotomy. Continue reading →

Individuals, shadowy criminal organizations, and nation states all now have the capacity to devastate modern societies through computer attacks.

It’s simply not true.

The author must not know the meaning of “devastate,” which is, according to the handiest Web dictionary, “to lay waste; render desolate.”

There is no such capacity—anywhere—to do such damage through computer attacks, and the capacity of some actors to produce some inconvenience, to cause some economic harm, and perhaps to cause physical damage or injury—none of that justifies such a stupidly phrased sentence.

It’s the first line of the abstract of “An e-SOS for Cyberspace,” by Temple University law professor Duncan Hollis. Given the overblown premise, the paper almost certainly calls for overblown reactions.

This concludes my review of the first sentence of another fear-mongering cybersecurity paper.

After a quiet August recess in Washington, DC, it’s time to refocus our efforts on public policies that impact online commerce. And today we consider not the good, and not merely the bad, but the awful – iAWFUL.

NetChoice unveiled an updated version of our Internet Advocates’ Watchlist for Ugly Laws (iAWFUL), which tracks the ten pieces of state and federal legislation that pose the greatest threat to the Internet and e-commerce. Our efforts so far this year have helped to remove two of the worst offenders from the February 2010 iAWFUL list: a federal bill giving the Federal Trade Commission more power to make new rules for online activity without Congressional guidance, and a Maine law restricting online marketing to teenagers.

In our second update for 2010, NetChoice identifies new legislation that has the potential to stall Internet commerce. Our top two are Congressional bills:

Number 1:  Federal online privacy efforts such as Rep. Rush’s “Best Practices Act” (HR 5777) and the staff discussion draft from Boucher / Stearns.

Number 2:  The expansion of Internet taxation through HR 5660, the “Streamlined Sales Tax Bill.”

This iAWFUL list targets federal privacy proposals that would curtail the continued development of the ad-supported content and services that consumers have come to expect from the Internet. No one’s saying that privacy isn’t important or that we shouldn’t be concerned with our personal information. However, one federal privacy proposal would regulate even small websites that add just 100 users a week, though they collect no personally identifiable information beyond a nickname and password. Continue reading →

So I’m an extremist

After our podcast last week, Tim Lee wrote a blog post expanding on our conversation about spectrum policy. I thought I’d take a little space here to respond.

Although we will probably continue to disagree on empirical questions, I think that philosophically there is no daylight between Tim and me. He succinctly expresses our shared view when he writes,

The question advocates of free markets (“extreme” or otherwise) need to ask is not: which property rights should we create? Rather, we need to ask: which set of regulations prevents congestion at the lowest cost in liberty? You want the set of rules that maximizes individual freedom; rules that are clear and predictable and give government officials as little opportunity as possible to make mischief.

Tim and I simply come to different conclusions in our cost-benefit calculation when we look at the competing sets of rules for spectrum.

As Tim points out, radio spectrum is not like sunlight. We can’t use as much of it as we want without congestion, and that precludes an open access regime. The question, then, is this: do we create a regime where private actors own the spectrum and make decisions about how best to utilize it? (Government’s only function in this scenario would be to enforce contracts and property rights.) Or do we allow the government, in essence, to own the spectrum and determine which specific uses will be permitted? (Government in this scenario decides the rules that govern a commons, which is not a trivial matter, since it will allow some uses and preclude others.)

So, which regime presents “a lower cost in liberty”? Which one is more “clear and predictable”? Which one gives “government officials as little opportunity as possible to make mischief”? I tend to think the former fits the bill much better than the latter. Tim thinks we can have a little bit of both, and that both can be equally efficient. He seems to see little difference between government-as-court and government-as-regulator. I’m less sanguine about government’s ability to set rules for spectrum that get you to the same or better outcome than a property rights regime.

Continue reading →

Don’t miss the current issue of Cato Unbound, which explores the ideas in author James C. Scott’s essential book, Seeing Like a State. Scott’s opening essay, “The Trouble With the View From Above,” captures many of the ideas from the book.

I stumbled across Scott while researching my book on identification policy, Identity Crisis. As Scott observes, naming systems for people have been altered over time from vernacular to formal, the latter serving the needs of governments and large institutions. The next step in the process is numbering (already well underway with the Social Security number), followed by full-fledged national ID and possibly world ID systems. Such systems would be used to peg humans into their places in governmental, economic, and social machinery, obviously at a high cost to liberty and social mobility.

Twice in the paragraph above I used the passive voice to hide the actor. It was governments, of course, that pushed formal naming systems, but both governments and corporations will use our increasingly formalized and machine-processable naming systems to assign people their roles. Scott is far from a libertarian battler against government power, and he specifically disclaims having Hayekian aims in his book. This makes the book all the more powerful, and it opens the door to interesting pathways of thought: parallels between corporate environmental destruction and government intervention in economic life, for example.

I’m keen to see the comments that follow Scott’s essay, from George Mason University economist Don Boudreaux; Brad DeLong of UC Berkeley; and TLF alum Timothy B. Lee, a Cato adjunct and scholar at Princeton’s Center for Information Technology Policy. Cato Unbound. Go.

At yesterday’s Gov2.0 Summit conference, “rogue archivist” Carl Malamud gave a great speech about what’s wrong with government IT and what should be done about it.

Continue reading →