Julian didn’t like Tom Sydnor’s paper on Lessig either. In particular, he went back and looked up the sections in Code in which Lessig ostensibly expressed sympathy for Communism. Here’s the rest of the story:

We learn that Lessig wrote, in the first edition of his book Code, of his “impulse to sympathize” with those on the left who are “radically skeptical” about using property rights in personal information to protect privacy. We do not learn that the long quotation that follows is Lessig’s summary of an anti-market view of which he declares himself “not convinced.” (Lessig originally endorsed a property right in personal data; he has since altered his view, and now supports treating Web sites’ privacy policies as binding contracts.) Sydnor similarly presents selective quotations from a passage in Code where Lessig describes his impression of life in communist Vietnam as surprisingly free and unregulated in certain respects. Lessig’s point is that despite a formal ideology of state omnipotence, the lack of an effective architecture of control leaves many ordinary Vietnamese relatively unfettered in their day-to-day interactions; institutional structure often determines reality more powerfully than official philosophy. Possibly Lessig is mistaken about modern Vietnamese life, but Sydnor, in what seems like a willful misreading, deploys the anecdote to depict Lessig as a disciple of Ho Chi Minh.

And of course yesterday Mike pointed out that Lessig’s point about property rights and DDT wasn’t as outrageous as Tom seemed to think. These examples strike me as a serious problem. One of the basic obligations of any scholar is to present one’s opponents’ words fairly and in context. If a scholar writes “I’m sympathetic to view X, but ultimately I find the arguments for it unconvincing,” it’s extremely misleading for someone to quote the first half of the sentence without mentioning the second half.

Likewise, Julian suggests that Tom’s summary of Fisher’s proposal leaves something to be desired.

Dennis McCauley of Gamepolitics.com takes on that issue today in a column:

In the United States, the FBI tracks annual statistics on police officer slayings as well as assaults on police officers. I compared these figures to the various release dates for the three major GTA console game releases to date (GTA III, GTA Vice City, GTA San Andreas) and plotted the whole thing on the chart below. It’s a bit like the well-known video games vis-a-vis juvenile crime graph created by Duke Ferris of GameRevolution a few years back, although with a much narrower focus.

The FBI statistics portray a much different picture than that painted by critics like Thompson and Grossman. In the chart, I’ve plotted FBI figures for police officers feloniously killed (blue line) and police officers assaulted (red line, listed in thousands). As can be seen, police officer murders peaked at 70 in 1997 (i.e., four years before GTA III) and again in 2001. GTA III was released in late October that year, so if the game caused that year’s spike, it would have had only two months in which to do so. (Also, the 2001 figures don’t count the 72 officers lost when the World Trade Centers collapsed.) The chart shows that since GTA III was released police killings have been trending downward to a low of 48 in 2006. Although the FBI has not yet posted 2007 numbers, the National Law Enforcement Officers Memorial Fund lists 68 police officers as having been shot to death in 2007. But it’s worth pointing out that while there may have been a spike in police slayings last year, there was no corresponding GTA release. There hasn’t been a new Grand Theft Auto console title issued since San Andreas in October, 2004.
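Just to make the shape of that chart concrete, here’s a rough plotting sketch (mine, not McCauley’s) that uses only the numbers cited in the excerpt above, so the intervening years are left out, and pencils in the three GTA release dates (Vice City’s October 2002 date isn’t in the excerpt, but it’s well documented):

    import matplotlib.pyplot as plt

    # Only the data points actually cited in the excerpt; the full FBI series would fill in the gaps.
    years = [1997, 2001, 2006, 2007]
    officers_killed = [70, 70, 48, 68]  # 2007 figure is the NLEOMF count of officers shot, not the FBI series

    plt.plot(years, officers_killed, "o-", label="Officers feloniously killed (cited years only)")
    for release_date, title in [(2001.8, "GTA III"), (2002.8, "Vice City"), (2004.8, "San Andreas")]:
        plt.axvline(release_date, color="gray", linestyle="--")  # roughly late-October releases
        plt.text(release_date + 0.05, 50, title, rotation=90, fontsize=8)
    plt.xlabel("Year")
    plt.ylabel("Officers feloniously killed")
    plt.legend()
    plt.show()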

I’ve commented more on these issues in my essay on “Why hasn’t violent media turned us into a nation of killers?”

Our friends at the Progress and Freedom Foundation have released a paper by PFF’s new copyright guru about Larry Lessig, Free Culture, and whether libertarians should take them seriously. Since the paper is framed as a response to my recent post on Lessig’s work, I suppose I should offer some thoughts on the subject.

I have to say that I found the paper disappointing. I’ve frequently said I wished more libertarians took Lessig’s ideas about copyright seriously, and so I’m generally happy to see libertarian organizations writing about Lessig’s work, even if they do so critically. But it seems to me that a basic principle of good scholarship is that you start with a good-faith interpretation of your opponent’s position and then proceed to explain its flaws in a fair-minded way. The goal isn’t to give your readers the worst possible impression of your opponent, but to help them better understand the opponent’s arguments even as you refute them. That doesn’t appear to be what Tom did. Rather, he appears to have read through Lessig’s rather substantial body of work (three books and numerous papers) and cherry-picked the words, phrases, sentences, and paragraphs that, when taken in isolation, give the impression that Lessig is (as Tom puts it) a “name-calling demagogue.”

This makes it awfully hard to know where to begin in analyzing Tom’s arguments, such as they are. For example, consider the first paragraph after the introduction:

Disputes about whether Lessig “demonizes” property owners are easily resolved. He does so incessantly. Scholars are supposed to be disinterested, balanced and thoughtful. Lessig is a name-calling demagogue: In just one law-review article, he calls those who fail to agree with him sheep, cows, unimaginative, extreme, stupid, simplistic, blind, uncomprehending, oblivious, pathetic, resigned, unnoticing, unresisting, unquestioning, and confused—“most don’t really get it.”

Now, he does indeed use all of those words in “The Architecture of Innovation.” In some cases, they’re even applied to people he disagrees with. But they’re sprinkled through a 15-page paper, and to judge how demagogic they are, you really have to look at the full context to see who, exactly, he’s referring to with each of these words. To take just the first example—sheep—what Lessig actually says is that he frequently encounters a sheep-like stare from his audience when he asks the questions “what would a free resource give us that controlled resources don’t? What is the value of avoiding systems of control?” He’s clearly not calling everyone who disagrees with him sheep; he’s making a point—valid or not—about people’s failure to understand a set of questions that he thinks are important.

Headline of the Day


Hans Reiser is fscked: jury delivers guilty verdict

Ars certainly has a history of running edgy headlines, but this takes things to a new level.

Update: As PJ points out, fsck is a Unix file system utility, and Reiser did work on Linux file systems.

What a delightful chapter title in Adam Shostack and Andrew Stewart’s new book, The New School of Information Security. Adam is a guy I’ve known for a lot of years now – somehow, he always seems to pop up in the places I go, both physically (at conferences and such) and intellectually. He blogs at Emergent Chaos and maintains a list of his interesting papers and presentations on his personal homepage.

Adam and his co-author have produced a readable, compact tour of the information security field as it stands today – or perhaps as it lies in its crib. What we know intuitively the authors bring forward thoughtfully in their analysis of the information security industry: it is struggling to keep up with the defects in online communication, data storage, and business processes.

Shostack and Stewart helpfully review the stable of plagues on computing, communication, and remote commerce: spam, phishing, viruses, identity theft, and such. Likewise, they introduce the cast of characters in the security field, all of whom seem to be feeling their way along in the dark together.

Why are the lights off? Lack of data, they argue. Most information security decisions are taken in the absence of good information. The authors perceptively describe the substitutes for information, like following trends, clinging to established brands, or chasing after studies produced by or for security vendors.

The authors revel in the breach data that has been made available to them thanks to disclosure laws like California’s SB 1386. A libertarian purist must quibble with mandated disclosure when common law can drive consumer protection more elegantly. But good data is good data, and the happenstance of its availability in the breach area is welcome.

In the most delightful chapter in the book (I’ve used it as the title of this post), Shostack and Stewart go through some of the most interesting problems in information security. Technical problems are what they are; it’s economics, sociology, psychology, and the like that will actually frame the solutions to information security problems.

In subsequent chapters, Shostack and Stewart examine security spending and advocate for the “New School” approach to security. I would summarize theirs as a call for rigor, which is lacking today. It’s ironic that the world of information lacks for data about its own workings, and thus lacks sound decision-making methods, but there you go.

The book is a little heavy on “New School” talk. If the name doesn’t stick, Shostack and Stewart risk looking like they failed to start a trend. But it’s a trend that must take hold if information security is going to be a sound discipline and industry. Having read The New School of Information Security, I’m more aware than ever that info sec is very much in its infancy. The nurturing Shostack and Stewart recommend will help it grow.

Bruce Owen, America’s preeminent media economist–with apologies to Harold Vogel, who at least deserves an honorable mention–has written another splendid piece for Cato’s Regulation magazine, this one entitled, “The Temptation of Media Regulation.”

This latest essay deals primarily with the many fallacies surrounding so-called “a la carte” regulation of the video marketplace, and I encourage you to read it to see Owen’s powerful refutation of the twisted logic behind that regulatory crusade. But I wanted to highlight a different point that Bruce makes right up front in his essay because it is something I am always stressing in my work too.

In some of my past work on free speech and media marketplace regulation, I have argued that there is very little difference between Republicans and Democrats when it comes to these issues. They are birds of a feather who often work closely together to regulate speech and media. Whether it is broadcast ‘indecency’ controls; proposals to extend those controls to cable & satellite TV; campaign finance laws; efforts to limit or roll back ownership regulations; or even must-carry and a la carte, the story is always the same: It’s one big bipartisan regulatory love fest. [And the same goes for regulation of the Internet, social networking sites, and video games.]

Owen explains why that is the case.

The New York Times casts its spotlight on the “Censored 11,” 11 racially-charged cartoons from the middle of the last century that have been unavailable to the public for decades. But despite repeated attempts to take them down, they keep popping up online. You can see some of them here, and the most notorious is “Coal Black and the De Sebben Dwarfs,” which as you can imagine from the title is pretty offensive:

Preventing people from watching them seems pretty silly to me. I wouldn’t want them in heavy rotation on Cartoon Network, but people are entitled to know about their history, and I doubt letting people see them will cause anybody to be more racist. But this creates a dilemma for Disney and Warner Brothers. If they release them in an official capacity, they’re opening themselves up to a lot of negative publicity and highlighting a part of their past they’re probably not too proud of. This wouldn’t be a problem if we didn’t grant copyrights for such absurd lengths of time. If we had the rules on the books at the time most of these videos were made–28 years, renewable once–then all films made before 1952 would now be in the public domain, which would encompass the vast majority of these cartoons. That would allow the studios to officially disavow any support for them while still allowing people to view them.

It’s an interesting question whether putting these films on YouTube could constitute fair use. The fact that the entire work is being shown obviously cuts strongly against it with regard to the third factor. However, the second and fourth factors would cut strongly in favor of fair use—the work is of particular historical importance, and there is no commercial market for it. As to the first factor, one could argue that the cultural climate in 2008 is so different from the climate in 1935 that the act of showing the cartoon in 2008 has a fundamentally different “purpose and character” than it did when first shown, thereby rendering the simple act of showing the video, at least on a non-profit basis, transformative.

Update: OK, this one is even worse.

In less than 36 hours, one of the most anticipated—and most demonized—games in years will hit the shelves. Grand Theft Auto IV, the “true” successor to the groundbreaking Grand Theft Auto III, has been the focus of intense criticism ever since being announced. But while GTA IV will undoubtedly be filled with extreme violence, it may also be a masterpiece of human creativity.

On Friday, IGN reviewed GTA IV, giving it a highly elusive perfect score. Calling it “masterful” and an “American dream,” IGN says GTA IV is the greatest game in nearly a decade. Since the press embargo ended this morning, many other reviewers are reaching similar conclusions.

No real surprises there. What’s surprising, however, is that unlike its somewhat one-dimensional predecessors, GTA IV offers unprecedented character depth along with an “Oscar-caliber” storyline. And it also depicts the ugly downside of crime in the same vein as epic films like Goodfellas and Scarface, retelling the classic story of a struggling immigrant coming to America in search of fortune, haunted by the experiences of a past life.

Naturally, Grand Theft Auto’s release has re-ignited public debate over how games affect kids and whether new laws are needed to protect children from the gratuitous violence found in many video games. GTA has been a favorite target of politicians for the past eight years, and the usual suspects like Jack Thompson and Tim Winter have predictably spoken out against GTA IV. But parental controls are more robust than ever, as Adam has documented, and some have even suggested that kids should be playing Grand Theft Auto. Despite the recent explosion in hyper-realistic violent games, violent crime rates have been dropping across the board. Maybe games like GTA are just another harmless outlet for kids to express violent impulses, much like playing cops and robbers.

As game budgets have swelled and public interest in gaming has expanded, more games than ever transcend the stereotype of gaming as a juvenile pursuit with little artistic merit, reminding us that games can be artistic expressions on par with books, movies, or songs. Critics whose gaming experience consists of having played Pac-Man in an arcade may belittle gaming as a trivial pastime, but anybody who has played Bioshock or Gears of War or Oblivion knows better. Games can critique the harsh realities of modern society and offer insight into the nature of the human soul in ways that less interactive forms of media cannot. For that reason, games deserve both critical admiration and legal protection.

Of course, GTA IV is no Mona Lisa. But the way things are going, it’s entirely possible that the next timeless masterpiece of artistic expression will be created not with a brush or pen, but with lines of code.

Every once in a while, a Slashdot post wanders out of the realm of the science/IT areas where the editors have the most expertise, and the results are often underwhelming. For example:

“The bill to ban genetic discrimination in employment or insurance coverage is moving forward. Is this the death knell of private insurance? I think private health insurance is pretty much incompatible with genetic testing (GT) for disease predisposition, if said testing turns out to be of any use whatsoever. The great strength of GT is that it will (as technology improves) take a lot of the uncertainty out of disease prediction. But that uncertainty is what insurance is based on. If discrimination is allowed, the person with the bad genes is out of luck because no one would insure them. However, if that isn’t allowed, the companies are in trouble. If I know I’m likely to get a certain condition, I’ll stock up on ‘insurance’ for it. The only solution I can see is single-payer universal coverage along the lines of the Canadian model, where everyone pays, and no one (insurer or patient) can game the system based on advance knowledge of the outcomes. Any other ideas? This bill has been in the works for a while.”

At the risk of committing the same sin of opining outside of my area of expertise, this seems to be rather misguided. I should give the guy credit for understanding the basic point that insurance is about managing risk. If you’re 100 percent sure you’ll need a heart transplant in the near future, and you buy a policy that will pay for it, that’s not an “insurance policy.” It’s just a health care plan. An insurance policy is a tool for managing the risks of events that you don’t know will definitely happen.

Unfortunately, this anonymous reader takes this kernel of truth and uses it to draw sweeping conclusions that just don’t follow from it, because genetic tests hardly ever tell you precisely what diseases you’ll get and when you’ll get them. Rather, they tell you about predispositions and tendencies. They say “your chance of getting heart disease is twice as high as normal” or “you’re likely to get Parkinson’s disease sometime in your 40s or 50s.”

If it were true that anyone with an elevated risk of health problems would be ineligible for health insurance, then you’d also expect that men under 30 would be ineligible for auto insurance. But of course, that’s not what happens. Insurance companies take the elevated risk into account in setting premiums. In a world with widespread genetic screening, the price of your insurance would take into account your genetic predispositions. Those who are blessed with good genes would pay lower premiums, while those with bad genes would pay higher premiums.
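To put some toy numbers on that, here’s a quick sketch (purely illustrative, with made-up figures) of how an insurer might fold a genetic risk multiplier into a premium, the same way age and driving record already get folded into auto premiums:

    # Toy premium model, illustrative numbers only: premium = expected annual claims cost
    # for an average applicant, scaled by the applicant's relative risk, plus a loading factor.

    def risk_adjusted_premium(base_expected_cost, relative_risk, loading=1.15):
        """Price a policy off expected cost, scaled by relative risk from (say) a genetic screen."""
        return base_expected_cost * relative_risk * loading

    average_cost = 4000.0  # hypothetical expected annual claims for an average applicant
    print(risk_adjusted_premium(average_cost, 1.0))  # average genes: 4600.0
    print(risk_adjusted_premium(average_cost, 0.7))  # good genes: 3220.0
    print(risk_adjusted_premium(average_cost, 2.0))  # elevated risk: 9200.0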

Now, reasonable people can object that this is unfair. And there will likely be a small minority of individuals whose genes are so bad that they’ll be unable to pay the premium required to properly compensate the insurance company for the risk they’re taking. But if you’re inclined to have the state do something about this, it doesn’t by any means follow that the state needs to run the entire insurance/payment system. Rather, the state can take a variety of actions targeted at the losers of the genetic lottery while leaving the market free to work for the majority of individuals with average or below-average risks. This can take several forms. One would be premium subsidies at the front end: say, the state picks up a percentage of the premium for people with above-average premiums. Another would be to directly subsidize treatments for the most expensive-to-treat diseases, which would have the effect of reducing premiums for people with those diseases. Or you can (although I think we shouldn’t) continue in the direction we’ve been going, of imposing all sorts of implicit cross-subsidies in the health care market itself (such as the tax preferences for employer-provided group policies and rules requiring hospitals to treat patients regardless of their ability to pay).
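And to see how the first of those options could work without displacing the market, here’s an equally hypothetical sketch of a front-end premium subsidy, where the state picks up a share of whatever portion of a risk-rated premium exceeds the average:

    # Hypothetical front-end subsidy: the state covers a share of the premium above the average.

    def subsidized_premium(premium, average_premium, subsidy_share=0.5):
        """Return what the policyholder pays after the state subsidizes part of the excess."""
        excess = max(0.0, premium - average_premium)
        return premium - subsidy_share * excess

    print(subsidized_premium(9200.0, 4600.0))  # high-risk applicant pays 6900.0; the state picks up 2300.0
    print(subsidized_premium(3220.0, 4600.0))  # below-average risk: no subsidy, pays 3220.0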

This isn’t a health care blog, and I’m not a health care expert, so I won’t venture an opinion on which of those options, if any, is most desirable. But it’s a non-sequitur to assert that because we’ll be able to assess risk more accurately, private insurance will no longer be viable. The insurance industry is extremely good at pricing risk in other parts of the economy, and it would do so in health care too if the government didn’t devote so much effort to preventing it from doing so. None of this adds up to a strong argument for a centrally planned government health care system.

Megan McArdle suggests that the Patriot Act, while bad, is hardly a harbinger of a police state. She points out, correctly, that American history is full of violations of civil liberties, some of them—the suspension of habeas corpus during the Civil War, the internment of Japanese Americans during World War II—probably worse than anything the Bush administration has done. And if we’re looking at things from a narrowly legalistic perspective, it’s plainly not the case that we’re in a uniquely bad place. The courts were far less protective of civil liberties for most of the 20th century than they are today.

However, I think this telling of things misses some important context. Most importantly, it misses the point that the federal government has changed a great deal over the last century. In 1908, the United States was a balkanized country with a small, distant federal government. Few people had telephones and (obviously) nobody had a cell phone or an Internet connection. The federal government, for its part, didn’t have an NSA, a CIA, or an FBI, and even if they had existed they wouldn’t have had access to the kind of transportation, communications, and computational capacities that they have today. All of which is to say, if the federal government circa 1908 had wanted to build a panopticon police state, they wouldn’t have been able to do it, because they wouldn’t have had the technology or the manpower to do so.

So the first 150 years of American history just isn’t relevant when we’re talking about the rise of a modern police state. There’s a reason Russia and Germany got totalitarian police states in the middle of the 20th century; this was the first time modern transportation and communications technologies gave governments the ability to exert that kind of control. And while we missed the rise of totalitarianism, the post-World War II period was an extremely bad one from a civil liberties perspective. J. Edgar Hoover was an extremely bad man who wielded a great deal of power with essentially no accountability. The federal government spied on activists, journalists, politicians, and celebrities, and those with access to these surveillance tools used them to blackmail, manipulate, and ruin those they didn’t like.

Luckily, Watergate happened, and Congress passed a series of reforms that dramatically enhanced judicial and Congressional oversight of the executive branch. Civil liberties protections were in better shape in the early 1980s than they had been at any point since the early 20th century. But since then, we have seen a gradual erosion of those safeguards. I just put together a presentation on the subject, and it’s a long list: CALEA, roving wiretaps, national security letters, successors to Carnivore, the Patriot Act, warrantless wiretapping, and probably other programs we don’t know about.

If this process continues unchecked, we will reach a point where the NSA has the technological capability to track a breathtaking amount of information about every American. And if the wrong people get hold of that infrastructure, the potential for abuse will be enormous. If the surveillance state is allowed to grow for another decade or two, we likely will reach a point where civil liberties are in worse shape than at any time in American history.

Will that happen? I’m optimistic. I think we’ll be able to at least slow the process down and impose some additional oversight. But if the worst doesn’t happen, it will be precisely because thousands of Americans were alarmed enough about these developments to fight back against them. I would far rather overestimate the threat and be proven wrong than underestimate it and wake up one morning in a world where the 21st century’s J. Edgar Hoover has the power to blackmail anyone in America.