April 2008

What a delightful chapter title in Adam Shostack and Andrew Stewart’s new book, The New School of Information Security. Adam is a guy I’ve known for a lot of years now – somehow, he always seems to pop up in the places I go, both physically (at conferences and such) and intellectually. He blogs at Emergent Chaos and maintains a list of his interesting papers and presentations on his personal homepage.

Adam and his co-author have produced a readable, compact tour of the information security field as it stands today – or perhaps as it lies in its crib. What we know intuitively the authors bring forward thoughtfully in their analysis of the information security industry: it is struggling to keep up with the defects in online communication, data storage, and business processes.

Shostack and Stewart helpfully review the stable of plagues on computing, communication, and remote commerce: spam, phishing, viruses, identity theft, and such. Likewise, they introduce the cast of characters in the security field, all of whom seem to be feeling along in the dark together.

Why are the lights off? Lack of data, they argue. Most information security decisions are taken in the absence of good information. The authors perceptively describe the substitutes for information, like following trends, clinging to established brands, or chasing after studies produced by or for security vendors.

The authors revel in the breach data that has been made available to them thanks to disclosure laws like California’s SB 1386. A libertarian purist must quibble with mandated disclosure when common law can drive consumer protection more elegantly. But good data is good data, and the happenstance of its availability in the breach area is welcome.

In the most delightful chapter in the book (I’ve used it as the title of this post), Shostack and Stewart go through some of the most interesting problems in information security. Technical problems are what they are. Economics, sociology, psychology, and the like are the disciplines that will actually frame the solutions for information security problems.

In subsequent chapters, Shostack and Stewart examine security spending and advocate for the “New School” approach to security. I would summarize theirs as a call for rigor, which is lacking today. It’s ironic that the world of information lacks for data about its own workings, and thus lacks sound decision-making methods, but there you go.

The book is a little heavy on “New School” talk. If the name doesn’t stick, Shostack and Stewart risk looking like they failed to start a trend. But it’s a trend that must take hold if information security is going to be a sound discipline and industry. I’m better aware for reading The New School of Information Security that info sec is very much in its infancy. The nurturing Shostack and Stewart recommend will help it grow.

Bruce Owen, America’s preeminent media economist–with apologies to Harold Vogel, who at least deserves an honorable mention–has written another splendid piece for Cato’s Regulation magazine, this one entitled, “The Temptation of Media Regulation.”

This latest essay deals primarily with the many fallacies surrounding so-called “a la carte” regulation of the video marketplace, and I encourage you to read it to see Owen’s powerful refutation of the twisted logic behind that regulatory crusade. But I wanted to highlight a different point that Bruce makes right up front in his essay because it is something I am always stressing in my work too.

In some of my past work on free speech and media marketplace regulation, I have argued that there is very little difference between Republicans and Democrats when it comes to these issues. They are birds of a feather who often work closely together to regulate speech and media. Whether it is broadcast ‘indecency’ controls; proposals to extend those controls to cable & satellite TV; campaign finance laws; efforts to limit or roll back ownership regulations; or even must carry and a la carte, the story is always the same: It’s one big bipartisan regulatory love fest. [And the same goes for regulation of the Internet, social networking sites, and video games.]

Owen explains why that is the case in the essay itself.

The New York Times casts its spotlight on the “Censored 11,” 11 racially-charged cartoons from the middle of the last century that have been unavailable to the public for decades. But despite repeated attempts to take them down, they keep popping up online. You can see some of them here, and the most notorious is “Coal Black and de Sebben Dwarfs,” which as you can imagine from the title is pretty offensive:

Preventing people from watching them seems pretty silly to me. I wouldn’t want them on heavy rotation on the Cartoon Network, but people are entitled to know about their history, and I doubt letting people see them will cause anybody to be more racist. But this creates a dilemma for Disney and Warner Brothers. If they release them in an official capacity, they’re opening themselves up to a lot of negative publicity and highlighting a part of their past they’re probably not too proud of. This wouldn’t be a problem if we didn’t grant copyrights for such absurd lengths of time. If we had the rules on the books at the time most of these films were made–28 years, renewable once–then all films made before 1952 would now be in the public domain, which would encompass the vast majority of these cartoons. That would allow the studios to officially disavow any support for them while allowing people to view them.

It’s an interesting question whether putting these films on YouTube could constitute fair use. The fact that the entire work is being shown obviously strongly cuts against it with regard to the third factor. However, the second and fourth factors would cut strongly in favor of fair use—there is no commercial market, and the work is of particular historical importance. As to the first factor, one could argue that the cultural climate in 2008 is so different from the climate in 1935 that the act of showing it in 2008 has a fundamentally different “purpose and character” than when it was first shown, thereby rendering the simple act of showing the video, at least on a non-profit basis, transformative.

Update: OK, this one is even worse.

In less than 36 hours, one of the most anticipated—and most demonized—games in years will hit the shelves. Grand Theft Auto IV, the “true” successor to the groundbreaking Grand Theft Auto III, has been the focus of intense criticism ever since being announced. But while GTA IV will undoubtedly be filled with extreme violence, it may also be a masterpiece of human creativity.

On Friday, IGN reviewed GTA IV, giving it a highly elusive perfect score. Calling it “masterful” and an “American dream,” IGN says GTA IV is the greatest game in nearly a decade. Since the press embargo ended this morning, many other reviewers are reaching similar conclusions.

No real surprises there. What’s surprising, however, is that unlike its somewhat one-dimensional predecessors, GTA IV offers unprecedented character depth along with an “Oscar-caliber” storyline. And it also depicts the ugly downside of crime in the same vein as epic films like Goodfellas and Scarface, retelling the classic story of a struggling immigrant coming to America in search of fortune, haunted by the experiences of a past life.

Naturally, Grand Theft Auto’s release has re-ignited public debate over how games affect kids and whether new laws are needed to protect children from the gratuitous violence found in many video games. GTA has been a favorite target of politicians for the past eight years, and the usual suspects like Jack Thompson and Tim Winter have predictably spoken out against GTA IV. But parental controls are more robust than ever, as Adam has documented, and some have even suggested that kids should be playing Grand Theft Auto. Despite the recent explosion in hyper-realistic violent games, violent crime rates have been dropping across the board. Maybe games like GTA are just another harmless outlet for kids to express violent behavior, much like playing cops and robbers.

As game budgets have swelled and public interest in gaming has expanded, more games than ever transcend the stereotype of gaming as a juvenile pursuit with little artistic merit, reminding us that games can be artistic expressions on par with books, movies, or songs. Critics whose gaming experience consists of having played Pac-Man in an arcade may belittle gaming as a trivial pastime, but anybody who has played Bioshock or Gears of War or Oblivion knows better. Games can critique the harsh realities of modern society and offer insight into the nature of the human soul in ways that less interactive forms of media cannot. Accordingly, games deserve both critical admiration and legal protection.

Of course, GTA IV is no Mona Lisa. But the way things are going, it’s entirely possible that the next timeless masterpiece of artistic expression will be created not with a brush or pen, but with lines of code.

Every once in a while, a Slashdot post wanders out of the realm of the science/IT areas where the editors have the most expertise, and the results are often underwhelming. For example:

“The bill to ban genetic discrimination in employment or insurance coverage is moving forward. Is this the death knell of private insurance? I think private health insurance is pretty much incompatible with genetic testing (GT) for disease predisposition, if said testing turns out to be of any use whatsoever. The great strength of GT is that it will (as technology improves) take a lot of the uncertainty out of disease prediction. But that uncertainty is what insurance is based on. If discrimination is allowed, the person with the bad genes is out of luck because no one would insure them. However, if that isn’t allowed, the companies are in trouble. If I know I’m likely to get a certain condition, I’ll stock up on ‘insurance’ for it. The only solution I can see is single-payer universal coverage along the lines of the Canadian model, where everyone pays, and no one (insurer or patient) can game the system based on advance knowledge of the outcomes. Any other ideas? This bill has been in the works for a while.”

At the risk of committing the same sin of opining outside of my area of expertise, this seems to be rather misguided. I should give the guy credit for understanding the basic point that insurance is about managing risk. If you’re 100 percent sure you’ll need a heart transplant in the near future, and you buy a policy that will pay for it, that’s not an “insurance policy.” It’s just a health care plan. An insurance policy is a tool for managing the risks of events that you don’t know will definitely happen.

Unfortunately, this anonymous reader takes this kernel of truth and uses it to draw sweeping conclusions that just don’t follow from it. Genetic tests hardly ever tell you precisely what diseases you’ll get and when you’ll get them. Rather, they tell you about dispositions and tendencies. They say “your chance of getting heart disease is twice as high as normal” or “you’re likely to get Parkinson’s disease sometime in your 40s or 50s.”

If it were true that anyone with an elevated risk of health problems would be ineligible for health insurance, then you’d also expect that men under 30 would be ineligible for auto insurance. But of course, that’s not what happens. Insurance companies take the elevated risk into account in setting premiums. In a world with widespread genetic screening, the price of your insurance would take into account your genetic predispositions. Those who are blessed with good genes would pay lower premiums, while those with bad genes would pay higher premiums.
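The arithmetic behind that pricing logic is worth making concrete. Here is a minimal sketch of actuarially fair premium pricing; the claim probabilities, claim cost, and loading factor are all hypothetical numbers chosen purely for illustration:

```python
# Sketch of risk-based premium pricing. All figures are hypothetical.

def fair_premium(annual_claim_prob: float, expected_claim_cost: float,
                 loading: float = 0.15) -> float:
    """Expected payout plus a loading for overhead and profit."""
    return annual_claim_prob * expected_claim_cost * (1 + loading)

# A policyholder with average risk: a 2% annual chance of a $50,000 claim.
average = fair_premium(0.02, 50_000)

# One whose (hypothetical) genetic screening shows double the predisposition.
elevated = fair_premium(0.04, 50_000)
```

Doubling the claim probability doubles the actuarially fair premium; nothing in the arithmetic forces the insurer to deny coverage outright, which is the point of the auto-insurance analogy.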

Now, reasonable people can object that this is unfair. And there will likely be a small minority of individuals whose genes are so bad that they’ll be unable to pay the premium required to properly compensate the insurance company for the risk they’re taking. But if you’re inclined to have the state do something about this, it doesn’t by any means follow that the state needs to run the entire insurance/payment system. Rather, the state can take a variety of actions targeted at the losers of the genetic lottery while leaving the market free to work for the majority of individuals with average or below-average risks. This can take several forms. One would be premium subsidies at the front end: say, the state picks up a percentage of the premium for people with above-average premiums. Another would be to directly subsidize treatments for the most expensive-to-treat diseases, which would have the effect of reducing premiums for people with those diseases. Or you can (although I think we shouldn’t) continue in the direction we’ve been going, of imposing all sorts of implicit cross-subsidies in the health care market itself (such as the tax preferences for employer-provided group policies and rules requiring hospitals to treat patients regardless of their ability to pay).

This isn’t a health care blog, and I’m not a health care expert, so I won’t venture an opinion on which of those options, if any, is most desirable. But it’s a non-sequitur to assert that because we’ll be able to assess risk more accurately, private insurance will no longer be viable. The insurance industry is extremely good at pricing risk in other parts of the economy, and it would do the same in health care if the government didn’t expend so much effort preventing it from doing so. None of this adds up to a strong argument for a centrally planned government health care system.

Megan McArdle suggests that the Patriot Act, while bad, is hardly a harbinger of a police state. She points out, correctly, that American history is full of violations of civil liberties, some of them—the suspension of habeas corpus during the Civil War, the internment of Japanese Americans during World War II—probably worse than anything the Bush administration has done. And if we’re looking at things from a narrowly legalistic perspective, it’s plainly not the case that we’re in a uniquely bad place. The courts were far less protective of civil liberties for most of the 20th century than they are today.

However, I think this telling of things misses some important context. Most importantly, it misses the point that the federal government has changed a great deal over the last century. In 1908, the United States was a balkanized country with a small, distant federal government. Few people had telephones and (obviously) nobody had a cell phone or an Internet connection. The federal government, for its part, didn’t have an NSA, a CIA, or an FBI, and even if they had existed they wouldn’t have had access to the kind of transportation, communications, and computational capacities that they have today. All of which is to say, if the federal government circa 1908 had wanted to build a panopticon police state, they wouldn’t have been able to do it, because they wouldn’t have had the technology or the manpower to do so.

So the first 150 years of American history just isn’t relevant when we’re talking about the rise of a modern police state. There’s a reason Russia and Germany got totalitarian police states in the middle of the 20th century; this was the first time modern transportation and communications technologies gave governments the ability to exert that kind of control. And while we missed the rise of totalitarianism, the post-World War II period was an extremely bad one from a civil liberties perspective. J. Edgar Hoover was an extremely bad man who wielded a great deal of power with essentially no accountability. The federal government spied on activists, journalists, politicians, and celebrities, and those with access to these surveillance tools used them to blackmail, manipulate, and ruin those they didn’t like.

Luckily, Watergate happened, and Congress passed a series of reforms that dramatically enhanced judicial and congressional oversight of the executive branch. Things were better in the early 1980s than they had been at any point since the early 20th century. Since then, we have seen a gradual erosion of those safeguards. I just put together a presentation on the subject, and it’s a long list: CALEA, roving wiretaps, national security letters, Carnivore and its successors, the Patriot Act, warrantless wiretapping, and probably other programs we don’t know about.

If this process continues unchecked, we will reach a point where the NSA has the technological capability to track a breathtaking amount of information about every American. And if the wrong people get ahold of this infrastructure, they can cause a lot of big problems. If the surveillance state is allowed to grow for another decade or two, we likely will reach a point where civil liberties are in the worst shape they’ve been in American history.

Will that happen? I’m optimistic that it won’t. I think we’ll be able to at least slow the process down and impose some additional oversight. But if the police state doesn’t arrive, it will be precisely because thousands of Americans were alarmed enough about these developments to fight back against them. I would far rather overestimate the threat and be proven wrong than underestimate it and wake up one morning in a world where the 21st century’s J. Edgar Hoover has the power to blackmail anyone in America.

This is an absolutely devastating review of Ubuntu:

In recent years Linux has suffered a major set-back following the shock revelations from SCO Group, whose software had been stolen wholesale and incorporated into illegal distributions of Linux. For the past five years, the overwhelming effort of Linux developers has gone into removing SCO’s intellectual property from the Linux operating system – but as the threat of litigation has subsided, many users are asking once again if Linux is a serious contender.

…if you object to this communism, tough luck: The so-called “Completely Fair Scheduler” is incorporated into the “kernel” ( which is the Linux equivalent of the MS Dos layer in older versions of windows). This “feature” alone is enough to frighten serious users away from the upstart operating system… Windows users have no need of the “Completely Fair Scheduler” because we have modern scheduling software such as Microsoft Outlook (above). Outlook allows you to give your time to whoever you want, regardless of any socialist definitions of ‘fairness’.

I’ve traditionally been favorable toward Linux-based operating systems, but this puts them in a whole new light.

Last week a scad of stories from Reuters to News.com covered the growing push for a “Do Not Track” registry similar to the “Do Not Call” list that serves to protect US households from mid-dinner sales calls. While I understand the concerns expressed by folks like Marc Rotenberg of EPIC and Jeff Chester of the Center for Digital Democracy, who were both cited by Anne Broache in the News.com piece from last week, I think that asking the government to hold a master list of IPs and consumer names is a bad idea, or at least one that won’t do much to really protect consumers.

First, tracking people online is a bit different from calling folks in their homes. Telemarketing, while highly effective in terms of sales produced per dollar of marketing money spent, is still orders of magnitude more expensive than spamming or collecting data online without consent. Both of these activities are illegal today, but they still occur. They occur so much that spam-filtering technology contains some of the most advanced natural language recognition and parsing software ever created. Cory Doctorow has mused that the first artificial intelligences will emerge from spam and anti-spam computer arrays.

So this list wouldn’t be the magic wish that privacy advocates and legislators might dream it to be. It would cause law-abiding companies like Google, AOL, and Microsoft to stop collecting data, but so could privately developed and enforced systems.

Anne Broache notes that cookies are a bad solution for stopping data tracking, as many anti-spyware programs delete cookies, since cookies are so often used for tracking in the first place. But why not just create a new variety of cookie? Call it a cake, a brownie, a cupcake–maybe even a muffin. Whatever you call it, just specify that a standards-compliant browser must contain a place for something similar to a cookie that will opt consumers out of tracking schemes. This isn’t a technological problem at all; it’s just a matter of industry deciding to follow this course.
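From a site operator’s side, honoring such a “brownie” would be trivial. Here is a minimal sketch, assuming a standardized opt-out token; the token name and value are my inventions for illustration, not any real standard:

```python
# Hypothetical "brownie" check: a standardized opt-out token that a
# law-abiding site consults before recording any tracking data.
# The cookie name and value are invented for this sketch.

OPT_OUT_COOKIE = "tracking-opt-out"  # hypothetical standardized name

def may_track(cookies: dict) -> bool:
    """Return True only if the visitor has not set the opt-out token."""
    return cookies.get(OPT_OUT_COOKIE) != "1"

def log_visit(cookies: dict, analytics_log: list, page: str) -> None:
    if may_track(cookies):
        analytics_log.append(page)  # record the visit for analytics
    # Otherwise: serve the page without recording anything.
```

The design burden falls entirely on browser vendors and sites agreeing on the token, which is the point of the paragraph above: it is a coordination problem, not a technological one.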

My other concern is something that fellow TLFer and former CEI staffer James Gattuso pointed out in a 2003 piece in regard to the “Do Not Call” list, namely that the government will likely exempt itself from the rules. In our post-9/11 world (whatever that means) we should expect government–the supposed protector of our rights–to make these sorts of moves. But you don’t have to trust my assertion; look no further than Declan McCullagh’s Wednesday post at News.com. The FBI is pushing hard for Internet companies to retain data so that they can later sift through it. It’s doubtful that the government will place itself on the “Do Not Track” list if it believes it can gain useful intelligence by tracking people online.

So, by and large, this proposed registry seems unnecessary and ineffective. Industry can easily work out a way to allow consumers to opt-out and the two groups I’m most afraid of–the Russian Mob and the U.S. Government–won’t pay heed to any registry anyway.

Instead of wringing our hands over advertisers tracking what duvet covers we buy, can we turn our attention to what our freewheelin’ executive branch is trying to pull over on us? Seems to me they’re cooking up exemptions to more than just this registry–a few of my favorite Constitutional Amendments spring to mind.

I’ve been noticing recently that wi-fi connections are flakier than they used to be. It seems to me that from about 2001 to 2005, it was almost unheard-of for my home wi-fi connection to suddenly drop out on me. In the last year or two, it has seemed like this is an increasingly common occurrence. For the last half hour or so, my Internet connection has been going out for 5 or 10 seconds at a time every few minutes. It’s not a huge problem, but it happens just often enough to be pretty annoying.

I can think of a number of possible explanations for this. One might be that my current laptop, a MacBook I bought about a year ago, might have a lower-quality wireless card. Another might be that I’m using wi-fi in more places where it might be hard to get good coverage. Or maybe I’m imagining things.

But it also seems possible that we’re starting to experience a tragedy of the wi-fi commons. I seem to recall (and Wikipedia confirms) that wi-fi in the 2.4 GHz band effectively has only three non-overlapping “channels” to choose from, and that the wi-fi protocol isn’t especially well designed to deal with multiple networks using the same channel in close proximity. It has now become commonplace for me to whip out my laptop in an urban setting and see a dozen or more wi-fi networks, which suggests that there’s got to be some serious contention going on for those channels.
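The contention problem is easy to picture with a toy sketch. Assuming the usual three non-overlapping 2.4 GHz channels (1, 6, and 11) and a made-up scan of nearby networks, a well-behaved access point on “auto” can do no better than pick the least crowded of the three:

```python
# Toy sketch: auto-selecting the least-crowded of the three
# non-overlapping 2.4 GHz wi-fi channels. The scan results below
# are invented for illustration.

NON_OVERLAPPING = (1, 6, 11)

def least_congested(survey: dict) -> int:
    """Pick the channel with the fewest nearby networks on it."""
    return min(NON_OVERLAPPING, key=lambda ch: survey.get(ch, 0))

# Hypothetical scan of a dense urban block: a dozen networks,
# crowded onto channels 1 and 6.
nearby = {1: 7, 6: 4, 11: 1}
best = least_congested(nearby)
```

With a dozen networks in range, even the best choice still shares its channel with somebody; the sketch just picks the least bad option, which is exactly the commons problem described above.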

If I’m right (and I might be wildly off base) I’m not sure where the analysis goes from there, either from a technical perspective or a policy one. One knee-jerk libertarian answer is to suggest this is an argument against allocating a lot of spectrum to be used as a commons because it tends to be over-used and there’s no one in a position to resolve this kind of contention. On the other hand, maybe people are working on better protocols for negotiating this kind of contention and achieving a fair sharing of bandwidth without these problems. Or perhaps—at least for wi-fi—it would be possible to allocate enough bandwidth that there’d be enough to go around even in dense urban areas.

Dingel points to a paper (non-paywalled draft here) exploring the historical connection between the free trade movement and the movement for worldwide copyright harmonization:

Free traders failed repeatedly for sixty years after the end of the Civil War to reduce the average tariff to its immediate prewar level. They failed despite making a case that, by comparison with the one made for free trade today, was compelling. Specifically, the principles of free labor engendered an antimonopoly argument for trade. Free trade, its advocates argued, would eliminate the special privileges granted to producers in specific industries, most notably cotton goods, iron, and steel. It would promote competition, lower prices, and raise consumers’ real incomes…

Carey attempted to turn the tables on the free traders: he argued that free trade promoted monopoly, and protection mitigated it. His conviction was sincere—but that particular part of his argument was unpersuasive, and relatively few of his followers bothered to repeat it. He was much more persuasive in arguing that international copyright promoted monopoly. In the face of the latter argument, the proponents of free trade and international copyright were put on the defensive…

One wonders whether the tireless advocacy of international copyright by free traders like Bryant—who framed the cause as one inextricably related to free trade—hindered the advancement of their principal cause. The long-awaited sweeping tariff reductions were deferred until 1913. Might the wait have been shorter if the antimonopoly credentials of the free-trade advocates had not been called into question?

This is a fascinating question. One of the things I find really interesting about the 19th century political debate is that the opposing political coalitions were more sensibly aligned, perhaps because people had a slightly clearer sense of what was at stake. My impression (which may be wrong in its details) is that the free traders tended to be liberals and economic populists. They clearly understood that protectionism brought about a transfer of wealth from relatively poor consumers to relatively wealthy business interests. The opposing coalition consisted of business interests and xenophobes making fundamentally mercantilist arguments about economic nationalism.

Today’s free trade debate is much weirder, because there are enough businesses who want to export things that significant parts of the business community are for freer trade. On the other hand, the liberals who fancy themselves defenders of relatively poor consumers find themselves in bed with predatory industries like sugar and steel that have been using trade barriers to gouge consumers. And the “trade” debate has increasingly come to be focused on issues that don’t actually have much to do with trade, whether it’s labor and environmental “standards,” copyright and patent requirements, worker retraining programs, cross-border subsidies, etc.

I suspect part of what’s happening is that in the United States, at least, consumers are so rich that they really don’t notice the remaining costs of protectionism. A T-shirt at Target might cost $10 instead of the $8 it would cost if there were no trade barriers with China, but this is such a tiny fraction of the average American’s budget that they don’t really care. Likewise, if the domestic price of rice or flour were to double, a significant number of Americans wouldn’t even notice. In contrast, in the 19th century, we were still poor enough that a 10 or 20 percent increase in the price of basic staples might be the difference between being able to afford meat once a week or having to skip meals once in a while to make ends meet. We may now be rich enough that we can afford to be politically clueless.