Congress to gut FISA?

August 4, 2007

Apparently, in the last 48 hours, the Bush administration has launched a full-court press to rewrite the Foreign Intelligence Surveillance Act to further expand the administration’s powers to engage in domestic surveillance with minimal judicial scrutiny. EFF says that the Democrats’ alternative to the Bush administration’s bill is a “sham compromise that poses a grave danger to Americans’ privacy.” Even if they’re wrong about that, Congress certainly shouldn’t pass legislation this important with this little time for public scrutiny and debate. The Senate passed the legislation yesterday, and the House has been debating it today.

I have to say I find this just baffling:

With time running out before a scheduled monthlong break and the Senate already in recess, House Democrats confronted the choice of accepting the administration’s bill or letting it die. If it died, that would leave Democratic lawmakers, who have long been anxious about appearing weak on national security issues, facing an August fending off charges from Mr. Bush and Republicans that they left Americans exposed to terror threats.

There was no indication that lawmakers were responding to new intelligence warnings. Rather, Democrats were responding to administration pleas that a recent secret court ruling had created a legal obstacle in monitoring foreign communications relayed over the Internet. They also appeared worried about the political repercussions of being perceived as interfering with intelligence gathering. But the disputes were significant enough that they were likely to resurface before the end of the year.

The Bush administration’s approval ratings are in the low 30s, Alberto Gonzales is widely recognized as an embarrassment, and Congress won’t be up for re-election for more than a year. So what, exactly, is the Democratic leadership afraid of? The people likely to be taken in by the administration’s smears on this issue will almost all be either partisan Republicans or so clueless that they will have long since forgotten about this argument by the time they go to the polls next year.

One of my favorite podcasts is Slate’s Political Gabfest, a weekly show in which Slate writers discuss the week in politics. This week’s show [MP3] features Slate’s Jacob Weisberg and Newsweek’s Jonathan Alter bashing Rupert Murdoch and wringing their hands over his takeover of the Wall Street Journal. Slate foreign editor June Thomas, who has what sounds like a British accent, responds by accusing Alter and Weisberg of snobbery and defending what Murdoch has done to the Times of London:

The Times was read by 100,000 old boffers, who mostly got it for the crossword… It was from another century—from the 19th Century, not the 20th Century. It didn’t have TV listings because it was too refined for that. Britain has changed. Britain is a more democratic country. You don’t need papers for the tofts any more. It’s not even the most conservative paper. It’s not the worst paper by any means. It’s not the worst of the broadsheets. I wouldn’t read it. But I think the fact that four times as many people read it now despite the way that newspaper circulation is declining everywhere in the world. He’s a good businessman. I think there is a lot of snobbery in the way that people are attacking him. I think that as long as everyone is very vigilant about his interests—which are media interests, not armaments or supermarkets—I am not terribly worried about Rupert Murdoch.

I’m not sure what “boffers” and “tofts” are, but I think she makes a good point. One of the fundamental premises of the elitist argument Thomas is criticizing here is that there’s a fundamental tension between good business and good journalism—that the job of a newspaper’s owner (which Weisberg calls its public trust) is to forego maximizing profits in the name of good journalism. But this, it seems to me, demonstrates a myopic attitude toward business and a paternalistic attitude toward readers, because it rests on the assumption that practicing good journalism is not a good business strategy. And the reason it’s not a good business strategy, presumably, is that readers don’t really want to read good journalism, so journalists must force-feed it to them like children eating their vegetables.

Now certainly, there are some papers that become successful by pursuing a less educated audience with dumbed-down, sensationalistic news coverage. But I don’t see how that would be a good business strategy for the Journal. The Journal has 2 million daily readers precisely because it produces the kind of in-depth, high-brow news coverage that the nation’s business elites demand. It’s not like the highly educated readers who pay almost a dollar a day for the paper aren’t going to notice if the paper’s quality starts to suffer as a result of cost-cutting.

More generally, I’m not sure I like the notion that the job of journalism is to force-feed readers information they otherwise wouldn’t be interested in. You can fill the newspapers with all the high-quality, in-depth reporting you like, but if a reader isn’t interested, he’s still going to flip to the sports section. So if dumbing the news down a little bit at least gets readers to read some news, I’m not sure that’s such a bad thing.

Ed Felten reports on the results of California’s studies of the source code of e-voting machines used in the state. I haven’t had time to read the reports myself, but according to Felten, they’re pretty devastating:

All three reports found many serious vulnerabilities. It seems likely that computer viruses could be constructed that could infect any of the three systems, spread between voting machines, and steal votes on the infected machines. All three systems use central tabulators (machines at election headquarters that accumulate ballots and report election results) that can be penetrated without great effort.

It’s hard to convey the magnitude of the problems in a short blog post. You really have to read through the reports — the shortest one is 78 pages — to appreciate the sheer volume and diversity of severe vulnerabilities.

It is interesting (at least to me as a computer security guy) to see how often the three companies made similar mistakes. They misuse cryptography in the same ways: using fixed unchangeable keys, using ciphers in ECB mode, using a cyclic redundancy code for data integrity, and so on. Their central tabulators use poorly protected database software. Their code suffers from buffer overflows, integer overflow errors, and format string vulnerabilities. They store votes in a way that compromises the secret ballot.
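To make one of those mistakes concrete: a cyclic redundancy code only detects accidental corruption. It does nothing against deliberate tampering, because anyone who can alter the data can simply recompute the checksum, whereas a keyed MAC can’t be recomputed without the secret key. Here is a rough Python sketch of the difference (my own illustration, not code from the reports; the record format and keys are made up):

    # Illustrative only: a CRC detects accidental corruption, not deliberate tampering.
    import zlib, hmac, hashlib

    def crc_check(data, checksum):
        return zlib.crc32(data) == checksum

    record = b"candidate=A;votes=100"
    stored_crc = zlib.crc32(record)

    # An attacker who can rewrite the record can recompute the CRC too,
    # so the forged record passes the same check.
    forged_record = b"candidate=B;votes=100"
    assert crc_check(forged_record, zlib.crc32(forged_record))   # forgery "verifies"

    # A keyed MAC (e.g. HMAC-SHA256) can't be recomputed without the secret key,
    # so tampering is detectable as long as the key stays secret and isn't hard-coded.
    key = b"per-election secret key"                              # hypothetical key
    tag = hmac.new(key, record, hashlib.sha256).digest()
    forged_tag = hmac.new(b"attacker guess", forged_record, hashlib.sha256).digest()
    assert not hmac.compare_digest(tag, forged_tag)               # forgery rejected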

I think there are two policy lessons to take away from all of this. First, source code secrecy is a lousy way to protect voting machines. Any moderately skilled hacker who gets his hands on an e-voting machine will be able to reverse-engineer enough of the voting machines’ innards to uncover one of the many flaws in these machines. Secrecy simply shields e-voting vendors from public scrutiny and criticism, thereby making it less likely that these security problems will be detected and fixed in a timely manner.

Secondly, given the sheer number of vulnerabilities, it’s not reasonable to expect secure voting machines to be on the market any time soon. Even if it’s theoretically possible to create such machines, it will take several iterations of companies developing new machines and security experts tearing them apart before they get it right. So for at least the next couple of elections, states that care about security should be using paper ballots.

How I Make a Living

August 3, 2007

In the interests of full disclosure, I should note that the Show-Me Institute, where I was employed until recently, has added Cindy Brinkley, the president of AT&T Missouri, to its board of directors. I think it’s important that people in the business of public policy advocacy be transparent about how they make a living, so I thought I’d share a few details about my recent and future sources of income.


In the most recent podcast, Jim Harper and I had a little back-and-forth about the idea of a commons model for spectrum. I made the point that while I was hopeful for the future, technology that makes spectrum scarcity a thing of the past (thus allowing a commons to work) isn’t quite here yet. Regulating based on theoretical technology, I said, doesn’t bode well for the here and now.

Well, today comes word that the FCC has rejected the mystery whitespace devices that Google, Microsoft, and others in a consortium pushing for commons treatment of parts of the 700 MHz band had offered for testing. A year ago, the New America Foundation put out a paper called “Why Unlicensed Use of Vacant TV Spectrum Will Not Interfere with Television Reception.” According to The Washington Post today,

After four months of testing, the agency concluded that the devices either interfered with TV signals or could not detect them in order to skirt them. Now the coalition of companies backing the devices, which includes Dell, Intel, EarthLink, Hewlett-Packard and Philips, is going back to the drawing board, possibly to redesign the devices and meet with FCC engineers to explore other options. The FCC said Tuesday that it would continue experimenting with such devices, which use vacant TV frequencies.

I really hope they succeed, because I don’t think there’s anything wrong with allowing free use of true whitespaces or commons as long as the technology really works and use truly doesn’t cause interference to an adjacent license holder. That said, we can’t devalue otherwise useful spectrum by allocating it as a commons until we know the tech works.

Libertarian Communalism

August 3, 2007

Over at Open Market, Brad Walters has a great post on libertarian communalism:

there are those who accuse libertarians of hating community and society. But families are the essence of communalism. They are often authoritarian and communistic, yet libertarians love their families as much as anyone else. Likewise, I had a great time this weekend, and had no problem ceding some authority to my friend (the owner of the house) and to the collective.

The distinction between classical liberals and contemporary liberals does not center on disdaining or appreciating communalism. Any sane person recognizes that there are benefits to association. It’s a question of scale and it’s a question of voluntary versus compulsory association. The State is horrible at the idea of community because a) the association is involuntary, and b) the scale is far too massive for the personal connection inherent in smaller groups, like families and travel buddies.

In June I argued that free software is one example of the sort of voluntary communalism Walters identifies here.


In this week’s podcast, we take up a debate that’s generated some heat here on the blog: open networks. Cord and I had a friendly disagreement about the relative efficacy of open versus closed networks earlier this week. Jim Harper chimed in with a TechKnowledge piece accusing Google of using “open access” rhetoric to get spectrum on the cheap.

Cord, Jim, Jerry Brito and I hash these issues out under the watchful eye of host Adam Thierer. Along the way, we discuss spectrum commons, propertization, and the dangers of regulatory capture. I hope you’ll check it out.

There are several ways to listen to the TLF Podcast. You can press play on the player below to listen right now, or download the MP3 file. You can also subscribe to the podcast by clicking on the button for your preferred service. And do us a favor, Digg this podcast!


Erstwhile TLF blogger Tom Pearson draws an analogy between Harry Potter and copyright law:

One of the most interesting passages in the new HP is on page 517 regarding Goblins’ views on property. I may be reaching a bit, but the first thing I thought of was intellectual property law. According to the passage, Goblins consider the maker rather than the purchaser of an object to be its true owner. Indeed, the purchaser is seen as merely renting the property. This sentence seemed particularly apt: “They [Goblins] consider our habit of keeping goblin-made objects, passing them from wizard-to-wizard without further payment, little more than theft.” That strikes me as the approach taken by the RIAA and MPAA toward copyrights and is fairly close to the Randian conception of IP as well.

So, am I grasping at straws here?

Discuss amongst yourselves.

Harold Feld speculates on a Google/Sprint/Clearwire consortium. All well and good – and just as viable without a government subsidy, in an open, full-price auction.

In his latest column for The Hill, the American Enterprise Institute’s John Fortier has a critique of the Holt bill that I found rather frustrating:

Election administrators have weighed in with a dose of reality. There is no way to implement nationwide paper trails by the 2008 elections, nor by 2010. House leaders have floated a compromise to delay implementation, but to require simple cash register-style paper trails in 2008. This also will not work.

The expedited timeline for these changes is driven by activists who are convinced that manufacturers like Diebold or clever hackers are likely to commit massive voter fraud. Some have even come to the position of opposing electronic voting machines altogether, even those with paper trails. They now advocate for voting on paper alone, counted by hand. While this might work in some parliamentary systems, where voters cast a single vote on a ballot, try counting ballots by hand in California, with 20 offices up for election and 20 more referenda. And paper ballots are also susceptible to fraud through ballot-stuffing or lost or defaced paper ballots.

What is needed is a modest push for paper trails, with flexibility for states and federal money to help states move in that direction over a six-year period.

This modest approach will not please those who now favor voting only on paper. One request: If you have comments about this column, no e-mails, please–write to me on paper.

The critique of paper ballots here is breathtakingly inane. Hardly anyone opposes the use of optical-scan machines to count paper ballots marked by voters, because their results can always be verified with a hand recount. And of course, the retort in that final sentence is a complete non-sequitur.

Like virtually all defenses of e-voting I’ve seen, the piece does not even mention, much less respond to, the substance of the anti-e-voting argument. Fortier’s argument, if we can call it that, is limited to portraying us “activists” as paranoid Luddites who are just opposed to technological progress. That ignores the fact that the critics of e-voting include a significant number of computer science professors and a whole lot of computer programmers. These are not people with a knee-jerk opposition to technology as such. Rather, they are people who understand the limits of technology well enough to know that touch-screen voting is a bad idea.