July 2007

This week, the Senate Commerce Committee will apparently be considering S. 602, the “Child Safe Viewing Act of 2007,” which was introduced by Sen. Mark Pryor (D-AR) earlier this year. The measure marks an important turning point in the ongoing battle over content regulation in the Information Age–in one way for the better, but in others for the worse.

The measure wisely avoids direct content regulation and instead focuses on empowering families to make media consumption decisions on their own. Unfortunately, the measure seeks to accomplish that goal through government actions that could have potentially troubling regulatory implications, especially because of the First Amendment issues at stake here. Specifically, S. 602 opens the door to an expansion of the FCC’s authority over media content on multiple platforms and threatens to undermine private, voluntary rating systems in the process.

I have just released a brief analysis of the measure that discusses these concerns. The 5-page paper can be found online at:

http://www.pff.org/issues-pubs/pops/pop14.17pryorchildsafetyviewingact.pdf

On the one hand, I’m glad Kip Hawley took the time to answer some skeptical questions about the TSA’s security regime. On the other hand, I don’t find this remotely reassuring:

Bruce Schneier: You don’t have a responsibility to screen shoes; you have one to protect air travel from terrorism to the best of your ability. You’re picking and choosing. We know the Chechnyan terrorists who downed two Russian planes in 2004 got through security partly because different people carried the explosive and the detonator. Why doesn’t this count as a continued, active attack method?

I don’t want to even think about how much C4 I can strap to my legs and walk through your magnetometers. Or search the Internet for “BeerBelly.” It’s a device you can strap to your chest to smuggle beer into stadiums, but you can also use it to smuggle 40 ounces of dangerous liquid explosive onto planes. The magnetometer won’t detect it. Your secondary screening wandings won’t detect it. Why aren’t you making us all take our shirts off? Will you have to find a printout of the webpage in some terrorist safe house? Or will someone actually have to try it? If that doesn’t bother you, search the Internet for “cell phone gun.”

It’s “cover your ass” security. If someone tries to blow up a plane with a shoe or a liquid, you’ll take a lot of blame for not catching it. But if someone uses any of these other, equally known, attack methods, you’ll be blamed less because they’re less public.

Kip Hawley: Dead wrong! Our security strategy assumes an adaptive terrorist, and that looking backwards is not a reliable predictor of the next type of attack. Yes, we screen for shoe bombs and liquids, because it would be stupid not to directly address attack methods that we believe to be active. Overall, we are getting away from trying to predict what the object looks like and looking more for the other markers of a terrorist. (Don’t forget, we see two million people a day, so we know what normal looks like.) What he/she does; the way they behave. That way we don’t put all our eggs in the basket of catching them in the act. We can’t give them free rein to surveil or do dry-runs; we need to put up obstacles for them at every turn. Working backwards, what do you need to do to be successful in an attack? Find the decision points that show the difference between normal action and action needed for an attack. Our odds are better with this approach than by trying to take away methods, annoying object by annoying object. Bruce, as for blame, that’s nothing compared to what all of us would carry inside if we failed to prevent an attack.

This is totally unresponsive to Schneier’s question. What Schneier was looking for was some sort of coherent explanation for why shoes and bottles of liquid are a bigger threat than cell phones and fake bellies. Hawley didn’t have any such explanation, probably because there isn’t one. We’ve given the TSA an impossible job, so they’ve responded with security theater. These “security measures” won’t stop a determined terrorist, but they might make travelers (at least those who don’t think about it too hard) feel better.

There’s lots more great (appalling) stuff where the blockquote came from, so click on through to part 1 and part 2.

After weeks of intense lobbying, the FCC today set rules for the auction of former UHF TV channels 60-69 (in the prime 700 MHz range of frequencies). The full details are not yet out, but the decision seems to be largely what was expected: a “public-private partnership” for newly allocated public safety spectrum, and — for commercial spectrum — new regulations that impose “open access” rules on 22 megahertz of the allocated frequencies.

No one was completely satisfied. Google and other wireless net neutrality proponents notably failed in their bid for more expansive regulation — with the Commission rejecting their calls for mandated interconnection and wholesale leasing of spectrum.

This loss — in part — may be due to a tactical fumble by Google itself. Its pledge last week to bid a minimum of $4.6 billion if the Commission adopted four proposed rules for these frequencies was perceived (rightly or wrongly) as an ultimatum to the FCC. Had the Commission then adopted Google’s proposed rules, the agency’s own credibility and independence would have been put at risk.

Continue reading →

I’ve written here before about the Clear card, which allows people to prove their membership in the Transportation Security Administration’s Registered Traveler program without telling TSA who they are. I disapprove of Registered Traveler, but if it’s going to exist, the Clear card system’s restrictiveness with users’ identities is a key anti-surveillance feature.

Today, the House Homeland Security Committee’s Subcommittee on Transportation Security and Infrastructure Protection is holding a hearing entitled “Managing Risk and Increasing Efficiency: An Examination of the Implementation of the Registered Traveler Program.”

Steven Brill, the Chairman and CEO of Clear, is one of the witnesses, and he has some choice criticisms of TSA.

Continue reading →

Good piece in the Wall Street Journal yesterday by Dennis Patrick (former FCC Chairman) and Thomas Hazlett (former FCC Chief Economist) on the Fairness Doctrine. In their editorial, “The Return of the Speech Police,” they argue that the Doctrine represented “well-intended regulation gone wrong” and that “re-imposing ‘fairness’ regulation would be a colossal mistake.” They continue:

The Fairness Doctrine was bad public policy. It rested on the presumption that government regulators can coolly review editorial choices and, with the power to license (or not license) stations, improve the quantity and quality of broadcast news. Yet, as the volcanic eruption triggered by repeal amply demonstrated, government enforcement of “fairness” was extremely political.

Evaluations were hotly contested; each regulatory determination was loaded with implications for warring factions. The simple ceases to be easy once government is forced to issue blanket rules. What public issues are crucial to cover? How many contrasting views, and presented by whom, in what context, and for how long? The Fairness Doctrine brought a federal agency into the newsroom to second-guess a broadcaster’s editorial judgments at the behest of combatants rarely motivated by the ideal of “balanced” coverage.

Continue reading →

There, Too

Commentary on recent real estate woes in Second Life. I’ve been thinking of opening an office there. Sort of a retreat. An asylum, as it were.

Worst-case Scenario

Voting machine vendors are their own worst enemies:

The study, conducted by the university under a contract with Bowen’s office, examined machines sold by Diebold Election Systems, Hart InterCivic and Sequoia Voting Systems.

It concluded that they were difficult to use for voters with disabilities and that hackers could break into the systems and change vote results.

Machines made by a fourth company, Elections Systems & Software, were not included because the company was late in providing information that the secretary of state needed for the review, Bowen said.

Sequoia, in a statement read by systems sales executive Steven Bennett, called the UC review “an unrealistic, worst-case-scenario evaluation.”

Right. Because the way to tell if a system is secure is to focus on the best-case scenario.

I guess I shouldn’t be surprised. Voting machine vendors have a track record of releasing jaw-droppingly lame responses to criticisms of their products, so why not continue the pattern?

I agree with Tim that open networks are great and likely preferable in most situations, but to say that open networks simply “tend to be better than closed networks” doesn’t make sense.

This is akin to saying that copper is more efficient than iron, which begs the question: more efficient at what? Copper is more efficient than iron in some applications, such as conducting electricity, but it’s a much less efficient armor plating. Ends dictate the standard by which we judge efficiency; otherwise efficiency is meaningless.

That said, not all networks are built for the same ends. While the Internet is an undisputed engine of growth and innovation, it’s not the only model that EVER makes sense. Closed or limited networks can also have value, because Metcalfe’s Law–which states that a network’s utility increases in proportion to the square of the number of members–is a very strong factor in determining a network’s worth, but not the only one.
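To make that concrete, here is a minimal back-of-the-envelope sketch. It assumes a stylized Metcalfe-style valuation and a purely hypothetical “fit for purpose” weighting; neither number is a real measurement, the point is only that size is not the sole input to a network’s worth.

    def metcalfe_value(members: int) -> int:
        # Stylized Metcalfe's Law: utility grows with the square of membership.
        return members ** 2

    def network_worth(members: int, fit_for_purpose: float) -> float:
        # Toy model: raw network effects discounted by how well the network
        # serves its particular end (0.0 to 1.0). The weighting is hypothetical.
        return metcalfe_value(members) * fit_for_purpose

    # A smaller closed network built for a specific purpose can be worth more
    # than a larger open one that serves that purpose poorly.
    print(network_worth(1_000_000, fit_for_purpose=0.9))  # 9.0e11
    print(network_worth(2_000_000, fit_for_purpose=0.2))  # 8.0e11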

Continue reading →

Cord makes some good points about the disadvantages of open networks, but I think it’s a mistake for libertarians to hang our opposition to government regulation of networks on the contention that closed networks are better than open ones. Although it’s always possible to find examples on either side, I think it’s pretty clear that, all else being equal, open networks tend to be better than closed networks.

There are two basic reasons for this. First, networks are subject to network effects—the property that the per-user value of a network grows with the number of people connected to the network. Two networks with a million people each will generally be less valuable than a single network with two million people. The reason TCP/IP won the networking wars is that it was designed from the ground up to connect heterogeneous networks, which meant that it enjoyed the most potent network effects.
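Under that stylized square-of-membership valuation, the arithmetic behind the two-networks comparison looks like this (a rough sketch of the network-effects claim, not a measurement of any actual network):

    def metcalfe_value(members: int) -> int:
        # Stylized Metcalfe's Law: utility grows with the square of membership.
        return members ** 2

    two_separate = 2 * metcalfe_value(1_000_000)  # 2 x 10^12
    one_combined = metcalfe_value(2_000_000)      # 4 x 10^12

    # The single interconnected network comes out twice as valuable as the
    # two fragmented networks combined.
    print(one_combined / two_separate)  # 2.0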

Second, open networks have lower barriers to entry. Here, again, the Internet is the poster child. Anybody can create a new website, application, or service on the Internet without asking anyone’s permission. There’s a lot to disagree with in Tim Wu’s Wireless Carterfone paper, but one thing the paper does is eloquently demonstrate how different the situation is in the cell phone world. There are a lot of innovative mobile applications that would likely be created if it weren’t so costly and time-consuming to get the telcos’ permission to develop for their networks.

Continue reading →

Tomorrow, the FCC is scheduled to meet and adopt rules regarding the upcoming auction of spectrum usage rights in the 700 MHz band for wireless services. A number of interests have been crowding around, trying to get the FCC to slant the auction rules in their favor.

I’ve written a Cato TechKnowledge on the topic: “How About an Open Auction?”