Bruce Schneier points to this underwhelming story purporting to explain that the liquid ban is really vital to airline security and not just security theater. Color me unimpressed.
Although I wasn’t smart enough to figure out how to view it, there’s apparently a video showing a large explosion made from the components in question at Sandia Labs. Fine, I’m sure there are some liquids out there that, if mixed together under the right circumstances, can produce a large explosion. The question is whether it’s possible to do that in an in-flight airline restroom, where you have very little space, no stable work surface, no access to lab equipment, not a whole lot of time, and no ventilation.
If the powers that be really wanted to convince us that this was a real threat, they should release details about what the ingredients are, so other labs can reproduce the results. The “national security” excuse doesn’t make any sense here: the terrorists obviously already know what ingredients they were using, so there’s no point in keeping the secret away from them. Moreover, if there were a real threat, public disclosure might have real benefits: labs around the country could work on developing new equipment to detect the ingredients in question, and passengers could be on the lookout for telltale signs that a liquid bomb was being mixed.
Finally, as Schneier points out, the really ridiculous part is that the TSA’s Byzantine liquids rules just don’t stop terrorists from getting a significant amount of liquid through the checkpoint. Schneier says that he was able to smuggle in 12 ounces of non-saline liquid in a saline-solution bottle. If it takes more than 12 ounces to make the plane go boom, you can have multiple terrorists go through the checkpoint, or make multiple trips.
The bottom line is that if every container of liquid is a potential bomb, then no liquids should be allowed through security at all. The TSA obviously isn’t that concerned, so it makes me skeptical that there’s anything more to the story than bureaucratic ass-covering.
Susan Landau, an engineer at Sun Microsystems and the author of Privacy on the Line: The Politics of Wiretapping and Encryption, has an op-ed in today’s Washington Post that builds on the FISA issues we discussed in our Tech Policy Weekly podcast yesterday. Her editorial is entitled, “A Gateway for Hackers: The Security Threat in the New Wiretapping Law.” In it she argues that:
Grant the NSA what it wants, and within 10 years the United States will be vulnerable to attacks from hackers across the globe, as well as the militaries of China, Russia and other nations.
Such threats are not theoretical. For almost a year beginning in April 2004, more than 100 phones belonging to members of the Greek government, including the prime minister and ministers of defense, foreign affairs, justice and public order, were spied on with wiretapping software that was misused. Exactly who placed the software and who did the listening remain unknown. But they were able to use software that was supposed to be used only with legal permission.
The United States itself has been attacked. … [and] U.S. communications technology is fragile and easily penetrated. While advanced, it is not decades ahead of that of our friends or our rivals. Compounding the issue is a key facet of modern systems design: Intercept capabilities are likely to be managed remotely, and vulnerabilities are as likely to be global as local. In simplifying wiretapping for U.S. intelligence, we provide a target for foreign intelligence agencies and possibly rogue hackers. Break into one service, and you get broad access to U.S. communications.
I have no idea if she is right, but this is scary stuff. I’d be interested in hearing what others think.
One of this week’s podcast guests, Derek Slater, has a fantastic post over at the EFF blog on AT&T’s flip-flopping position on domestic surveillance. In 1928, in an amicus brief in the famous Olmstead wiretapping case, Ma Bell made the same comparison I made earlier this week:
The telephone has become part and parcel of the social and business intercourse of the people of the United States, and this telephone system offers a means of espionage to which general warrants and writs of assistance were the puniest instruments of tyranny and oppression.
And of course, the voice recognition and data mining technologies the feds have today make the wiretapping at issue in Olmstead look puny.
Apropos of my post earlier today on Google’s good privacy citizenship, the Center for Democracy and Technology has a report out reviewing progress in the search privacy area.
“Despite the progress that has been made,” Ars reports, “the CDT still feels that there is a need for stronger privacy legislation. ‘No amount of self-regulation in the search privacy space can replace the need for a comprehensive federal privacy law to protect consumers from bad actors,’ the report says.”
The computers at CDT have a macro on them (Alt+CDT) that writes an argument for comprehensive privacy legislation into any document. I heard that one time an intern at CDT printed a Chinese food menu, and it came out of the printer with a special on Comprehensive Privacy Legislation Foo Yung.
Update: I like snark, obviously, but don’t want to put snark ahead of substance. This is a good report and reports like this are a good thing to do. Do ISPs next, CDT!
Over the weekend, Congress passed legislation that dramatically expands the executive branch’s domestic surveillance powers. The legislation replaces the FISA warrant process that has governed domestic surveillance since the 1970s with a new process in which courts would only review the general procedures used to select surveillance targets, not a list of the targets themselves.
In this week’s podcast, Adam and I are joined by two of my favorite commentators on civil liberties: Derek Slater of the Electronic Frontier Foundation and Julian Sanchez of Reason magazine. They explain what’s wrong with the legislation, how it’s connected to EFF’s ongoing lawsuit against AT&T, and what we need to do to restore our privacy rights.
There are several ways to listen to the TLF Podcast. You can press play on the player below to listen right now, or download the MP3 file. You can also subscribe to the podcast by clicking on the button for your preferred service. And do us a favor, Digg this podcast!
There are no two ways about it: Google is doing good things on privacy.
The video below gives ordinary people the basic information they need to protect their privacy. To those of us who are technically aware, it’s a little obvious, but the average Internet user doesn’t know this material. They need to.
Over the long haul, this kind of education will protect consumers far more effectively than privacy regulation, and without regulation’s costs: wasted tax dollars, market-distorting rent-seeking, regulatory capture, and so on.
The video also raises some important points and questions, of course:
Think about it: we leave fingerprints all over the place, just like our SSNs are all over the place. As we use fingerprints to regulate access to more value, the value of collecting fingerprints and faking them will rise.
It won’t be tomorrow or next week, but watch for fingerprint-based identity fraud if we come to rely on that biometric too much. DNA has the same weakness. Other biometrics, like vein recognition, are neither easy to collect nor easy to reproduce (though, yes, both of these facts are technology-contingent).
In my book, Identity Crisis, I talked about the qualities of identifiers: fixity, permanence, and distinctiveness. Biometrics like fingerprints and DNA are high on the scale of fixity and permanence, but may drop in reliable distinctiveness with advanced forgery techniques.
The better designed systems will use biometric identifiers that are not only hard to forge, but that are somewhat hard to collect. Biometrics that can only be made available through some volition on the part of the individual will be the most secure.
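To make the comparison concrete, here is a minimal sketch of ranking biometrics on the two axes that matter for system design: hard to forge, and collectable only with the subject’s volition. The identifiers and 1-to-5 scores are my own rough assumptions for illustration, not measurements from the book:

```python
# Illustrative comparison of biometric identifiers. The scores below
# (1 = easy for an attacker, 5 = hard) are hypothetical assumptions.
BIOMETRICS = {
    # name: (hard_to_forge, hard_to_collect_covertly)
    "fingerprint":  (2, 1),  # left everywhere; forgery techniques improving
    "DNA":          (3, 1),  # shed constantly; lab synthesis conceivable
    "vein pattern": (4, 4),  # internal; needs a cooperative near-IR scan
    "face":         (1, 1),  # photographed at a distance
}

def most_secure(biometrics, threshold=3):
    """Return identifiers scoring at/above threshold on BOTH axes:
    hard to forge AND only collectable with the subject's volition."""
    return sorted(
        name for name, (forge, collect) in biometrics.items()
        if forge >= threshold and collect >= threshold
    )

print(most_secure(BIOMETRICS))  # only "vein pattern" clears both bars
```

Under these assumed scores, only vein recognition survives the filter, which is the post’s point: fixity and permanence are not enough if the identifier is lying around everywhere waiting to be collected.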
A great insight from Avi Rubin, who attributes it to California Secretary of State Debra Bowen:
The current certification process may have been appropriate when a 900 lb lever voting machine was deployed. The machine could be tested every which way, and if it met the criteria, it could be certified because it was not likely to change. But software is different. The software lifecycle is dynamic. As an example, look at the way Apple distributes releases of the iPhone software. The first release was 1.0.0. Two minor version numbers. When the first serious flaw was discovered, they issued a patch and called it version 1.0.1. Apple knew that there would be many minor and some major releases because that is the nature of software. It’s how the entire software industry operates.
So, you cannot certify an electronic voting machine the way you certify a lever machine. Once the voting machine goes through a lengthy and expensive certification process, any change to the software requires that it be certified all over again. What if a vulnerability is discovered a week before an election? What about a month before the election, or a week after it passes certification? The point is that we absolutely expect that vulnerabilities will be discovered all the time. That would be the case even if the vendors had a clue about security. Microsoft, which arguably has some of the best security specialists, processes, and development techniques, issues security patches all the time.
Software is designed to be upgraded, and patch management systems are the norm. A certification system that requires freezing a version in stone is doomed to failure because of the inherent nature of software. Since we cannot change the nature of software, the certification process for voting machines needs to be radically revamped. The dependence on software needs to be eliminated.
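Rubin’s argument can be sketched in a few lines: if certification binds to an exact build, then any patch, however small, invalidates it. This is a hypothetical model of such a process, not any actual certification scheme:

```python
# Sketch: certification tied to an exact software image by its digest.
# Any patch changes the digest, so certified status no longer applies.
# The firmware images and certification record here are hypothetical.
import hashlib

def digest(image: bytes) -> str:
    """Fingerprint a software image with SHA-256."""
    return hashlib.sha256(image).hexdigest()

certified_image = b"voting-machine-firmware v1.0.0"
CERTIFIED_DIGEST = digest(certified_image)  # recorded at certification time

def is_certified(image: bytes) -> bool:
    """A machine counts as certified only if it runs the exact certified build."""
    return digest(image) == CERTIFIED_DIGEST

patched_image = b"voting-machine-firmware v1.0.1"  # a small security fix

print(is_certified(certified_image))  # True
print(is_certified(patched_image))    # False: even a tiny fix voids certification
```

The dilemma follows directly: ship the patch and lose certification, or stay certified and run software with a known vulnerability on election day.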
USA Today reports that most are unaware of the dangers facing them at public Wi-Fi hotspots, which brought to mind an interesting question about municipal Wi-Fi. What incentive is there for municipalities to provide encryption and other security technologies?
The article mentions that AT&T and T-Mobile are the largest providers of free Wi-Fi hookups in the country, and although the Wi-Fi itself is unsecured, both companies encourage the use of freely provided encryption software. The incentive for both companies seems fairly obvious: if people are going to use Wi-Fi, they need to feel safe, and encryption technology is one way to reassure them. Customers stay safe and keep using the service, making AT&T, T-Mobile, and other providers money.
Do municipal setups have the same incentives? Depending on how such a system is financed, I can see how there would be little incentive to provide security software or other safeguards to users. Yet these Muni-Fi services would still distort the market, making it less likely that private companies, which do respond to customers’ privacy concerns, will invest in those areas.
Question: Does Muni-Fi pose a risk to security because of the lack of incentives to push security solutions and its edging out private competitors who have that motivation?