If anyone out there is looking for even more to read (for instance, if you are thumbing through old issues of Macworld), you might want to check out Heritage's new Linked! page. This page provides constantly updated links to news and commentary from the web on a full range of public policy issues, including (of course) technology. For all issues, check here. For tech only, go here.
Enjoy.
I was catching up on some reading last night and I thumbed through the April issue of Macworld. I came across not just one, but two articles plugging Handbrake, a video-conversion utility that allows consumers to transfer a variety of video content–including DVDs–to their iPods for viewing on the road.
The first article makes a passing reference to this article, which claims that using Handbrake is fair use, even if creating the program was clearly illegal. (I think this is wrong–the DMCA's anti-circumvention provisions don't include a fair use exemption.) In any event, the articles' authors certainly don't seem especially concerned about urging their readers to break the law.
Something's clearly screwed up here. The rule of law works because of widespread public acceptance. When the law is widely despised and ignored–as it was during Prohibition, for example–it inevitably fails to accomplish its stated purpose and undermines respect for the rule of law more generally.
Now, it seems to me that one could reasonably go either way here: one could be outraged at Macworld for blithely encouraging lawlessness, or one could be outraged that the DMCA makes innocuous activities like watching DVDs on an iPod illegal. Obviously, my sympathies are with the latter viewpoint. But I worry the most about people who are comfortable with the status quo, in which the law is routinely flouted and nobody cares. If the law is stupid, it should be changed.
Vacation? What vacation? There’s WiFi in my Frankfurt rental apartment! I’m here attending the opening round of the World Cup, Adam’s most-loved sport, and tickets to the games have RFID chips embedded in them.
Last week, the Department of Homeland Security's Data Privacy and Integrity Advisory Committee met in San Francisco. For me, the most interesting thing about the trip was leaving without showing ID at the airport. But one of the items of highest interest at the meeting was a draft report put forward by a subcommittee of the DPIAC suggesting that RFID should be disfavored for human tracking.
It was subject to some exaggerated reporting, and some RFID industry folks went into a bit of a tizzy. Penny wise and pound foolish, they seem to believe they should preserve the multi-million-chip market for RFID in identification cards, even though doing so frustrates and delays the development of a market for RFID on the packaging of consumer goods, a market that could easily reach tens or hundreds of billions of tags.
In a close analogy to tagging identification documents, the World Cup has issued a couple of million tickets with RFID chips embedded in them. Getting RFID readers into German stadiums makes it likely that club teams will use RFID chips in their tickets henceforth. Kudos to the wily Philips Corporation for using its World Cup sponsorship to create an installed base of RFID-equipped venues.
So let’s look at how mighty RFID adds value to the soccer ticket – and what part of any value goes to fans, organizers, or RFID manufacturers and integrators (after the jump).
Continue reading →
The past week has truly been a miserable one for supporters of neutrality regulation. Last Thursday, they got shellacked 269-152 in the House vote on the issue. Despite earlier talk of GOP members joining the pro-reg crowd, only 11 actually did so. But a surprising 58 Democrats voted against it. Then, yesterday, the Washington Post–no, not the conservative Washington Times, but the Post–editorialized against regulation. (By the way, no extra points will be awarded for guessing that pro-regulation advocate Jeff Chester responded to this by making an ethics charge, claiming that the Post didn’t disclose its conflicts of interest. Anyone sense a pattern here?)
The third shoe fell on the regulation crowd yesterday, when Senator Ted Stevens released his revised telecom reform bill in the Senate. Contrary to expectations, Stevens did not add neutrality regulation provisions to his bill. Instead he stood firm, with the bill only calling for a study of the issue. Kudos to Sen. Stevens for holding his ground on this.
Of course, the neutrality debate is far from over–and momentum could change. However, with only a few months left in the congressional schedule, regulation proponents must be looking nervously at the calendar, and hoping it won’t bring any more weeks like this one.
My side of the network neutrality debate may have resorted to paid astro-spam commenters to get its point across, but as far as I know, no regulation opponents have stooped to writing a cheesy song about the issue:
Three singer/songwriters met at a Los Angeles recovery center for those suffering from internet-related anger issues. How could Congress vote to destroy one of the only good things left in America? This made no sense! How could so few people be enraged? What were people doing to keep network neutrality the law of the land?
I get tired of repeating myself, so you can click here to see why this is nonsense. Oh, and you can listen to their cringe-inducing ditty here.
The Washington Post editorialized yesterday in opposition to regulating the Internet:
The advocates of neutrality suggest, absurdly, that a non-neutral Internet would resemble cable TV: a medium through which only corporate content is delivered. This analogy misses the fact that the market for Internet connections, unlike that for cable television, is competitive: More than 60 percent of Zip codes in the United States are served by four or more broadband providers that compete to give consumers what they want–fast access to the full range of Web sites, including those of their kids’ soccer league, their cousins’ photos, MoveOn.org and the Christian Coalition. If one broadband provider slowed access to fringe bloggers, the blogosphere would rise up in protest–and the provider would lose customers…
The serious argument for net neutrality has nothing to do with the cable TV boogeyman. It’s that a non-neutral net will raise barriers to entry just slightly–but enough to be alarming. To use a far better analogy: Competitive supermarkets aim to please customers by offering all kinds of goods, but the inventor of a new snack has to go through the hassle of negotiating for display space and may wind up on the bottom shelf, which dampens his incentives. Equally, if the owners of Internet pipes delivered the services of cyber-upstarts more slowly than those of cyber-incumbents, the incentive to innovate might suffer. Would instant messaging or Internet telephony have taken off if their inventors had had to plead with broadband firms to carry them?
This concern should not be exaggerated. Cyber-upstarts already face barriers: The incumbents have brand recognition and invest in tricks to make their sites load faster. The extra barrier created by a lack of net neutrality would probably be small because the pipe owners know that consumers want access to innovators.
Mike Masnick correctly notes that the Post exaggerates the competitiveness of the broadband market a bit–60 percent of zip codes may have four or more broadband providers, but that doesn't mean that 60 percent of consumers do–the vast majority have two or fewer choices. But I think the broader point of that paragraph–that there's no danger of the Internet turning into a non-competitive service like cable TV–is exactly right. The value of the Internet stems from the availability of hundreds of thousands of small sites. The telcos would be shooting themselves in the foot if they cut off their customers' access to those sites. And most broadband customers do have at least one alternative provider, so the telcos' ability to jerk their customers around is limited.
The editorial’s conclusion gets it right:
Continue reading →
The Register reports on what is quite possibly the awesomest object in the universe:
Astronomers have identified a massive comet-like structure – spanning a whopping three million light years – that is tearing through a distant galaxy cluster at more than 750 kilometres a second.
Yes, you read that right. A great ball of fiery gas, some five thousand million times the size of the solar system. Fortunately, it isn’t anywhere near Earth. The flaming gas-ball is in the Abell 3266 galaxy cluster, even more millions of light years away from us than it is across.
The fireball, which is the largest object of this kind ever identified, was spotted by stargazers using the European Space Agency’s XMM-Newton X-Ray telescope.
The author of the article showed remarkable restraint in avoiding references to “great balls of fire.”
Via IPCentral, I just finished reading "Patents and Business Models for Software Firms." The authors assemble a large data set of patents, classify them as software and non-software, and do some statistical analysis of which types of firms are most likely to take advantage of patents. They conclude, not surprisingly, that product-oriented firms are more likely to patent than service-oriented firms.
What they don’t do (and they acknowledge it) is determine any kind of causal connection among software patents, R&D spending, and innovation. And it seems to me it would be difficult to draw any conclusions about the impact of software patents on overall industry innovation using data of this sort. Software patents clearly benefit firms at the margin, or they wouldn’t seek them. But we can’t conclude from that fact that software patents benefit the industry overall–that would be a fallacy of composition.
It seems to me the best way of evaluating software patents empirically would be at the micro level: that is, look at individual patents and try to estimate the likelihood that the covered invention would have been created without the availability of software patents. Obviously, some will be hard cases, but there are also many easy cases.
It occurred to me that this is the sort of task that could be accomplished in a decentralized, peer-produced manner: set up a web page where the user can look at a patent and rate it for obviousness, prior art, etc. There are probably enough geeks out there who hate software patents that you could analyze far more patents in far more detail than a traditional research team could hope to accomplish.
I just registered AmIObviousOrNot.com. I could set the site up, but my web development skills are rather rusty, so it would take me a while. Are there any PHP gurus out there who’d like to help out with a project like this?
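To make the idea a little more concrete, here's the sort of thing I have in mind: one page per patent, a quick obviousness score, and a box for citing prior art. This is only a rough sketch (it assumes PHP with the SQLite PDO driver, and the patent number shown is just a made-up placeholder), not anything close to a finished site:

```php
<?php
// patent_rating.php -- rough sketch of the rating page described above.
// Assumes PHP with the pdo_sqlite extension; the patent number below is a
// made-up placeholder. A real site would pull patents from the USPTO,
// handle many patents, and do something about repeat voters.

$patent = '9,999,999';  // hypothetical placeholder patent number

$db = new PDO('sqlite:' . __DIR__ . '/ratings.db');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE IF NOT EXISTS ratings (
             patent      TEXT,
             obviousness INTEGER,   -- 1 (not obvious) to 5 (completely obvious)
             prior_art   TEXT,      -- free-form citation of prior art, if any
             created_at  TEXT DEFAULT CURRENT_TIMESTAMP)');

// Record a submitted rating, ignoring out-of-range scores.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $score = (int) ($_POST['obviousness'] ?? 0);
    if ($score >= 1 && $score <= 5) {
        $stmt = $db->prepare('INSERT INTO ratings (patent, obviousness, prior_art)
                              VALUES (?, ?, ?)');
        $stmt->execute([$patent, $score, trim($_POST['prior_art'] ?? '')]);
    }
}

// Show the running average for this patent.
$stmt = $db->prepare('SELECT AVG(obviousness) FROM ratings WHERE patent = ?');
$stmt->execute([$patent]);
$avg = $stmt->fetchColumn();
?>
<h1>Is U.S. Patent No. <?= htmlspecialchars($patent) ?> obvious?</h1>
<p>Average obviousness so far: <?= $avg ? round($avg, 2) : 'no votes yet' ?></p>
<form method="post">
  <label>Obviousness (1-5): <input name="obviousness" type="number" min="1" max="5"></label><br>
  <label>Known prior art: <input name="prior_art" size="60"></label><br>
  <button type="submit">Submit rating</button>
</form>
```

Obviously the real thing would need to show the patent's claims for the reader to evaluate, let users browse from patent to patent, and guard against ballot-stuffing, but the core of the site really is about this simple.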
Today, Tyler Cowen posted some cautious but surprising words about his stance on the net regulation issue:
I favor net neutrality in the current environment. Without neutrality, Comcast and Verizon would use differential pricing schemes to extract more revenue and thus diminish some forms of Net output, including Google, Amazon, ebay, and possibly blogs. … If the cable and telecom companies had no legally-backed monopoly powers, I would not favor legally enforced net neutrality. “Let the market decide” would be a good answer.
You should read his whole post for more of his argument. But I wonder: If a lack of competition is caused by a government-backed monopoly power, as Cowen suggests, wouldn’t removing the regulations that create that power be the preferred course of action? Shouldn’t adding a new layer of “legally enforced net neutrality” regulation be our last, hopeless recourse? And aren’t we headed in a generally pro-competitive direction? Even putting aside the tremendous growth in competition over the past 25 years, don’t steps like the COPE Act’s streamlining of franchising help to continue to eliminate the very government-imposed barriers to entry that create market power?
I don’t know the answers to these questions, and that’s why I will remain “neutral” and simply moderate a panel discussion on neutrality regulation this Thursday, June 15, hosted by America’s Future Foundation. TLF’s own James Gattuso will be joined by Patrick Ross of PFF on the anti-regulation side, while Alex Curtis of Public Knowledge and Frannie Ross of Free Press will take the pro-neutrality side. The event will take place on the Hill with drinks beginning at 6:30 and discussion at 7. I hope you can join us! More information here.
After writing this morning's post about VoIP and CALEA, it occurs to me that this sort of regulatory issue is probably one of the motivations behind Skype's decision to make SkypeOut free in the United States. Skype and the FCC are on a collision course: sometime in the middle of 2007, the FCC is probably going to try to force Skype to comply with CALEA. Skype will probably try to wash its hands of the matter, the way it did with E911. The FCC is unlikely to buy that, sparking a showdown.
Skype is likely to react by turning SkypeOut off (or threatening to) and blaming the FCC for the decision, in hopes of creating a consumer backlash. The effectiveness of that tactic will depend on how many SkypeOut users Skype has. If there are enough of them, the FCC will be in the awkward position of telling millions of Skype users that they're no longer allowed to call their friends' landlines, as they'd been doing for free for the previous year.
This reminds me of an excellent article in Reason back in 1999 about the fight over satellite transmission of local broadcast TV stations. Basically, the satellite companies simply started transmitting the content consumers wanted, in violation of the law. By the time the FCC got around to considering the issue, they had signed up so many customers that the agency didn't dare force them to stop.
Even if Skype isn't able to make the FCC blink, the next year offers it a fleeting opportunity to convert current landline users to IP-based telephony before it retreats to being a pure IP-to-IP service.