I’ve finished The Long Tail. Here’s a final point from the book that I liked.
He reminds us that in the early 80s, Hollywood priced the first generation of videotapes at about $75. The theory, he writes, was that this was what a typical family of five would spend on three or four visits to the theater. In hindsight, this was obviously a stupid pricing strategy. Demand for movies turns out to be highly elastic: you can sell so many more movies at $15 or $20 than at $75 that total revenues go up as a result of the price cuts. Today, sales and rentals of DVDs are on par with movie tickets as a revenue source. Although charging a premium for a new technology may have made some sense, it’s almost certain that the video market would have taken off faster if Hollywood had started out with prices at $30 or $40 instead of $75.
It seems to me that as the movie and music industries move into the digital age, they’re making the same mistakes. The music industry seems to think that 99 cents is unreasonably low. But I think the opposite is probably closer to the truth; demand for music, like the demand for movies, is likely to be highly elastic. If the music industry cut prices to 49 cents a song, a lot of existing customers would buy twice as many songs. Moreover, there are some people who are currently getting their music from illicit file-sharing networks, but would be enticed to buy from an online store at a lower price.
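To put toy numbers on that elasticity claim, here’s a quick sketch. The figures are mine and purely illustrative; nobody knows the true elasticity, which is exactly the open empirical question:

```python
# Toy revenue model for the elasticity argument above.
# All numbers are hypothetical, chosen only to illustrate the mechanism.

def revenue(price, base_price, base_units, elasticity):
    """Revenue under a constant-elasticity demand curve:
    quantity scales as (price / base_price) ** -elasticity."""
    units = base_units * (price / base_price) ** -elasticity
    return price * units

base_price, base_units = 0.99, 1_000_000  # say 1M songs sell at $0.99
elasticity = 2.0                          # assumed; >1 means demand is "elastic"

for price in (0.99, 0.49):
    total = revenue(price, base_price, base_units, elasticity)
    print(f"${price:.2f}/song -> revenue ${total:,.0f}")
```

With these assumed numbers, halving the price roughly quadruples unit sales, so revenue about doubles; with an elasticity below 1, the same arithmetic would cut revenue instead. That’s the bet each side is making.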
The same seems likely to be true with movies. Apple has priced movies at $10 to $15, in line with DVD prices, and other movie services seem to be converging on those price points as well. But without the production, distribution, and retail costs associated with shipping a plastic disc around, the marginal cost of getting a movie to consumers via the Internet is far less than $10. It’s likely that here, too, studios would sell a lot more movies at $5 than at $10.
I recommend Anderson’s book. It’s an entertaining read that’s packed with insights about the emerging long tail economy.
Ars covers an FCC filing by the National Cable & Telecommunications Association concerning the uptake of CableCARDs. The CableCARD has not proven a hit with consumers, to put it charitably. So far, 200,000 have been deployed, out of 73 million households with cable TV service. That’s about a quarter of one percent.
This is not a surprise. CableCARDs incorporate two of my least favorite things, digital rights management and government technology mandates, so I might be biased, but I have trouble seeing why anyone would want one. The cards were mandated by the FCC as a way of creating a competitive market in set-top-box replacements. The cable industry likes its set-top boxes, resents the FCC’s attempts to abolish them, and has done everything it can to resist the cards’ rollout. Its primary weapon has been foot-dragging. It released a first-generation CableCARD spec that was crippled by limited functionality, and more than a year after that first generation was unveiled, it remains unclear when the second generation will become available.
Remember the digital TV converter box subsidy? Last July, the Department of Commerce released for comment some fairly sensible rules for administering the program, given the constraints set out by Congress.
The deadline for public comments was this Monday, and–to no one’s surprise–quite a few commenters wanted more money. The broadcasters and TV manufacturers, for instance, complained that the program would be limited to households that do not have cable TV. “No television left behind,” was the unstated theme, as they expressed concern over disconnected televisions in basements across America.
A coalition of retailers–including firms such as Wal-Mart, Best Buy and Circuit City–supported this position. They argued for “leaving such issues to the marketplace, by letting those citizens who believe that they need a Converter apply for a coupon to get one…” This is indeed a novel reading of Adam Smith. Everyone who wants a subsidy should get one. It’s a variant of the invisible hand: outstretched and palm up.
But the retail stores didn’t stop there. They also argued that they should be directly compensated for accepting converter box coupons. The “investments, expenses, and risks,” they maintained, should not be placed “solely on the backs of retail vendors who come forward to participate in this program.”
Let’s recap. The DTV program will cause millions of consumers to drive over to their local Circuit City or Best Buy or Wal-Mart, coupon in hand, to buy converter boxes. The stores can charge whatever they want for these boxes, and they will be reimbursed by the government for the face value of the coupons. A fair number of these consumers, once in the able hands of the store sales staff, will no doubt end up buying brand-new digital televisions from the retailer instead of a puny converter box. And the stores want to be paid for the burden of handling all this additional business?
Nice try. But the argument is utter nonsense. The retail industry lobbyists should be congratulated for their creativity–and perhaps nominated for some lobbying chutzpah award. And then sent away empty-handed.
I hope the guys at Techdirt don’t mind me ripping off entire posts, because they’re too good, and too short, to excerpt:
Sometimes on the internet, things break. With so many pieces of network gear between a user, their ISP and a content provider’s servers, it’s not unreasonable that something goes down, gets misconfigured, or unplugged every once in a while. Something along those lines happened yesterday at Comcast, when a DNS server failed, temporarily blocking users from accessing Google and some other sites, and then the conspiracy theories started flying, with plenty of commenters fingering net neutrality even after the problem had been resolved and the truth of the equipment failure had come out. The upshot of this isn’t to point out trigger-happy commenters ready to jump all over ISPs before the truth comes out, but rather to illustrate just how difficult telcos have made it for themselves, should they ever actually go so far as to follow through on any of their inflammatory rhetoric about blocking or degrading the traffic of sites that won’t pay protection money. The tremendous amount of press this issue has gotten, fueled by the exaggerated and dishonest claims from people on both sides, has made a lot of consumers hyper-sensitive, imagining “net neutrality violations” where they don’t exist. It’s seemed pretty clear all along that any telco stupid enough to block access to something like Google in the middle of this highly charged debate would be shooting itself in the foot; but these sorts of reactions to network outages and problems reiterate that even if telcos have the right to demand payments from content providers and block traffic, doing so would be commercial suicide.
I think this illustrates the virtues of the Felten thesis: threatening to enact new regulations may be more effective than actually enacting them. Even if the pro-regulatory side ultimately loses the legislative battle, the mere fact that we had a big debate about it means that a lot more people are now paying attention to the importance of network neutrality principles, and it’s likely to intensify the backlash should the telcos do anything shady in the future.
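For readers who haven’t hit this failure mode first-hand, here’s a minimal sketch of why a dead DNS resolver is indistinguishable, from the user’s chair, from deliberate blocking (the hostname is just an example):

```python
# Minimal illustration of the failure mode in the Comcast episode above:
# if name resolution fails, a site appears "blocked" even though its
# servers are up and perfectly reachable by IP address.
import socket

def check(hostname: str) -> None:
    try:
        ip = socket.gethostbyname(hostname)  # asks the configured resolver
        print(f"{hostname} resolved to {ip}")
    except socket.gaierror as err:
        # A dead or misconfigured resolver lands here -- and to the user,
        # this looks exactly like intentional blocking.
        print(f"{hostname} failed to resolve: {err}")

check("www.google.com")
```

The user sees the same symptom either way, which is why outages get read as neutrality violations the moment the rhetoric has primed people to expect them.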
As I write this, Ed Felten is testifying before the House Administration Committee on e-voting. He recommends better physical security features, a voter-verified paper audit trail, and greater involvement of computer security experts. These are all good recommendations. One recommendation he doesn’t make, unfortunately, is that we consider scrapping e-voting altogether.
If there’s one message that comes through most clearly in his testimony, it’s “get the details right.” The word “detail” appears on every single page of the written testimony, and in five distinct cases he stresses the importance of paying attention to the implementation details of the security measures he recommends. He stresses that security measures that sound good in the abstract will be useless or worse if they’re implemented poorly.
I think he’s right, but here’s the problem: I don’t see any reason to think that the political process will ever be able to get the details right. Politics proceeds by 30-second soundbites. Congress-critters are too busy to delve deeply into the minutia of voting machine design. And, frankly, the people who tend to volunteer to be poll workers are not, on average, very smart.
If you’ve got a policy proposal that depends on the political process getting a lot of complex technical details right, you should probably find a better proposal. Our political institutions should be as fault-tolerant as possible, so that even if a lot of people screw up, the system will still work.
The Baltimore Sun opinion page recognizes that the REAL ID Act’s national ID system “will neither weed out terrorists nor make a dent in the flow of illegal immigration – the two problems it was devised to address.” In light of its exorbitant cost and the impossibility of implementing it, the paper’s advice is to junk the REAL ID Act.
Yesterday I argued that computerized voting was dangerous because it makes the voting process more centralized and less transparent. Today I’ll argue that open source voting is clearly better than proprietary computerized voting, but that paper ballots are preferable to either.
Open source voting software doesn’t do a whole lot to address the centralization issue. True, the development of the software would be decentralized, but the process of manufacturing the machines and loading the software onto them would still likely be handled by a commercial company that would constitute a single point of failure. If someone at the manufacturing facility is unscrupulous, or if someone finds a vulnerability in the software or hardware, he’s going to be just as able to compromise a large number of open source machines as he would with closed-source ones.
As for transparency, open source voting machines clearly enhance transparency in the sense that more people are able to study and criticize the design of the voting software. And that would certainly enhance security. It’s widely accepted among security professionals that openness and peer review are the best way to ensure a system’s security. If Diebold made the source code to its voting machines publicly available, it’s all but certain that security experts would have long since pointed out the flaws Felten discovered, and Diebold (I hope) would have fixed them.
Forget missing laptops. The hot issue in the computer world lately is burning laptops. That’s right: while thousands of government laptops have gone astray, some of the rest have burst into flames. The most recent incident was about a week ago, when a Lenovo ThinkPad at Los Angeles International Airport spontaneously caught fire, leading several airlines to ban the machines from flights, at least temporarily. The month before, a house burned down in Florida after a laptop sitting on a couch lit up. Bad batteries appear to be to blame, and several manufacturers have recalled them.
Now here’s where the story gets odd. Two days after the LAX fire, Greenpeace issued a report on laptops, urging manufacturers to “ditch” the fire retardants used in their products. Yes, that’s right: two days after news of another laptop fire, Greenpeace urged less, not more, use of fire retardants.
To be fair, the Greenpeace report only scored the use of a certain compound, a type of “brominated fire retardant,” which it says can be harmful in the waste stream. But there’s little evidence that the compound presents a significant risk. It can, however, save lives. Writes Dana Joel Gattuso, an adjunct analyst with the Competitive Enterprise Institute (and, for full disclosure, also my spouse):
…according to a growing body of research, the risks to human health and the environment are far greater in the absence of brominated flame retardants due to the increased chance of fire. A study by the Swedish National Testing and Research Institute compared the outbreak of fires in TV sets in Europe, where restrictions on the use of deca-BDE have already greatly limited its use on TVs produced and sold in Europe, to those manufactured in the United States, where there were no limits on its use at the time of the study. Using conservative estimates, the study found that 16 people die each year from TV fires in Europe, while in the U.S. there is no record of fatalities from TV fires.
Did these retardants make a difference in the recent laptop fires? I don’t know the answer. But, on the whole, chemicals like these do have a safety impact, and incidents like these help remind us why they are there. It all makes you wonder what Greenpeace would have said if laptops weren’t catching fire.
BusinessWeek reports that CinemaNow has delivered the Holy Grail of the online movie business: a mainstream movie (although, it must be said, not a very good movie) that consumers can purchase for $10 and burn to a DVD that can be played on an ordinary DVD player.
Well, sort of. BusinessWeek mentions in passing that CinemaNow licensed technology “from a German company” to copy-protect the DVDs. That made me skeptical, as the technical problem involved is quite challenging. As has been discussed on this site before, the copy protection on DVDs works by putting the disc’s encryption keys in a part of the disc that can’t be written to on the type of DVD-R media that’s available to the general public (known as “G” media). That means that if a PC tries to copy a DVD, it can read the keys, but it can’t write them to the new disc.
But what that really means is that home computers can’t create any encrypted DVDs that will play on DVD players, because the only encryption scheme those players support is the one that requires “A” media, which isn’t available to ordinary consumers. All a PC can do is generate an unencrypted movie. And that, Hollywood believes, would be an unacceptable piracy risk. So, I thought, this magical German technology must be awfully sketchy to do what it claims to do.
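To make the constraint concrete, here’s a toy model of the logic described above. This is my own illustrative sketch, not CinemaNow’s actual scheme or a real DVD-burning API:

```python
# Toy model of the CSS "lead-in" problem described above. Illustrative
# only -- not CinemaNow's scheme and not a real DVD API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Disc:
    video_sectors: bytes                  # the (encrypted) movie data
    lead_in_keys: Optional[bytes] = None  # CSS disc keys live in the lead-in area

def burn_to_consumer_dvd_r(source: Disc) -> Disc:
    """A burner using consumer 'G' media can copy the data sectors,
    but it cannot write key blocks into the lead-in area."""
    return Disc(video_sectors=source.video_sectors, lead_in_keys=None)

def player_can_decrypt(disc: Disc) -> bool:
    """A CSS-licensed player needs the lead-in keys to decrypt the video."""
    return disc.lead_in_keys is not None

pressed = Disc(video_sectors=b"<encrypted VOBs>", lead_in_keys=b"<disc key block>")
burned = burn_to_consumer_dvd_r(pressed)

print(player_can_decrypt(pressed))  # True: a pressed disc carries its keys
print(player_can_decrypt(burned))   # False: the copy has no keys, so an
                                    # encrypted movie on "G" media won't play
```

Which is why a $10 download that burns to a disc playable in any standard player sounded, at first blush, too good to be true.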