Luis Villa has an interesting post about the evolving understanding of open source software:
I’ve long thought that in open source software we are seeing a trend away from trust in an institution (think: Microsoft) and towards trust in ‘good luck’- i.e., in the statistical likelihood that if you fall, someone will catch you. In open source, this is most manifest in support- instead of calling a 1-800 # (where someone is guaranteed to help you, as long as you’re willing to be on hold for ages and pay sometimes very high charges), one emails a list, where no one is responsible for you, but yet a great percentage of the time, someone will answer anyway. There is no guarantee, but community practices evolve to make it statistically likely that help (or bug fixing, or whatever) will occur. The internet makes this possible- whereas in the past if you wanted free advice, you had to have a close friend with the right skills and free time, you can now draw from a much broader pool of people. If that pool is large enough (and in software, it appears to be) then it is a statistical matter that one of them is likely to have both the right skills and the right amount of free time.
Clay Shirky today makes an argument that this isn’t just something that is occurring in open source, but is hitting other fields of expertise as well: “My belief is that Wikipedia’s success dramatizes instead a change in the nature of authority, moving from trust inhering in guarantees offered by institutions to probabilities created by processes.” Instead of referring to a known expert to get at knowledge, you can ask Wikipedia- which is the output of a dialectic process which may fail in specific instances but which Clay seems to suggest can be trusted more than any one institution’s processes in the long run.
This is an excellent point, but it’s actually not a new one. Two examples that immediately spring to mind are Darwin’s On the Origin of Species and Friedrich Hayek’s The Road to Serfdom (and, more specifically, his subsequent essay “The Use of Knowledge in Society”). Darwin and Hayek each described decentralized processes in which the correctness of the result is produced by statistical processes, rather than by the good judgment of a trusted authority.
It’s interesting how people on the technology side of the media business tend to badmouth digital rights management technology even as they acquiesce to the content industry’s demands for it. We’ve seen how Steve Jobs bluntly admitted that DRM is not an effective piracy deterrent, just months before rolling out what became one of the world’s most widely deployed DRM schemes. And we’ve seen how Yahoo has pointed out to the labels that DRM does little more than inconvenience paying customers. Now Ashwin Navin, co-founder of the BitTorrent service, is badmouthing the concept even as his company implements it at the behest of Hollywood:
The reason it’s bad for content providers is because typically a DRM ties a user to one hardware platform, so if I buy all my music on iTunes, I can’t take that content to another hardware environment or another operating platform. There are a certain number of consumers who will be turned off by that, especially people who fear that they may invest in a lot of purchases on one platform today and be frustrated later when they try to switch to another platform, and be turned off with the whole experience. Or some users might not invest in any new content today because they’re not sure if they want to have an iPod for the rest of their life.
Quite so. The people who pay for your content are not the enemy, and it’s counterproductive to create headaches for them.
Hat tip: Ars Technica
Techdirt highlights an incredibly wrongheaded decision that was handed down this week by the Belgian courts:
In the ongoing case where a bunch of newspaper publishers are trying to force Google to pay them to index them and send them traffic (a move that has search engine optimizers worldwide wondering what they could possibly be thinking), Google appealed both parts of the ruling. The bigger issue (the indexing of and showing links to certain Belgian news sources) will be heard on appeal in November. However, on the issue of forcing Google to place the entire text of the legal order on the front of both google.be and news.google.be, the Belgian courts have turned down Google’s appeal, and said they will start fining the company if it does not place the entire text (with no commentary, either) on both websites. This seems drastic and entirely unnecessary for a variety of reasons. All it really seems to do is broadcast the backwardness with which Belgian news publishers view the internet. It makes you wonder… do Belgian publishers require libraries to pay them extra money to list their books in a card catalog? What this really highlights, however, is that there are still plenty of industries out there that don’t necessarily understand how the internet works–and that can cause all sorts of problems for internet companies who assume most people understand when things are being done for their benefit.
The legal issues here are pretty well settled on this side of the Atlantic. Deep linking has been repeatedly upheld by American courts, and site administrators have several ways to remove their sites from Google’s index and cache on request. The real issue is what the default should be: does Google have to get sites to opt in before indexing them, or do sites have to opt out? If the courts were to mandate opt-in, it would have a devastating impact on the search engine industry, because the logistics of getting permission from millions of individual site owners would likely be beyond the resources of all but the largest companies. If you want a stagnant search engine industry dominated by Microsoft, Google, and Yahoo, just set up copyright hurdles that make it virtually impossible for new firms to enter the market.
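The opt-out default, it’s worth noting, already has a simple and widely honored mechanism: the robots exclusion protocol. A publisher who doesn’t want a section of its site indexed can say so in a plain-text robots.txt file at the site’s root. A minimal sketch (the disallowed path here is purely illustrative):

```
# robots.txt at the site root:
# ask all well-behaved crawlers to skip the news section
User-agent: *
Disallow: /news/
```

Individual pages can opt out of caching, too: Google honors a “noarchive” directive in a page’s robots meta tag, which removes the “Cached” link without delisting the page. In other words, the Belgian publishers could have removed themselves from Google News with a few lines of text instead of a lawsuit.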
Update: It’s been pointed out to me that I should make clear the distinction between law and policy here. I have no idea whether the case was correctly decided as a matter of Belgian law, about which I know nothing. It’s quite possible that the Belgian courts decided this case correctly based on the laws on the books in Belgium. My point was simply that this decision is likely to have bad policy outcomes. I should have made that clearer.
Supporters of free markets and free speech in communications lost a friend this past week with the passing of Sol Schildhause at the age of 89. While perhaps not well-known to many today, Sol was for decades a fixture in the world of cable TV, serving as the first head of the FCC’s cable bureau from 1966 to 1974–where he fought against rules that protected broadcasters from cable TV competition–and later as an attorney and chairman of the Media Institute, where he worked tirelessly for competition in cable TV itself. He was particularly instrumental in the effort to end exclusive cable franchising on the grounds that it was an unconstitutional violation of free speech. The Supreme Court decision that resulted from those efforts established that cable television firms were entitled to First Amendment protection, although it stopped short of banning exclusive local franchising.
Schildhause always seemed the maverick in his work, a happy warrior fighting against the status quo. This was evident even during his years at the FCC, where he was far from your typical bureaucrat. Sometimes this caused difficulties, as related by Tom Hazlett (now of George Mason University) in a 1998 article for Reason Magazine entitled “Busy Work”:
Honestly, I don’t get it. How in the world does government lose so many laptop computers? I don’t know if you heard this yesterday but Sonoma County, CA authorities reported that they had lost one-time JonBenet Ramsey murder suspect John Mark Karr’s laptop, which supposedly contains evidence of child pornography that could have been used to help prosecute him. In other words, we basically bought this freak a free plane ride back from Thailand and then gave him a big “Get Out of Jail Free” card. Brilliant. How in the world do you lose the laptop of the guy who has been all over the news for the past month?
But wait, there’s more missing laptop news. In response to an inquiry from the House Committee on Government Reform, 17 federal agencies were asked to report any loss of computers holding sensitive personal information. The results, revealed yesterday, are staggering. According to Alan Sipress of The Washington Post: “More than 1,100 laptop computers have vanished from the Department of Commerce since 2001, including nearly 250 from the Census Bureau containing such personal information as names, incomes and Social Security numbers…” The Census Bureau’s lost laptops alone could have compromised the personal information of about 6,200 households. Apparently, according to MSNBC, “Fifteen handheld devices used to record survey data for testing processes in preparation for the 2010 Census also were lost, the [Census] department said.” (And you thought that the Census was accurate!) Other government departments reporting lost computers with personal information include the departments of Agriculture, Defense, Education, Energy, Health and Human Services and Transportation and the Federal Trade Commission.
Of course, all this comes on top of the lost laptop scandal over at the Department of Veterans Affairs this summer. One lost laptop contained unencrypted information on about 26.5 million people and another had information on about 38,000 hospital patients. And in August, the Department of Transportation revealed that a laptop containing roughly 133,000 drivers’ and pilots’ records (including Social Security numbers) had been stolen.
I honestly don’t understand how government agencies and officials keep losing all these laptops, but the next time they tell us we can trust them with personal information and other sensitive data, I hope we all remember these incidents. This is outrageous.
Every week, I look at a software patent that’s been in the news. You can see previous installments in the series here. Before I get to this week’s patent, I wanted to note that the Public Patent Foundation has launched Software Patent Watch, a new blog that tracks the software patent problem. On Tuesday they announced that the patent office has broken the all-time record for software patents in a single year, and is on track to issue 40,000 patents by year’s end. That’s more than 100 software patents per day.
Luckily, none of those tens of thousands of patents produced any high-profile litigation this week, so I thought I’d cover one of the classics of recent software patent litigation, Microsoft’s (and now Apple’s) legal battle with Burst.com. Burst sued Microsoft back in 2002, claiming that Microsoft’s Windows Media software violates its patents. Microsoft settled the dispute last year, and Burst turned its legal guns on Apple in April, claiming that Apple stole the same “technology.”
I think the hysterical tone of this article about the new restrictions in the latest version of the Windows Media Player DRM is unnecessary, but it makes some good points:
One of the problems with WiMP11 is licensing and backing it up. If you buy media with DRM infections, you can’t move the files from PC to PC, or at least you can’t and have them play on the new box. If you want the grand privilege of moving that content, you need to get the approval of the content mafia, sign your life away, and use the tools they give you. If you want to do it in other ways, you are either a lawbreaker or following the advice of J Allard. Wait, same thing.
So, in WiMP10, you just backed up your licenses, and stored them in a safe place. Buying DRM infections gets you a bunch of bits and a promise not to sue, but really nothing more. The content mafia will do anything in its power, from buying government to rootkitting you in order to protect those bits, and backing them up leaves a minor loophole while affording the user a whole lot of protection.
Guess which one wins, minor loophole or major consumer rights? Yes, WiMP11 will no longer allow you the privilege of backing up your licenses, they are tied to a single device, and if you lose it, you are really SOL.
We hear a lot about how DRM is a contract. But what kind of contract allows one party to unilaterally and retroactively change its terms?
Moreover, this is really a pretty severe restriction on the use of digital files. Backups are a fundamental part of good computer use. I back up my data at least once a month. I use my laptop pretty heavily, and a little bit abusively, and I rely on the fact that if my hard drive dies (or is lost or stolen) I’ll be able to get my data from backups.
In some cases, if you ask really nicely, the store that sold you the files will permit you to access the files again. But it’s clear that they do this out of the goodness of their hearts: “Some stores do not permit you to restore media usage rights at all.”
Is it any wonder that Windows Media-based music stores are going down in flames?
Supporters of neutrality regulation often claim the mantle of defenders of free speech. Even the pending Senate telecom bill–which largely avoids comprehensive neutrality rules–includes a section on “Application of the First Amendment,” stating that no ISP may limit content based on “religious views, political views, or any other views expressed in such content.”
The problem, however, is that the First Amendment covers governmental, not private, restrictions on speech. Moreover, as Randy May of Maryland’s Free State Foundation argues this week in Broadcasting and Cable magazine, such limits may violate–rather than further–First Amendment principles. As he points out:
Under traditional First Amendment jurisprudence, it is as much a free-speech infringement to compel an entity to convey messages it does not wish to convey as it is to prevent it from conveying messages it wishes to convey.
Going further, he says:
…When you think about it, laws imposing “neutrality” are eerily reminiscent of the defunct Fairness Doctrine that required broadcasters to present a balanced view of controversial issues.
The last point is particularly interesting. Given that a fair number of neutrality regulation proponents have also argued for the Fairness Doctrine, one wonders if they would disagree with the comparison.
A fuller version of May’s argument was published by the Free State Foundation here. Worth reading.
In parts 8 and 12 of this series, I’ve discussed Time Warner’s ongoing problems in what was supposed to be mass media paradise. The mega-merger that critics decried as “Big Brother,” “the end of the independent press,” and a harbinger of a “new totalitarianism” has turned out to be anything but. $100 billion in lost market cap by 2003 alone, AOL bleeding subscribers, and talk of spinning off the cable division have all led Time Warner President Jeff Bewkes to declare the death of “synergy.” More poignantly, he went so far as to call synergy “bullshit”!
And now the oldest members of this marriage, Time and Warner, may actually be considering a divorce too. Just last week Time announced that it was putting 18 of its 50 magazines up for sale. And, according to David Carr of the New York Times, the fire sale may not be over:
“[C]urrent realities and pressure from shareholders suggest that Time Inc. will either become a smaller, more profitable division of a public company or it will be in play. A very large boat will have to be turned around very quickly with little additional investment. There will be no big magazine start-ups, no significant acquisitions, only the grinding, dangerous task of taking some of the most storied brands in publishing and making them relevant at a time of rapidly changing consumer and advertising dynamics.”
It’s just another sign of how dynamic the media marketplace really is. See my last book for more details.
Techdirt is reporting that Maryland Governor Ehrlich has come out against the use of electronic voting machines in this year’s elections. I agree with Mike:
The rationale for keeping the machines also leaves us scratching our heads: “We paid millions. These are state-of-the-art machines.” Two responses: The evidence is pretty clear that these are not state of the art machines. They’re badly made, with ridiculously weak security, and a company behind them that bullies its critics, blatantly misleads in its responses to security problems and cracks jokes about their weak security when confronted. Therefore, it really doesn’t matter how many millions you spent on them, the machines are a problem. The Senate President also accused Ehrlich of simply using this issue as a political ploy to rally his supporters. By the way, for those of you who want to believe e-voting is simply a big Republican conspiracy (based on some offhand remarks by Diebold’s former chief), we should note that Ehrlich (who wants to scrap the machine) is a Republican, and the folks who want to keep the machines are Democrats. So, once again, we’ll note that this is not a partisan issue. It’s an issue about having secure, fair and accurate voting.
Quite so. Computers are very useful for a wide variety of tasks, but merely putting a computer in something does not make it “state of the art.” These are defective voting machines, they put the integrity of the election at risk, and so they shouldn’t be used no matter how many bells and whistles they might have. Hopefully Ehrlich’s announcement will be the start of a trend.