Sage advice from Brooke.
I’m not going to name names, but I find it particularly disturbing when people who work in tech policy refer to individual blog posts as “blogs.” The blog is the medium, not the message; calling a post a “blog” is the equivalent of calling an article in the Washington Post a newspaper, as in, “Hey, did you read that newspaper in the Washington Post this morning about new FDA regulations on over-the-counter pain relievers? Boy, that Matthew Perrone sure can write a newspaper!”
I’m glad she wrote that blog to make sure no one was confused.
Related to my previous post, I think it’s no coincidence that the Samba team has taken the lead in criticizing the Microsoft-Novell deal. Some commentators have argued that the free software movement objects to the deal because they want to prevent interoperability between free and proprietary software, thereby forcing vendors to choose sides. But that clearly can’t be right, because Samba’s raison d’être is (as their slogan says) “opening Windows to a wider world.” If the free software movement were trying to prevent compatibility with proprietary software, you would expect the Samba team to be on the other side, urging restraint and cooperation with Microsoft. That clearly hasn’t happened.
I suspect that what is happening is that the Samba guys are terrified that Microsoft will use patent law to put them out of business. They’re particularly vulnerable to patent claims because their software is designed to interoperate with Windows, which of necessity means that they have to mimic many features of Microsoft’s own software in order to achieve compatibility.
Naturally, Microsoft has never liked the fact that people could interoperate with Windows without paying Microsoft for the privilege. The Samba guys know this. So they expect they’d be among the first targets should Microsoft make a concerted effort to use the patent system against the free software movement.
Update: My software patent series will be taking the week off in observance of the holidays.
Another bit of fallout from Novell’s patent agreement with Microsoft, as Samba developer Jeremy Allison quits Novell. He was quickly snapped up by Google. I’ve never heard of the guy, but Ars calls him “prominent,” and Groklaw calls him “legendary.” His letter said, in part:
As many of you will guess, this is due to the Microsoft/Novell patent agreement, which I believe is a mistake and will be damaging to Novell’s success in the future. But my main issue with this deal is I believe that even if it does not violate the letter of the licence it violates the intent of the GPL licence the Samba code is released under, which is to treat all recipients of the code equally…
The Microsoft patent agreement has put us outside the community, and there is no positive aspect to that fact, and no way to make it so. Until the patent provision is revoked, we are pariahs.
Given that the ability to recruit and retain talent is crucial to the success of software firms, this sort of defection is likely to prove an effective way to enforce the GPL without the need to resort to the courts. Indeed, it’s a more powerful mechanism than the courts, because efforts like Novell’s to squeak by on a technicality aren’t going to fly. Being perceived as violating the spirit of the GPL is just as damaging as violating the letter of it.
Over at Catallarchy, Sean Lynch has a tirade against Wikipedia:
Wikipedia is an excellent example of when crowds are not wise; one’s actual knowledge has no correlation whatsoever with how much effort they’re willing to put out to keep Wikipedia accurate, and some of my recent experience there seems to indicate exactly the opposite, that people who know what they’re talking about have better things to do with their time than sit around all day fixing incorrect information in Wikipedia, whereas know-it-alls will spend lots of time “fixing” correct information that they disagree with. The other group who may not be know-it-alls are the rules nazis who care more about form than accuracy. These are the people who show up at all the HOA meetings to complain that your curtains are the wrong color when meanwhile the pipes are leaking and about to burst.
Recently I went back to the Wikipedia page on hydrogen peroxide out of curiosity to see if some fixes I’d made to dangerously inaccurate information on the page had survived. They had not. The same bullshit that I’d originally corrected (bullshit that could kill someone) had been returned. Rather than attempt to fix it again because the bullshit was now scattered throughout the article, I simply added a notation under “hazards” warning people that the article could be edited by anyone and that they should consult a source with actual accountability for safety information. My warning was reverted within minutes, with the notation on the edit referring me to a page entitled “What Wikipedia is Not” and suggesting that I use my knowledge to fix the information on the page. Well, I already had fixed the page, and some moron with far more certainty than knowledge had gone and screwed it up again. In addition, the “What Wikipedia Is Not” page mentions nothing about safety or accountability.
I’ve gone from merely thinking Wikipedia doesn’t live up to its name to believing that it is a complete joke.
I’ve responded to this general argument on several occasions, so I won’t rehash those arguments here. But one of the interesting things about Lynch’s post is the attitude of entitlement it seems to reflect.
Related to my last post, it occurs to me that there are a lot of businesses that drink from one fire hose or another, and then sell the resulting expertise to people who are too busy to drink from the fire hose themselves. Free software firms and college professors are two such examples. Our friends at TechDirt are another instance of the same phenomenon.
Their “fire hose” is the world of tech news. Between formal news sites like CNet and ZDNet and the blogosphere, keeping up with the conversation about technology, business, policy, and the like is more than a full-time job. I spend a couple of hours a day reading blogs that focus on tech policy, and I’m nowhere near keeping up with all the worthwhile tech policy blogs out there. And I don’t even try to keep up with sites that focus on tech business and Silicon Valley gossip. You can get a sense of the size of this particular fire hose by perusing TechMeme, a site that aggregates the most popular stories in the tech blogosphere at any given moment. It would be a full-time job just to read every post that gets linked to from TechMeme.
David Robinson, managing editor of The American, has a great article arguing that soaring spending on higher education is something to celebrate:
Modern academics often liken their work to drinking from a fire hose. Historians, philosophers, and physicists all find it impossible to keep up with every potentially relevant paper or study. It’s not just a matter of catching up to the state of the art–one couldn’t even read the research materials in an academic field as fast as they are being produced. Inevitably, this leads scholars to retreat further and further into sub-specialization, narrowing the horizon of what counts as “relevant,” of what their fields consist in. But the side effect of this constant, fractal division of the range of human knowledge is that more and more scholars are needed to cover the same range of topics. A hundred years ago, a biologist could plausibly aspire to know all the important theories and facts contained within the field of biology. But today, there are people working on genetics, proteomics, virology, ecology, and a host of other fields, each of which is a full-time, fully mind-absorbing pursuit in its own right.
This all makes sense once one recognizes that professors are the conduits carrying our accumulated knowledge into the present. Having access to something that is written in a book is not the same thing as knowing it. In order for knowledge to be available and useful here and now, someone must be practically familiar with it. And the more knowledge there is to “cover,” as it were, with practical familiarity, the greater the number of scholars needed to complete a university. This means both more professors now and a greater number of those honors undergrads, training for the professoriate. A greater throughput of accumulated knowledge among successive generations requires an ever-increasing number of conduits.
I think this observation applies equally well to the software world. As software simultaneously gets more complex and cheaper, getting access to a piece of software will be a less and less important part of the overall cost of using it. That was certainly true when I worked as a webmaster in college–keeping up with all the changes in web technology was a full-time job.
You should check out Ed Felten’s excellent rebuttal of Nick Carr’s contention that sites like MySpace are “sharecropping” their users–enticing them to create valuable content which MySpace then profits from. Felten points out that users in fact get considerable value from their MySpace content. His conclusion:
The most interesting assumption Carr makes is that MySpace is capturing most of the value created by its users’ contributions. Isn’t it possible that MySpace’s profit is small, compared to the value that its users get from using the site?
Underlying all of this, perhaps, is a common but irrational discomfort with transactions where no cash changes hands. It’s the same discomfort we see in some weak critiques of open-source, which look at a free-market transaction involving copyright licenses and somehow see a telltale tinge of socialism, just because no cash changes hands in the transaction. MySpace makes a deal with its users. Based on the users’ behavior, they seem to like the deal.
As I’ve written before, markets don’t require money, and many non-market activities are perfectly compatible with a free, capitalistic society.
Mike at Techdirt has more.
I’ve just finished reading Cato’s new paper on predictive data mining as an anti-terrorism strategy, which co-author Jim Harper discussed last week. It is excellent, and I encourage you to read it. I found this part particularly interesting:
The terrorists not only operated in plain sight, they were interconnected. They lived together, shared P.O. boxes and frequent flyer numbers, used the same credit card numbers to make airline travel reservations, and made reservations using common addresses and contact phone numbers. For example, al-Mihdhar and Nawaf al-Hazmi lived together in San Diego. Hamza al-Ghamdi and Mohand al-Shehri rented Box 260 at a Mail Boxes Etc. for a year in Delray Beach, Florida. Hani Hanjour and Majed Moqed rented an apartment together at 486 Union Avenue, Patterson, New Jersey. Atta stayed with Marwan al-Shehhi at the Hamlet Country Club in Delray Beach, Florida. Later, they checked into the Panther Inn in Deerfield Beach together.
Gigi Sohn had a great response to the Wall Street Journal‘s December 1 editorial, printed in the letters section of Thursday’s Journal:
There are legitimate and legal uses for posting (typically small portions of) copyrighted material, including for public comment and criticism–guaranteed to the public by a limitation on copyright called fair use. For purposes of fair use, someone posting material online does not need an author’s permission; imagine if a movie critic needed to ask a studio’s permission to critique a movie demonstrated by showing a clip. Google’s service indexes hundreds of thousands of pages of book texts, all to provide brief passages of context in response to a searcher’s specific query. Unless a book is in the public domain or otherwise permitted by the publisher, Google Book Search does not provide the entire text of a book online. Using just enough of a book to show the results of a search is a perfect example of fair use.
Your editorial advocates an unacceptable culture of control. Google and YouTube exist despite individual infringers, not because of them. Your version of rigorous copyright enforcement would prevent tech innovators like Google from giving users new ways to create and access content, while providing no new incentives for content innovators to create. Fair uses of home taping didn’t kill music, video recording didn’t kill TV or movies, and Google and YouTube aren’t going to do it either. These are legitimate fair uses of copyrighted works for which our society is better off, not worse.
It’s important that people understand that Google Book Search displays only a tiny fraction of a book’s content. Google’s critics seem to be under the impression that Google Book Search allows you to view entire in-copyright books. If everyone understood that the software in fact displays only a handful of excerpts, each just a couple of sentences long, I think almost any reasonable person would recognize that Google’s in the right. It’s only because the publishers have created the misperception that Google is distributing entire books without their permission that they’ve gotten a sympathetic ear from the likes of the WSJ editorial board.
Relatedly, Sohn’s point that Google and YouTube have succeeded despite the infringing activity of individual users, not because of it, is important. Unlike Grokster, YouTube hosts an enormous amount of genuinely non-infringing material. The service would continue to be widely used if all the infringing material were taken down. There’s certainly room for debate about how much of the burden of policing infringing content should fall to YouTube, but the more important point is that copyright law should not shut down a fundamentally legitimate service because a minority uses it for illegal purposes.
Via Amanda, MSNBC reports that the FCC is withholding extremely useful cell phone outage data for fear it will aid terrorists:
Any time a carrier has an outage that affects 900,000 caller minutes–say a 30-minute outage impacting 30,000 customers–it must report it to the Network Outage Reporting System. In the beginning, the reports all were from “wire line” telephone providers and were available to the public. But in 2004, the commission ordered wireless firms to supply outage reports as well. But at the same time, it removed all outage reports from public view and exempted them from the Freedom of Information Act. The FCC took the action at the urging of the Department of Homeland Security, which argued that publication of the reports would “jeopardize our security efforts.”
As Amanda puts it:
It’s unclear how terrorists would use this information; perhaps with an appeal to the same magic force that would let them use an ounce of shampoo in an 8-ounce bottle to take down an airplane. But it sure is clear how this policy benefits the cellular companies.
Relatedly, Bram Cohen quotes a friend who says that information about power grid outages is no longer published for the same reason.