May 2008

Malcolm Gladwell has an engaging write-up of Intellectual Ventures, a kind of reductio ad absurdum of the patent system:

In August of 2003, I.V. held its first invention session, and it was a revelation. “Afterward, Nathan kept saying, ‘There are so many inventions,’ ” Wood recalled. “He thought if we came up with a half-dozen good ideas it would be great, and we came up with somewhere between fifty and a hundred. I said to him, ‘But you had eight people in that room who are seasoned inventors. Weren’t you expecting a multiplier effect?’ And he said, ‘Yeah, but it was more than multiplicity.’ Not even Nathan had any idea of what it was going to be like.”

The original expectation was that I.V. would file a hundred patents a year. Currently, it’s filing five hundred a year. It has a backlog of three thousand ideas. Wood said that he once attended a two-day invention session presided over by Jung, and after the first day the group went out to dinner. “So Edward took his people out, plus me,” Wood said. “And the eight of us sat down at a table and the attorney said, ‘Do you mind if I record the evening?’ And we all said no, of course not. We sat there. It was a long dinner. I thought we were lightly chewing the rag. But the next day the attorney comes up with eight single-spaced pages flagging thirty-six different inventions from dinner. Dinner.”

As Mike points out, the blindingly obvious conclusion from this is that patents are way, way too easy to get. If a room full of smart people—even absolutely brilliant people—can come up with 36 “inventions” in one evening, the logical conclusion is that “inventions” are not rare or hard to produce, and that therefore there’s no good public policy reason to offer monopolies to people who invent them. After all, the classic theory of patent law says just the opposite: that inventions are so difficult and expensive to produce that we wouldn’t get them at all without patent protection. That’s clearly not true of the “inventions” IV is developing, which means that if IV does get patents on them, the patent system is seriously flawed.

As I have argued many times before (see 1, 2, 3, 4), some sort of usage-based bandwidth metering or consumption cap makes a lot of sense as a way to deal with broadband network traffic management. So, if this is the direction that Comcast is heading–and this recent Broadband Reports piece suggests that it is–that is fine with me. The article says it might work as follows:

A Comcast insider tells me the company is considering implementing very clear monthly caps, and may begin charging overage fees for customers who cross them. While still in the early stages of development, the plan — as it stands now — would work like this: all users get a 250GB per month cap. Users would get one free “slip up” in a twelve month period, after which users would pay a $15 charge for each 10 GB over the cap they travel. According to the source, the plan has “a lot of momentum behind it,” and initial testing is slated to begin in a month or two.

“The intent appears to be to go after the people who consistently download far more than the typical user without hurting those who may have a really big month infrequently,” says an insider familiar with the project, who prefers to remain anonymous. “As far as I am aware, uploads are not affected, at least not initially.” According to this source, the new system should only impact some 14,000 customers out of Comcast’s 14.1 million users (i.e. the top 0.1%).
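To make the quoted plan concrete, here is a minimal sketch of the billing math as described. The cap, block size, fee, and free "slip up" come straight from the report; the function name and the assumption that partial 10 GB blocks round up are mine, not Comcast's:

```python
import math

# Figures from the quoted Broadband Reports piece. The assumption that
# partial 10 GB blocks round up (and the names) are mine, not Comcast's.
CAP_GB = 250             # monthly cap
BLOCK_GB = 10            # billing increment over the cap
FEE_PER_BLOCK = 15       # dollars per 10 GB block over the cap
FREE_SLIPS_PER_YEAR = 1  # one free "slip up" per twelve months

def monthly_overage_fee(usage_gb, slips_used_this_year):
    """Return (fee_in_dollars, updated_slip_count) for one month's usage."""
    if usage_gb <= CAP_GB:
        return 0, slips_used_this_year
    if slips_used_this_year < FREE_SLIPS_PER_YEAR:
        return 0, slips_used_this_year + 1  # the free "slip up" absorbs it
    blocks_over = math.ceil((usage_gb - CAP_GB) / BLOCK_GB)
    return FEE_PER_BLOCK * blocks_over, slips_used_this_year

# A 300 GB month once the free slip-up is spent: 50 GB over -> 5 blocks -> $75.
print(monthly_overage_fee(300, 1))  # (75, 1)
```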

It’s always been my hope that we could head off burdensome Net neutrality regulations by encouraging carriers to deal with the problem of excessive bandwidth consumption using time-tested price discrimination solutions instead of the sort of packet management techniques that are the subject of such heated debate today. Of course, on one of our old podcasts on Net neutrality issues, Richard Bennett pointed out to me that this still might not obviate the need for other types of traffic management techniques. And he also pointed out that the very small subset of true bandwidth hogs are almost entirely heavy BitTorrent users, so perhaps the way Comcast was dealing with them was just another way of skinning the same cat.

We have some very savvy contributors and readers here at the TLF, so I am hoping that some of you can help me out regarding a data search I’m struggling with. I am seeking a definitive database of blog stats. I am hoping that somebody out there has been tracking blog growth regularly and has aggregated yearly data going back a few years. I want to chart this growth as part of my ongoing “Media Metrics” series, but I want to make sure that the numbers I am using are accurate.

Since 2003, I have been relying on the occasional reports about the “State of the Blogosphere” that Dave Sifry of Technorati has been putting together. Lots of good info in those reports, but (a) it is not standardized (the totals are from random months); and (b) he stopped producing it last year (the last report is from April 2007). There are also other questions I have not been able to figure out: Should the totals include individual profiles at social networking sites? If so, how would they be counted? How are “splogs” (spam blogs) defined and factored (or not) into these totals? Should Twitter and other forms of “micro-media” factor in?

Regardless, I have put together the following chart using the numbers from Dave Sifry’s old reports as well as the latest numbers that Technorati lists on the “About Us” tab. The latest number is an astounding 112 million blogs, and according to Technorati data, “there are over 175,000 new blogs (that’s just blogs) every day. Bloggers update their blogs regularly to the tune of over 1.6 million posts per day, or over 18 updates a second.” That’s impressive, but I would love to see if anyone else has competing numbers.
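For what it’s worth, the quoted figures are internally consistent; a quick back-of-the-envelope check using only the numbers above:

```python
# Sanity-checking the quoted Technorati figures against each other.
posts_per_day = 1_600_000
new_blogs_per_day = 175_000
seconds_per_day = 24 * 60 * 60                # 86,400

print(posts_per_day / seconds_per_day)        # ~18.5 posts per second
print(new_blogs_per_day / seconds_per_day)    # ~2 new blogs per second
```

The 1.6 million posts per day do indeed work out to just over 18 updates a second, so at least the arithmetic holds up even if the underlying counts are debatable.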

Anyone have any thoughts on this? I would really appreciate any input here. One would think that something as important as the growth of the blogosphere would be better tracked by someone out there. Or perhaps someone is tracking it very closely and I just haven’t seen it because the data is proprietary? (like Gartner? or eMarketer?)

Blog Growth

Larry Magid in the San Jose Mercury News on the limits of the age verification proposals being discussed to protect kids online. A quote:

Some attorneys general want to see the electronic equivalent of showing an ID at the door. . . But Sentinel Chief Executive John Cardillo told me age- and identity-verification schemes typically rely on credit reports and other data that is accessible for most adults but generally not available for people under 17. One could, in theory, access school, birth or Social Security records, but for a variety of good reasons, these databases are off-limits to private entities. . . . [E]ven if age verification is possible, I still question whether it’s desirable. I worry about some teens – including victims and youths questioning their sexual identity – being harmed because they’re denied access to online support services that could help them or even save their lives.

Similar arguments about age verification, and about the harm of restricting minors’ access to the Internet, were made in the 1997/1998 challenge to the Communications Decency Act’s restrictions on online “indecency.” At the time, age verification was judged impracticable, and the Supreme Court’s ruling upholding free speech rights online rested in some part on that conclusion. If age verification proposals now move forward, the issue may be revisited.

Free speech is among the healthier protections in the Bill of Rights. But the area of minors continues to be troublesome. Challenges to “indecency” laws are brought on behalf of the adult audience, who are in effect restricted to child-safe material even though the law is intended only to protect kids. Almost everyone seems to accept without question the premise that such challenges may not be brought on behalf of the children themselves. The lack of attention to this point seems to stem, first, from the observation that of course children have no free speech rights as against their parents or other private caretakers. If I tell The Grub to hush, he must hush, or I will give his racetrack a time-out. But this power of parents and their delegates is a common law matter. It ought to have nothing to do with the resolution of the constitutional question: the free speech rights of kids as against the government.

One now contends with a second argument: it is common sense that children (differently at different ages) must have different rights than adults; they are not all at the same stage of development, and so on. Again, the same objection applies. One is not dealing with the common law of contracts here; this has to do with constitutional rights. The Constitution says “Congress shall make no law….” It is a restriction on the power of government, and it gives government no special powers with respect to minors. If Congress in fact has such powers, where does it get them? And if it can get them somewhere else, what is the limit?

Imagine making the “children can’t possibly have the same rights as adults” argument with respect to other provisions of the Bill of Rights. The Establishment Clause, for example: government may not establish a religion for adults, but if we read exceptions for children into the Bill of Rights, evidently it *can* establish such a religion for children? That is nonsense. Similarly the right against self-incrimination: it is not okay for the police to torture adults, but if we read exceptions into the Bill of Rights for children, evidently it is okay for the police to torture children? More nonsense. May Congress take the property of children without compensation? Subject them to cruel and unusual punishments? It seems to me that if the answer to any of these questions is “no,” then it must be “no” to all of them (including, to the outrage of many, the right to bear arms and the right to trial by jury). Congress has no more power over kids than it has over adults. May the next free speech challenge to age verification be brought on behalf of the kids themselves.

Much hemming and hawing and lawyering will follow. I know, I know.


Well, this is gonna be a sure-fire sign of my uber-geekiness, but I gotta say I just love Scribd’s “iPaper” service, which allows anyone to upload and share just about any type of document with the rest of the world. Think of it as YouTube or Flickr for nerds who want to share their papers and PowerPoints even more than their pictures or videos.

Like Flickr and YouTube, Scribd offers users the ability to embed things directly into blogs like this. Below, for example, I have embedded my recent slide show presentation at Penn State University’s conference on the future of video games. If you play around with the buttons on the top of the iPaper player, you will see how easy it is to resize the embedded document, search within it for specific items, download or email it, print it out, and so on. Super cool. I hope my TLF colleagues will join me in using this great tool more here on our site. I plan on posting a lot more things here this way in the future. (And I swear I didn’t get paid by Scribd to say any of this!)

Read this doc on Scribd: Video Games presentation (PDF format)

The National Cable & Telecommunications Association blog did a series of posts back in February about the OECD study. There seem to be three basic criticisms. First, businesses in the US have a higher proportion of “special access” lines than in the other countries ranked, and these are not counted in the statistics, while businesses with ordinary DSL lines are counted. Second, the OECD statistics measure “connections per 100 inhabitants” rather than the proportion of households with an Internet connection. The result is to penalize the US, which has a larger-than-average household size (all of whom can share a single Internet connection), while giving an edge to countries with smaller households. Finally, the report relies on advertised speeds and prices, which the NCTA suggests exaggerates Japan’s lead relative to a metric based on the speeds actually available there. Obviously, the NCTA has an agenda to promote, so it’s worth taking these criticisms with a grain of salt, but they’re interesting in any event. Thanks to reader Wyatt Ditzler for the link.
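To see why the second criticism matters, consider a toy example. The 80% household penetration rate and the household sizes below are invented for illustration (2.6 is roughly a US-like average household size): two countries where the same share of households subscribe land in quite different spots on a per-inhabitant ranking.

```python
# Toy illustration of the household-size effect. The 80% household
# penetration and the household sizes are invented for illustration;
# 2.6 is roughly a US-like average household size.
def connections_per_100_inhabitants(household_penetration, avg_household_size):
    # One subscription per subscribing household, shared by everyone in it.
    return 100 * household_penetration / avg_household_size

print(connections_per_100_inhabitants(0.80, 2.6))  # ~30.8 per 100 inhabitants
print(connections_per_100_inhabitants(0.80, 2.1))  # ~38.1 per 100 inhabitants
```

Same household-level connectivity, a roughly 25% difference in the ranked number.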

Slashdot recently linked to this comparison of the cost of Windows in Brazil and the US. This brings to mind a point I think I’ve seen Mike make: beyond the general point that libertarians should celebrate free software because it’s an example of non-coercive production of public goods, libertarians also have reasons to like free software because it’s more resistant to the coercive power of the state. When software is produced by a commercial company and sold in the marketplace, it’s relatively easy for the state to tax and regulate it. Commercial companies tend to be reflexively law-abiding, and they can afford the lawyers necessary to collect taxes or comply with complex regulatory schemes.

In contrast, free software is likely to prove strongly resistant to state interference. Because virtually everyone associated with a free software project is a volunteer, the state cannot easily compel them to participate in tax and regulatory schemes. Any attempt to tax or regulate such projects is likely to be met with passive resistance: people will simply stop contributing rather than waste time dealing with the government.

Free software thus has the salutary effect of depriving the state of tax revenue. But even better, it is likely to prove extremely resistant to state efforts to build privacy-violating features into software systems. CALEA requires telecom infrastructure to include hooks for eavesdropping by government officials, but it would prove extremely difficult to get similar hooks added to free software. No one is likely to volunteer to add such a “feature,” and even if the state added it itself, it would have no realistic way to force people to use its version.

Can a company have a Freudian slip? If it’s possible, L-1 Identity Solutions has committed one.

In a promotional brochure for REAL ID Act “solutions,” it implicitly touts the ability to track people by race and by political party. This is not required by the REAL ID Act, but it’s not barred by it either.

In my testimony to Congress and in a post here, I pointed out the concern that REAL ID could be used for racial tracking. Political party is a new one, but who knows what would happen should the system be implemented.

Excerpt of L-1 REAL ID promotion

OECD vs. SpeedTest


Nate Anderson points to a new report on broadband around the world that I’m looking forward to reading. I have to say I’m skeptical of this sort of thing, though:

Critics of the current US approach to spurring broadband deployment and adoption point out that the country has been falling on most broadband metrics throughout the decade. One of the most reliable, that issued by the OECD, shows the US falling from 4th place in 2001 to 15th place in 2007. While this ranking in particular has come under criticism from staunchly pro-market groups, the ITIF’s analysis shows that these numbers are the most accurate we have. According to an ITIF analysis of various OECD surveys, the US is in 15th place worldwide and it lags numerous other countries in price, speed, and availability—a trifecta of lost opportunities.

With an average broadband speed of 4.9Mbps, the US is being Chariots of Fire-d by South Korea (49.5Mbps), Japan (63.6Mbps), Finland (21.7Mbps), Sweden (16.8Mbps), and France (17.6Mbps), among others. Not only that, but the price paid per megabyte in the US ($2.83) is substantially higher than those countries, all of which come in at less than $0.50 per megabyte.

Now, this site is a tool for measuring the speed of your broadband connection, and it purports to have data from around the world. I have no idea how reliable their methodology is generally, or how good their testing equipment is around the world, but I’ve used it in several different places in the US and it at least seems reliable around here. According to their measurements, the US has an average broadband speed of 5.3 Mbps, roughly what the OECD study said. But the numbers for the other countries cited are wildly different: Japan is 13 Mbps, Sweden is 8.7 Mbps, South Korea is 6.1 Mbps, and France is 5.5 Mbps. If these numbers are right, the US is behind Sweden and Japan, and slightly behind South Korea and France, but we’re not nearly as far behind the curve as the OECD reports would suggest.
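Putting the two sets of figures side by side makes the disagreement concrete. Everything below comes from the quote and the SpeedTest measurements above; the “ratio” column is simply the OECD figure divided by the SpeedTest one:

```python
# Average broadband speeds in Mbps: (OECD/ITIF figure, SpeedTest figure),
# both taken from the numbers quoted above.
speeds = {
    "Japan":       (63.6, 13.0),
    "South Korea": (49.5,  6.1),
    "Sweden":      (16.8,  8.7),
    "France":      (17.6,  5.5),
    "US":          ( 4.9,  5.3),
}

for country, (oecd, speedtest) in speeds.items():
    print(f"{country:12s} OECD {oecd:5.1f}  SpeedTest {speedtest:4.1f}  "
          f"ratio {oecd / speedtest:.1f}x")
```

Notice the pattern: the gap runs from about 2x to 8x for the countries said to be far ahead, but is roughly nil for the US. That is exactly what you would expect if the OECD figures lean heavily on advertised rather than measured speeds.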

And then there’s this:

The ITIF warns against simply implementing the policies that have worked for other countries, however, and it notes that a good percentage of the difference can be chalked up to non-policy factors like density. For instance, more than half of all South Koreans lived in apartment buildings that are much easier to wire with fiber connections than are the sprawling American suburbs.

Now, I haven’t examined SpeedTest’s methodology, so they might have all sorts of problems that make their results suspect. But it’s at least one data point suggesting that the OECD data might be flawed. And I think the very fact that there seems to be only one widely cited ranking out there ought to make us somewhat suspicious of its findings. Scott Wallsten had bad things to say about the OECD numbers on our podcast. Is there other work out there analyzing the quality of the OECD rankings?

This analysis from IPA in Australia suggests not.

http://www.ipa.org.au/publications/publisting_detail.asp?pubid=822

This analysis draws on two recent studies of fair trade to conclude that it is just not what it is cracked up to be. One example of the studies’ findings:

[Drawing on data] from the US-based Transfair, fair trade advocates conceded that fair trade producers provided lower grade coffee for sale through the fair trade system. Fair trade producers sell their best coffee on the free market when it commands a higher speciality price than fair trade. Producers then sell their lower-grade coffee through the fair trade system, where they receive a guaranteed price. They do this because there is an oversupply of fair trade coffee and an undersupply of buyers for fair trade coffee.
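The selection logic that passage describes boils down to a simple routing rule: each lot of coffee goes to whichever channel pays more, so the guaranteed floor ends up collecting the lots the open market values least. A toy sketch, with all prices invented for illustration:

```python
# Toy model of the routing rule described above; all prices are invented.
FAIR_TRADE_FLOOR = 1.40  # hypothetical guaranteed price per pound

# Each lot's open-market speciality price; better grades fetch more.
lots = [("top grade", 1.90), ("mid grade", 1.55), ("low grade", 1.10)]

for name, market_price in lots:
    channel = "open market" if market_price > FAIR_TRADE_FLOOR else "fair trade"
    print(f"{name}: {channel}")
# Only the low-grade lot ends up in the fair trade channel -- exactly the
# adverse selection the studies describe.
```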

The IPA analysis concludes that “new studies demonstrate that the evidence supporting fair trade’s contribution to development for the world’s poor is dubious, at best. The studies also show that fair trade creates a number of problems for fair trade and non-fair trade producers.”