IT&T News is a great publication that features many excellent articles by a variety of free-market policy experts. But I found this article on e-voting, by PRI’s Vince Vasquez, rather disappointing:
The e-voting experience has been a resounding success that has generated relatively few complaints from the electorate. To be sure, there were some legitimate problems with DRE machines on November 7, but many have been found to be man-made, such as innocent user error, inept poll workers, or ineffective planning by local election authorities. Unfortunately, these human-based fumbles have opened the doors for open-source zealots, wide-eyed activists, and crafty politicians who want to scrap DREs for the 2008 elections.
I’m not a politician, and I actually don’t think that open source would solve what’s wrong with e-voting, so by process of elimination, I must be a “wide-eyed activist.” I bet Ed Felten and Avi Rubin–both widely respected computer scientists–would be surprised to learn that they, too, are “wide-eyed activists.”
After busting out that sort of inflammatory rhetoric, you would think that Mr. Vasquez would have some pretty compelling refutations of us wide-eyed activists. But he doesn’t even mention–much less address–any of the actual arguments that e-voting critics make against computerized voting. No mention of the fact that DREs are less transparent, harder to audit, and more susceptible to wide-scale (rather than local) fraud than paper ballots. No mention of the current debacle in Florida, the various reports of problems with e-voting machines, or the fact that computer security researchers have actually demonstrated that some e-voting machines are vulnerable to vote-stealing viruses.
Nope, all we get is vague arguments about how “digital red tape and risky industry requirements jeopardizes the value of these innovative machines.” (Why are they innovative? Because there are computers in them!) And overheated rhetoric about “feeding the country’s voting system to ideological lions.” There might be some good arguments for using DREs, but Mr. Vasquez doesn’t seem to have any.
I’m excited to announce that Brooke Oberwetter is joining the TLF team. Brooke has been a friend of mine since we worked together at Cato. She’s one of the sharpest and funniest people I know. Brooke earned my admiration for her tireless (and sadly, futile) fight to stop the smoking ban in DC. Also, with the possible exception of Julian, she throws the best parties in DC.
And (despite my occasional nitpicking) she has many interesting and worthwhile things to say about tech policy. She’s a policy analyst at the Competitive Enterprise Institute, and she tells me her work at CEI will be more focused on tech policy in the coming months. She’s currently pursuing a master’s degree in public policy at American University, and she also blogs at the CEI blog and her personal blog.
Lawrence Ebert says that my article in The American didn’t quite get the Federal Circuit’s obviousness test right:
Of the “specific documentation” point, Lee wrote: “[The CAFC] held that when a patent covers the combination of two elements, it can be declared obvious only if someone can produce another patent, an academic paper, or other formal documentation that pre-dated the patent application and had a specific ‘teaching, suggestion, or motivation’ to combine the elements in the manner described.”
Lee is wrong in stating that specific documentation must be found. In the CAFC case of In re Kotzab, decided in the year 2000 long before KSR v. Teleflex, the CAFC wrote:
“the teaching, motivation, or suggestion may be implicit from the prior art as a whole, rather than expressly stated in the references…” In re Kotzab, 217 F.3d 1365, 1370 (CAFC 2000)
I stand corrected. Mr. Ebert has a JD and I do not, and he doubtless knows this area of the law much better than I do. In my defense, however, I think I’m in good company: in oral arguments, the Supreme Court justices seemed pretty confused by the Federal Circuit’s precedents themselves. If Justice Breyer finds the TSM test confusing, I don’t feel too bad about getting it wrong myself.
In any event, I appreciate Mr. Ebert’s taking the time to point this out.
Michael Arrington at TechCrunch has posted two videos relating to the iPhone. The first is an interview with Steve Ballmer that gives good insight into the state of competition in the device market. Ballmer scoffs at the iPhone’s high price point (and lack of current availability), while conceding that MSFT is behind in MP3 players. The second video, though, shows just why the iPhone is likely to do well. It may be high-priced, but it has incredible capabilities. Do check it out.
Bret Swanson of The Discovery Institute has an important piece on Net Neutrality in today’s Wall Street Journal entitled “The Coming Exaflood.” He’s referring to the increasing flood of exabyte-level traffic (especially from high-def video) that could begin clogging the Net in coming years unless broadband networks are built out and upgraded to handle it. He states:
“[Net neutrality supporters] now want to repeat all the investment-killing mistakes of the late 1990s, in the form of new legislation and FCC regulation… This ignores the experience of the recent past–and worse, the needs of the future.
Think of this. Each year the original content on the world’s radio, cable and broadcast television channels adds up to about 75 petabytes of data–or, 10 to the 15th power. If current estimates are correct, the two-year-old YouTube streams that much data in about three months. But a shift to high-definition video clips by YouTube users would flood the Internet with enough data to more than double the traffic of the entire cybersphere. And YouTube is just one company with one application that is itself only in its infancy. Given the growth of video cameras around the world, we could soon produce five exabytes of amateur video annually. Upgrades to high-definition will in time increase that number by another order of magnitude to some 50 exabytes or more, or 10 times the Internet’s current yearly traffic.
We will increasingly share these videos with the world. And even if we do not share them, we will back them up at remote data storage facilities. I just began using a service called Mozy that each night at 3 a.m. automatically scans and backs up the gigabytes worth of documents and photos on my PCs. My home computers are now mirrored at a data center in Utah. One way or another, these videos will thus traverse the net at least once, and possibly, in the case of a YouTube hit, hundreds of thousands of times.
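Just for scale, here’s a back-of-the-envelope restatement of Swanson’s figures. This is my own sketch: the only numbers in it are the ones quoted above, and the “current Internet traffic” line is simply inferred from his “10 times” comparison.

```python
# Rough sanity check of the figures quoted above. Every input comes from the
# op-ed; the "current yearly traffic" value is inferred from the claim that
# 50 exabytes would be "10 times the Internet's current yearly traffic."

PB = 10**15  # bytes in a petabyte
EB = 10**18  # bytes in an exabyte

broadcast_per_year = 75 * PB                # original radio/cable/TV content per year
youtube_per_year = broadcast_per_year * 4   # YouTube streams that much "in about three months"

amateur_video_per_year = 5 * EB                    # projected amateur video, pre-HD
amateur_hd_per_year = amateur_video_per_year * 10  # "another order of magnitude" for HD

implied_current_traffic = amateur_hd_per_year / 10  # per the "10 times" comparison

print(f"YouTube today:            ~{youtube_per_year / PB:.0f} PB/year")
print(f"Amateur HD video (proj.): ~{amateur_hd_per_year / EB:.0f} EB/year")
print(f"Implied current traffic:  ~{implied_current_traffic / EB:.0f} EB/year")
```

On those rough numbers, projected amateur HD video alone would be about ten times today’s entire Internet traffic, which is exactly the scaling problem Swanson is worried about.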
Ordinarily, my software patent series focuses on patents that have been granted by the patent office and have been the subject of litigation. I’m going to break that pattern this week because reader Richard Bennett pointed to one of his own patent applications as an example of a worthwhile software patent. Since I frequently ask supporters of software patents to point out a good one (a request that’s almost always ignored), I thought I’d analyze Bennett’s application to see what we can learn. Below the cut are my thoughts on it.
Continue reading →
Today, the Progress & Freedom Foundation released a “Tech Agenda for 2007” containing ten policy recommendations for the 110th Congress and the FCC. It’s a set of market-oriented proposals covering a wide array of Digital Economy issues. What follows is just a brief summary of the 10 priorities we came up with. Please review the complete study for our detailed recommendations:
1 – Renew fundamental reforms of communications regulations.
2 – Leave network neutrality concerns to the market and antitrust.
3 – Leave content business models and fair use to the market.
4 – When addressing patents, take a first-principles approach to property and innovation.
5 – Enact meaningful reform of archaic media ownership laws and regulations that hinder media marketplace experimentation.
6 – Pursue greater First Amendment parity among modern media providers by leveling the playing field in the direction of greater freedom for all operators/platforms.
7 – Subject data security and privacy proposals to careful benefit-cost analysis, including full examination of consumer benefits from services and technologies affected by these proposals.
8 – Promote pro-competitive, non-regulatory Internet governance.
9 – Avoid open-ended, intrusive data retention mandates.
10 – Promote more efficient taxation of telecom services and Internet sales.
I hope every TLF reader is also a Techdirt reader, but in case some of you missed it, I wanted to point out that Mike Masnick is doing a fantastic series on post-scarcity economics. Here’s a taste:
Throw an infinity into the supply of a good and the supply/demand curve is going to toss out a price of zero (sounds familiar, right?). Again, the first assumption is to assume the system is broken and to look for ways to artificially limit supply.
However, the mistake here is to look at the market in a manner that is way too simplified. Markets aren’t just dynamic things that constantly change, but they also impact other markets. Any good that is a component of another good may be a finished good for the seller, but for the buyer it’s a resource that has a cost. The more costly that resource is, the more expensive it is to make that other good. The impact flows throughout the economy. If the inputs get cheaper, that makes the finished goods cheaper, which open up more opportunities for greater economic development. That means that even if you have an infinite good in one market, not all the markets it touches on are also infinite. However, the infinite good suddenly becomes a really useful and cheap resource in all those other markets.
So the trick to embracing infinite goods isn’t in limiting the infinite nature of them, but in rethinking how you view them. Instead of looking at them as goods to sell, look at them as inputs into something else. In other words, rather than thinking of them as a product the market is pressuring you to price at $0, recognize they’re an infinite resource that is available for you to use freely in other products and markets. When looked at that way, the infinite nature of the goods is no longer a problem, but a tremendous resource to be exploited. It almost becomes difficult to believe that people would actively try to limit an infinitely exploitable resource, but they do so because they don’t understand infinity and don’t look at the good as a resource.
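To make that “infinite inputs” point concrete, here’s a toy calculation of my own (the numbers are invented purely for illustration; they aren’t Mike’s): in a competitive market, price tends toward marginal cost, so driving one input’s cost to zero makes every downstream good that uses it cheaper.

```python
# Toy illustration (my own numbers): when a competitive price tracks marginal
# cost, an "infinite" (zero-marginal-cost) input lowers the cost of every
# finished good built on top of it.

def marginal_cost(input_costs, value_added):
    """Cost of producing one more unit of a finished good."""
    return sum(input_costs) + value_added

scarce_input = 4.00    # e.g. bandwidth, hardware, studio time
digital_input = 3.00   # e.g. a licensed recording or software component
value_added = 2.00     # curation, packaging, service

before = marginal_cost([scarce_input, digital_input], value_added)  # 9.00

# Treat the digital good as an infinite resource: its competitive price falls to zero.
digital_input = 0.00
after = marginal_cost([scarce_input, digital_input], value_added)   # 6.00

print(f"finished good: ${before:.2f} -> ${after:.2f} per unit")
```

The infinite good stops being a pricing problem in its own market and becomes a cheap resource in every market that uses it as an input, which is precisely Mike’s point.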
Of course, the concern is that information resources only become infinite after the first copy is produced, and that first copy might not be produced absent artificially constrained supply. But I expect Mike will argue that this condition occurs less often than the standard economic model suggests–that people can show surprising ingenuity in finding ways to profit from their intellectual creations even without the benefit of a legal monopoly.
I want to second Jim’s recommendation that you read his Regulation article discussing PFF’s new book on network neutrality regulation. He argues persuasively that each side in the network neutrality debate gives the other too little credit:
It is hard to pin down what exactly the Internet is. There are several versions, with convergence around the idea that the things making up the Internet can be described as a series of layers. At the bottom, there is the physical layer–the wires, cables, and fibers that Internet communications travel over. Next there is the logical layer, the routing rules that send packets of data from origin to destination. Next there is the application layer–the programs that people use to create content and send it from one place to another. (Think of e-mail programs, browsers, and the like.) Finally, there is the content layer. This is the actual material people send to each other in those e-mails, the websites that show up on their screens, and so on.
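Here’s a minimal sketch of that layered picture in code; the example technologies are my own illustrations, not drawn from Jim’s article.

```python
# The four layers described above, bottom to top; the example technologies
# are illustrative assumptions, not taken from the article.

internet_layers = [
    ("physical",    "wires, cables, and fiber that physically carry the bits"),
    ("logical",     "routing rules (e.g. IP and BGP) that move packets from origin to destination"),
    ("application", "programs used to create and send content: e-mail clients, web browsers"),
    ("content",     "the actual messages, web pages, and video that people exchange"),
]

# A single e-mail touches every layer: content composed in an application,
# routed by the logical layer, carried over the physical layer.
for name, description in internet_layers:
    print(f"{name:>11}: {description}")
```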
Here’s the most interesting claim in the lawsuit filed by parents against MySpace alleging its negligent failure to protect their daughters:
14. Plaintiffs allege and are prepared to show proof that, at all times relevant to the claims alleged herein, said parents were variously too busy, preoccupied, or self-absorbed to attend to their ordinary parenting duties. Alternatively and additionally, the willfulness and independence of their victim children was intimidating and exhausting, for which reason responsibility for defending and guarding the interests of said victims shifted to defendant MySpace.
/satire