Miscellaneous

TechLawJournal has a thorough analysis of Justice John Paul Stevens’ opinions in technology-related areas. I reproduce it here with permission. (Tim Lee’s shorter Cato@Liberty post about Justice Stevens’ legacy in tech is here.)

Justice John Paul Stevens, who has served on the Supreme Court since 1975, announced on April 9, 2010, that he will retire when the Court completes its current term this summer. This article reviews his contributions to technology-related areas of law.

Outline of Article:
1. Summary.
2. Copyright Cases.
3. State Immunity in IPR Cases.
4. Patent Cases.
5. Communications Cases.
6. Internet Speech Cases.
7. Privacy Cases.
8. Other Cases.

1. Summary.

Justice Stevens wrote the majority opinion in the 1984 landmark Sony Betamax case. It was a 5-4 opinion. He joined the unanimous 2005 opinion in MGM v. Grokster, regarding vicarious copyright infringement by the distributors of peer-to-peer systems. He wrote a long and vigorous dissent in Eldred, the 7-2 case regarding the Copyright Term Extension Act.

Justice Stevens led the fight against extending sovereign immunity to states for violation of, among other things, intellectual property laws. He dissented from the outset, and never considered the Court to be constrained by the doctrine of stare decisis. However, his concern was with the conservatives' interpretation of states' rights, not incentivizing the creation of intellectual property.

Oh yeah, that was me. And a lot of others. Well, we were wrong. The mobile app store market (Apple, Android, etc.) is brimming with a bonanza of micro-business opportunities for producers and consumers alike. I am consistently amazed by the range of offerings available today, the vast majority of which remain free of charge. But what is more impressive is the growing array of applications and games available for mere pennies. Sure, some cost more than a buck, but not that much more. I was just looking through the 40+ apps that I've got on my Droid right now (not really sure how many I've downloaded overall, since I've deleted a lot), and I would guess that I paid for at least 25% of them, many after being "upsold" by first trying the free versions and then buying. Yes, I know there continues to be a debate about what counts as a "micropayment," but the fact that so many more people are paying a couple of bucks or less for content in these mobile app stores suggests that it's only going to get easier for people to pay even smaller sums for content in coming years.

What got me thinking about all this was slide #75 in Mary Meeker's latest slideshow about Internet trends. The Morgan Stanley web guru notes that users are more willing to pay for content on mobile devices than they are on desktop computers for a number of reasons; the first one she listed was: "Easy-to-Use/Secure Payment Systems — embedded systems like carrier billing and iTunes allow real-time payment." The important point here is that slick, well-organized online app stores and secure, super-easy billing systems have combined to overcome the so-called "mental transaction cost problem," at least to some extent. We're not nearly as likely today to surf away when something says "$0.99" on our screen. Increasingly, we're hitting the "Buy" button.

Sorry for another job board posting, but I wanted to see if anyone had any leads on this open position at the Progress & Freedom Foundation. We're looking to hire a new Vice President of Development & Outreach to help us craft a public policy agenda for the organization and find support for it going forward.

The complete job description can be found online here or down below the fold. Interested candidates should contact me directly.

Several years ago at a conference on universal telecommunications service, one panel moderator noted, “Everything that can be said about universal service has already been said, but not everybody has had a chance to say it, so that’s why we still have these conferences.” After hearings and a study by the Federal Trade Commission, a Federal Communications Commission Notice of Inquiry during the previous administration, the National Broadband Plan, the FCC’s still-open Open Internet proceeding, and Wednesday’s extension of the reply comment period in the Open Internet proceeding, net neutrality is starting to have the same vibe.

That’s why, instead of virtually killing some more virtual trees by writing more lengthy comments and replies, Jerry Brito and I signed on to a declaration by telecommunications researchers which explains that there is no empirical evidence of a systemic problem that would justify net neutrality rules, and that such rules might actually ban practices that benefit consumers. Since the world probably doesn’t need another blog post rehashing arguments about this issue, I’ll simply point you to the comment here. It was masterfully written by economist Jeff Eisenach, a veteran of the Federal Trade Commission. (The teeming throngs of humanity who are curious to know whether Jerry and I have any original thoughts to contribute to the issue can read this CommLaw Conspectus article.)

Now that I’ve gotten the shameless self-promotion out of the way, let me MoveOn to a broader point. The debate over net neutrality illustrates how important it is to identify and demonstrate the nature of the problem before trying to solve it.  This applies whether the issue is net neutrality or health care or financial market regulation. Two points in particular bear repeating.

First, ensure that there is empirical evidence of a system-wide problem. The arguments for net neutrality are based on concerns about things the broadband companies might have the ability to do, not empirical proof of widespread abuses that have actually occurred. Less than a handful of famous anecdotes support the argument for net neutrality. Sweeping, system-wide policy changes should only occur when a sweeping, system-wide problem actually exists.

Second, understand the actual nature of the problem. Have a coherent theory of cause and effect that explains why the problem occurs, with reasoning consistent with what we know about human behavior. Ignoring this point has led to some odd decisions on issues far afield from net neutrality. In 2009, for example, the Department of Energy proposed energy efficiency standards for clothes washers to be used in laundromats and apartment buildings. The justification for the regulation assumed that greedy business owners and landlords willfully ignored opportunities to earn higher profits by investing in energy-efficient appliances! One might argue about whether consumers always identify and act on opportunities to save energy, but assuming that businesses will ignore opportunities to save money is a much bigger stretch.
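Underneath the rhetoric, the washer question is just a payback calculation. Here is a minimal sketch in Python, with entirely invented prices and savings (none of these figures come from the Department of Energy proposal), of the comparison a profit-seeking laundromat owner or landlord would make:

```python
# Hypothetical payback comparison for a commercial clothes washer.
# All figures are invented for illustration; they are not from the
# Department of Energy proposal discussed above.

standard_price = 600.00        # purchase price of a standard washer ($)
efficient_price = 850.00       # purchase price of an efficient washer ($)
annual_energy_savings = 90.00  # yearly utility savings from efficiency ($)
service_life_years = 10        # expected machine lifetime

extra_cost = efficient_price - standard_price
payback_years = extra_cost / annual_energy_savings
lifetime_net_gain = annual_energy_savings * service_life_years - extra_cost

print(f"Extra up-front cost: ${extra_cost:,.0f}")
print(f"Payback period:      {payback_years:.1f} years")
print(f"Lifetime net gain:   ${lifetime_net_gain:,.0f}")
# With these invented numbers, the efficient washer pays for itself in
# under three years and nets $650 over its life -- exactly the kind of
# profit opportunity the regulation assumes businesses ignore.
```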

If you don’t get the problem right, you won’t get the solution right!

The Federal Communications Commission keeps grabbing and the judges keep slapping its hands.

The big news today is that a federal appeals court has ruled the FCC has no legal authority to regulate the Internet. This throws the entire FCC broadband policy agenda into turmoil.

The decision, by the United States Court of Appeals for the District of Columbia Circuit, concerns sanctions the FCC imposed on Comcast after the cable company slowed down the rate of transfer for certain peer-to-peer files using the BitTorrent protocol. Although Comcast and BitTorrent settled the dispute, the FCC nonetheless sought to fine Comcast for violating the FCC’s network neutrality guidelines against content discrimination. Comcast sued, claiming it had the right to manage its own network to serve the interest of the 95 percent of its customers who don’t use BitTorrent.

Interesting upcoming event on April 21st at Georgetown University about “Digital Power and Its Discontents.” It’s described as: “A one-day conference exploring the ways digital technologies disrupt the balance of power between and among states, their citizens and the private sector.” Evgeny Morozov of Georgetown’s Institute for the Study of Diplomacy, which is organizing the event, was kind enough to invite me to participate on the first panel of the day. And I see that my fellow TLF blogger Jerry Brito of the Mercatus Center will be on another panel. Other panelists include: John Morris of CDT, Micah Sifry of the Personal Democracy Forum, Mark MacCarthy of Georgetown Univ., Rebecca MacKinnon, Joel Reidenberg of Fordham Law, Amb. Philip Verveer, and several others.

The event will be held on Wednesday, April 21, 2010, from 10:30 a.m. to 4:30 p.m. at the Georgetown University Mortara Center for International Affairs (3600 N Street, N.W.). You’ll find the complete agenda down below. It sounds like a terrific event. RSVP here.

As I mentioned before, I’ve been actively seeking a replacement in my role as President of The Progress & Freedom Foundation.  I’ve already grown tired of managerial duties, fundraising responsibilities, and so on.  More importantly, it is slowly but surely destroying my ability to be a full-time policy wonk and focus all my energies on making the case for free minds, free markets, and free speech. I’m quite ready and willing to hand over the keys to someone else so I can spend all my time fighting the good fight. I just need to find the right person.

So, if you know of someone who would make a great leader, with strong free-market credentials and extensive experience in the field of high-tech policy and media/communications law, please let me know. They can contact me at athierer[at]pff.org or call PFF at 202-289-8928.

The city of Bellingham, Washington lies close to the Canadian border. It is a sleepy town of 70,000 or so with a decent-sized university, a pleasant waterfront, and a charming downtown. (Full disclosure: the author attended said university a decade ago.)

The town’s motto is “the city of subdued excitement,” something that probably better fits a description of this author than the town, but whatever.

I did, however, get a kick out of the video that city leaders spent $5K putting together to accompany the city’s application for the Google Fiber project. I love a good broadband connection as much as the next guy, but the video, while done in a very professional manner, made my hair stand on end. For one thing, Bellingham already has good broadband networks, including Clear’s WiMAX, numerous coffee shops with complimentary WiFi, a networked university system, etc. We’re not dealing with backwoods hicks here or stone-cobbled streets.

But I suppose a video looks less desperate than changing the name of your city.

[Video: “Google Fiber: Put the G in Bellingham”]

From our bulletin board at home:

[Cartoon: “April Fool’s Cartoon About Freedom to Innovate”]

This cartoon takes its inspiration from a conversation—a real gut-buster!—that I had with my kids. April would have foolishness enough, given that dread date smack in its middle, without April Fool’s Day. You can thus take this joke seriously.

[Crossposted at Agoraphilia, TechLiberation Front.]

Broadband Baselines

The national broadband plan drafted by Federal Communications Commission staff has a lot of goals in it. Goals for broadband infrastructure deployment include:

  1. Make broadband with 4 Mbps download speeds available to every American
  2. Over the long term, have broadband with 100 Mbps download and 50 Mbps upload speeds available to 100 million American homes, with 50 Mbps downloads available to 100 million homes by 2015
  3. Have the fastest and most extensive wireless broadband networks in the world
  4. Ensure that no state lags significantly behind in 3G wireless coverage
  5. Ensure that every community has access to 1 Gbps broadband service in institutions like schools, libraries, and hospitals

The plan also outlines a number of policy steps that the FCC and other federal agencies could take to help accomplish these goals.

So far, so good. But to truly hold federal agencies accountable for achieving these objectives, we need more than goals, measures, and a list of policy proposals. We also need a realistic baseline that tells us how the market is likely to progress toward these goals in the absence of new federal action, and some way to determine how much the specific policy initiatives affect the amount of the goal achieved.

Here’s what will happen in the absence of a well-defined baseline and analysis that shows how much improvement in the goals is actually caused by federal policies: The broadband plan announces goals. The government will take some actions. Measurement will show that broadband deployment improved, moving the nation closer to achieving the goals. The FCC and other decisionmakers will then claim that their chosen policies have succeeded, because broadband deployment improved.

But in the absence of proof that the policies cause a measurable change in outcomes, this is like the rooster claiming that his crowing makes the sun rise. Logicians call this the “post hoc, ergo propter hoc” fallacy: “B happened after A, therefore A must have caused B.” (Brush up on your Latin a little more, and you’ll even find out what Mercatus means. But I digress.)

Enough abstractions. Let me give a few examples.

The first goal listed above is to ensure that all Americans have access to broadband with 4 Mbps download speeds. In his second comment on my March 17 “Broadband Funding Gap” post, James Riso notes that the plan acknowledges that 5 million of the 7 million households that currently lack access to 4 Mbps broadband will soon be covered by 4th generation wireless. That means coverage for roughly 71 percent of the households that lack 4 Mbps broadband is already “baked into the cake.”

Accurate accountability must avoid giving future policy changes credit for this increase in deployment, because it was going to happen anyway.  (Of course, policymakers need to avoid taking steps that would discourage this deployment, such as levying the 15 percent universal service fee on 4th generation wireless.) The relevant question for evaluating future policy changes is, “How do they affect deployment to the remaining 2 million households?”
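To make the accounting concrete, here is a minimal sketch in Python contrasting naive before-and-after credit with baseline-adjusted credit. The 7 million and 5 million figures are the ones discussed above; the “later measurement” of 1 million households is a hypothetical number, invented purely for illustration:

```python
# Baseline accounting for the 4 Mbps availability goal.
unserved_now = 7_000_000           # households lacking 4 Mbps today
covered_by_planned_4g = 5_000_000  # already "baked into the cake"

baseline_remaining = unserved_now - covered_by_planned_4g  # 2,000,000

# Hypothetical later measurement: suppose 1 million households
# remain unserved after the new policies take effect.
unserved_later = 1_000_000

# Naive before/after accounting credits the new policies with all
# 6 million newly served households...
naive_credit = unserved_now - unserved_later

# ...while baseline-adjusted accounting credits only deployment
# beyond what was already going to happen anyway.
adjusted_credit = baseline_remaining - unserved_later

print(f"Naive credit:             {naive_credit:,} households")
print(f"Baseline-adjusted credit: {adjusted_credit:,} households")
```

On these hypothetical numbers, the naive approach overstates the policies’ contribution by a factor of six, which is the rooster-and-sunrise problem in miniature.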

Similarly, the goal of 50 Mbps to 100 million households by 2015 seems to have been chosen because cable and fiber broadband providers indicate that they plan to cover more than that many homes by 2013 with broadband capable of delivering those speeds (pp. 21-22). Future policy initiatives should get zero credit for contributing toward this goal unless analysis demonstrates that the initiatives increased deployment of very high speed broadband over and above what the companies were already planning.

If you think this point is so basic that it’s not worth mentioning, you haven’t read enough government reports. Post hoc, ergo propter hoc is endemic, and not just on technology-related topics. For example, both sides regularly display this fallacy whenever the unemployment figures get released: “Unemployment increased after Obama’s election, therefore his administration caused the unemployment.” “The recession started when Bush was president, therefore his administration caused the unemployment.” These are at best hypotheses whose truth, untruth, and quantitative significance need to be established by analysis that controls for other factors affecting the results.

Just take this as an advance warning on reporting results of the national broadband plan: Tone down the triumphalism.  

Note: For those of you who just can’t get enough discussion of the national broadband plan, Jerry Brito and I will have a dialog on other aspects of the plan in a future podcast that will be available here on Surprisinglyfree.com.