Naturally, now that the government plans to intervene in the economy with a massive stimulus package, everyone wants their “fair” share. Robert D. Atkinson, president of the Information Technology and Innovation Foundation, is arguing for digitized health records, a smart power grid, and faster broadband connections:
While creating jobs by upgrading the nation’s physical infrastructure may help in the short term, Mr. Atkinson says, “there’s another category of stimulus you could call innovation or digital stimulus — ‘stimovation,’ as a colleague has referred to it.” Although many economists believe that a stimulus package must be timely, targeted and temporary, Mr. Atkinson’s organization argues that a fourth adjective — transformative — may be the most important. Transformative stimulus investments, he said, lead to economic growth that wouldn’t be there otherwise.
A new report by the Information Technology and Innovation Foundation [to be released Wednesday] presents the case for investing $30 billion in the nation’s digital infrastructure, including health information technology, broadband Internet access and the so-called smart grid, an effort to infuse detailed digital intelligence into the electricity distribution grid.
And a Silicon Valley petition calls for a tax credit for companies that spend more than 80 percent of what they had been spending annually on information technology like computers and software.
Usually, when politicians hand out targeted tax breaks or grants, there are strings attached.
Free Press is already proposing that Internet services receiving subsidies “must be an open, freely competitive platform for ideas and commerce.” It’s possible that no one would accept the subsidies to build the network Free Press envisions. Continue reading →
The FCC’s much-maligned proposal to create a free, filtered wireless broadband network seemed all but dead earlier this week after FCC Chairman Kevin Martin stated in an interview with Broadcasting & Cable that the proposal’s chances of surviving a full FCC vote were “dim.”
Now, Ars reports that Kevin Martin has changed his mind about the filtering requirements, caving in to pressure from an array of interest groups to drop the smut-free provisions from the plan. These “family-friendly” rules, which would have mandated that the network filter any content deemed unsuitable for a five-year-old, ended up acting as a lightning rod for critics across the ideological spectrum, and raised serious First Amendment concerns (as Adam and Berin have argued on several occasions).
Even with the smut-free rules removed, the proposal remains a very bad idea. Setting aside 25 MHz of the airwaves—a $2 billion chunk of spectrum—to blanket the nation with free wireless broadband (as defined by the FCC) would mean less spectrum available for more robust services. At a time when wireless firms are experimenting with a number of strategies for monetizing the airwaves, allowing a single firm’s business model—especially one that many experts have suggested is simply not viable—to reign over other, more effective models would hurt consumers who yearn for more than basic broadband service.
The case for setting spectrum aside for free wireless broadband is predicated on the myth that there exists an elusive “public interest” that the marketplace is unable to maximize. We’ve heard the same line many times before. It goes something like this: The forces of competition that we rely upon to allocate finite resources in nearly every other sector of the economy are incapable of fulfilling consumer needs when it comes to broadband. Washington DC intellectuals have figured out that the public really wants a free nationwide wireless network—yet this amazing concept has been blocked by evil incumbents that are bent on denying consumers the services they most desire.
Continue reading →
On the Google Public Policy Blog, Richard Whitt’s response to the recent Wall Street Journal article (of now considerable infamy) fails to mention one of the primary benefits of Google’s OpenEdge caching program. Whitt only mentions the following benefits:
By bringing YouTube videos and other content physically closer to end users, site operators can improve page load times for videos and Web pages. In addition, these solutions help broadband providers by minimizing the need to send traffic outside of their networks and reducing congestion on the Internet’s backbones.
What Whitt doesn’t say is that caching programs like Google’s have the potential to dramatically reduce the total traffic on tier-one and tier-two carriers (networks that peer, or exchange data without charge, with other networks). But this traffic reduction is one of the biggest benefits Google’s program provides to the rest of the Net.
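To see why, consider a rough back-of-the-envelope sketch of the mechanism. The numbers below are purely illustrative assumptions (not Google’s or any carrier’s actual figures); the point is simply that whatever a cache inside an ISP’s network serves locally never has to cross the upstream peering and transit links:

```python
# Rough, purely illustrative model of how an in-network cache reduces the
# traffic an ISP must exchange with upstream (tier-one/tier-two) carriers.
# Every number here is an assumption made for the sake of the arithmetic.

monthly_video_demand_tb = 500.0   # video requested by the ISP's subscribers (assumed)
cache_hit_rate = 0.80             # share of requests served from the in-network cache (assumed)
cache_fill_tb = 25.0              # traffic needed to populate and refresh the cache (assumed)

# Without an edge cache, every byte of demand crosses the upstream links.
upstream_without_cache = monthly_video_demand_tb

# With a cache inside the ISP's network, only cache misses plus the cache-fill
# traffic leave the network and touch the backbone carriers.
upstream_with_cache = monthly_video_demand_tb * (1 - cache_hit_rate) + cache_fill_tb

savings = upstream_without_cache - upstream_with_cache
print(f"Upstream traffic without cache: {upstream_without_cache:.0f} TB/month")
print(f"Upstream traffic with cache:    {upstream_with_cache:.0f} TB/month")
print(f"Backbone traffic avoided:       {savings:.0f} TB/month "
      f"({savings / upstream_without_cache:.0%} reduction)")
```

Under those made-up assumptions, three-quarters of the traffic that would otherwise ride the backbones simply never leaves the ISP’s network.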
How would OpenEdge do this? Let me explain using a bit of anecdotal evidence.
Continue reading →
The Washington Post reports today that Yahoo! has changed its data retention policy to anonymize user behavior information after 3 months, rather than its previous, much lengthier retention window of 13 months.
This move by Yahoo! is likely a response both to consumer demand for greater privacy protection and to pressure from government regulators in the US and the EU. Google and Microsoft recently tightened their own retention policies after facing similar pressure.
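For readers wondering what “anonymizing” log data actually involves, here is a minimal sketch of the general technique. It is my own illustration with made-up field names, not Yahoo!’s actual pipeline: records older than the retention window keep their behavioral content but lose, or irreversibly hash, the fields that tie them to an individual.

```python
# Minimal sketch (not Yahoo!'s actual process) of a retention policy that
# anonymizes search-log records once they pass a fixed age threshold.
import hashlib
from datetime import datetime

RETENTION_DAYS = 90  # roughly the new three-month window; an assumption for illustration

def anonymize_if_expired(record: dict, now: datetime) -> dict:
    """Strip or blur identifying fields on records older than the window."""
    if (now - record["timestamp"]).days < RETENTION_DAYS:
        return record  # still within the retention window; keep untouched
    scrubbed = dict(record)
    # Replace the IP address with a truncated one-way hash and drop the user id,
    # so the behavioral data survives but can no longer be tied to a person.
    scrubbed["ip"] = hashlib.sha256(record["ip"].encode()).hexdigest()[:16]
    scrubbed.pop("user_id", None)
    return scrubbed

# Example with a made-up log entry from more than three months ago:
now = datetime(2008, 12, 18)
old_record = {"timestamp": datetime(2008, 8, 1), "ip": "192.0.2.44",
              "user_id": "u12345", "query": "broadband plans"}
print(anonymize_if_expired(old_record, now))
```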
Yahoo! and other search companies may be experiencing pressure of a different kind under the Obama administration. Eric Holder, the President-elect’s nominee for Attorney General, has stated publicly that he believes existing privacy laws may have to change to accommodate law enforcement needs:
In some cases, changes to privacy laws may be required to recognize the new technological reality we now confront.
Speaking specifically on data retention in the same memo, Holder said:
Certain data must be retained by ISPs for reasonable periods of time so that it can be accessible to law enforcement.
These statements suggest that Holder may be in favor of a mandatory minimum length of time for companies to retain data, rather than mandatory maximums. This puts search engines, ISPs, and other web-based companies in the awkward position of trying to please two sets of regulators with completely opposite goals.
Continue reading →
Gov. Paterson unveiled the New York budget yesterday, and among the 137 proposed new and increased taxes is a new tax on digital products. An article in today’s New York Post quotes NetChoice opposing the Governor’s effort to tax music and creativity distributed through the Internet.
New York’s approach is two-fold, broadening what is taxed and who has to collect: 1) add digital music, books, and movies to what can be taxed; 2) expand and assert the concept of “nexus” to cover out-of-state sellers that use an online network of affiliates.
First, New York is imposing a new tax on digital goods. It’s not, as the state claims, to close a “digital property taxation loophole” — instead, this is a new tax on New Yorkers, for a service that’s not taxable under today’s law.
What’s worse, why in the world would NY impose a new tax on something we all want to encourage right now? Digital downloads of music, movies, and books have no carbon footprint and use none of the oil consumed by a round-trip to the store. Moreover, there’s no plastic and paper packaging to create and cart off to a landfill. I’ve blogged on the environmental benefits of downloading here.
It’s also important to note that this is more than just a new tax; it will also extend the long hand of government to the long tail of online commerce. Who’ll be hurt? Small, independent artists who have websites to sell their own creative works. If a NY-based author or musician adds a link to her webpage saying ‘buy my book/music now on Amazon.com’, she’d be creating a new tax-collection burden for Amazon–on everything Amazon sells to anyone in NY State. Amazon’s not going to sit still for that, and it might just shut down its affiliate program for NY-based suppliers, authors, and musicians.
I’ve blogged about nexus issues here. Continue reading →
The Wall Street Journal reports today that Google wants a fast lane on the Internet, claiming that the Mountain View-based giant may be moving away from its stance on network neutrality:
Google’s proposed arrangement with network providers, internally called OpenEdge, would place Google servers directly within the network of the service providers, according to documents reviewed by the Journal.
The problem with the Journal piece is that OpenEdge isn’t exactly a neutrality violation, or maybe it is. As Declan McCullagh at CNET has pointed out in his post “Google accused of turning its back on Net neutrality,” figuring out when a neutrality violation has occurred is a little tricky:
The problem with defining Net neutrality so the government can regulate it is a little like the problem of defining obscenity so the government can ban it: You know it when you see it.
Well, Google says that it knows a neutrality violation when it sees one, and, not surprisingly, it doesn’t see one in its own actions. Its defense essentially boils down to pointing out that OpenEdge is caching. It’s more of a warehouse than a fast lane. Besides, anyone else can do the same thing, so Google isn’t using any ISP’s “unilateral control over consumers’ broadband connections” to its advantage.
Interestingly, however, the same Google Public Policy Blog entry defends other companies that engage in the same sort of caching, including Limelight. But Limelight Networks isn’t just a data-warehousing company; it combines caching with real fast lanes.
Continue reading →
I’ve been reading some of Larry Lessig’s thoughts on corruption and I’ve drafted a short reaction at OpenMarket.org.
In short, I think Lessig is right that Washington is corrupt and right that money has incredible power to corrupt the system, but I think he’s wrong to say that we ought to focus on money.
Why? Because there are other forms of influence that special interests can use to push lawmakers toward the policies they prefer. Eliminating money from politics is likely an impossible goal; even if it weren’t, it would do little to stop corruption. Taking power away from government and returning it to individuals seems to me the only way we can truly fight corruption. I articulate all of this more fully in the post.
My Romanian space lawyer (and improbably named) friend Virgiliu Pop has made the front page of Space.com today in a great interview with leading space journalist Leonard David about his new book Who Owns the Moon?: Extraterrestrial Aspects of Land and Mineral Resources Ownership. Virgil slams the “Common Heritage of Mankind” socialism behind the 1979 Moon Treaty, which was killed in the U.S. Senate by the free-market space movement that later gave birth to the Space Frontier Foundation (which I chair).
Virgil once famously claimed ownership of the sun to demonstrate the absurdity of the supposedly serious claims by a number of charlatans to ownership of lunar territory (Dennis Hope) or the entire Eros asteroid (Greg Nemitz). Virgil’s point was “to show how ridiculous a property rights system in outer space would be if it were to be based solely on claim unsubstantiated by any actual possession.”
I’m looking forward to reading Virgil’s book–and to writing a proper review. For now, I’ll just say that I think Virgil and I see eye-to-eye on three key premises (something of a rarity among space lawyers on the ultra-contentious issue of property rights):
- The Outer Space Treaty of 1967 prohibits nations from appropriating territory in space and also prohibits individuals from asserting any territorial claims (generally accepted) except to a narrowly limited area under actual use (not accepted by all space lawyers).
- The Outer Space Treaty, properly understood, does not bar claims to ownership of movable objects such as extracted resources or even (if they can be moved in a meaningful way) entire asteroids or comets.
- Securing such property rights is essential to the economic development of space.
Here are a few choice excerpts from Virgil’s new book on the big picture of property rights in space: Continue reading →
It’s been a big year for tech policy books. Several important titles were released in 2008 that offer interesting perspectives about the future of the Internet and the impact digital technologies are having on our lives, culture, and economy. Back in September, I compared some of the most popular technology policy books of the past five years and tried to group them into two camps: “Internet optimists” vs. “Internet pessimists.” That post generated a great deal of discussion and I plan on expanding it into a longer article soon. In this post, however, I will merely list what I regard as the most important technology policy books of the past year.
What qualifies as an “important” tech policy book? Basically, it’s a title that many people in this field are currently discussing and that we will likely be talking about for many years to come. I want to make it clear, however, that a book’s appearance on this list does not necessarily mean I agree with everything said in it. In fact, I found much with which to disagree in my picks for the two most important books of 2008, as well as in many of the other books on the list. [Moreover, after reading all these books, I am more convinced than ever that libertarians are badly losing the intellectual battle of ideas over Internet issues and digital technology policy. There are just very few people defending a “Hands-Off-the-Net” approach anymore. But that’s a subject for another day!]
Another caveat: Narrowly focused titles lose a few points on my list. For example, as was the case in past years, a number of important IP-related books have come out this year. If a book deals exclusively with copyright or patent issues, it does not exactly qualify as the same sort of “tech policy book” as other titles found on this list since it is a narrow exploration of just one set of issues that have a bearing on digital technology policy. The same could be said of a book that deals exclusively with privacy policy, like Solove’s Understanding Privacy. It’s an important book with implications for the future of tech policy, but I demoted it a bit because of its narrow focus.
With those caveats in mind, here are my Top 10 Most Important Tech Policy Books of 2008 (and please let me know about your picks for book of the year):
Continue reading →
The venerable Economist magazine has made a hash of my research on the growth of the Internet, which examines the rich media technologies now flooding onto the Web and projects Internet traffic over the coming decade. This “exaflood” of new applications and services represents a bounty of new entertainment, education, and business applications that can drive productivity and economic growth across all our industries and the world economy.
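The arithmetic behind any such projection is simple compounding. Here is a purely illustrative sketch; the starting traffic figure and the growth rate are assumptions chosen only to show the shape of the curve, not numbers taken from the report:

```python
# Purely illustrative compound-growth projection of the kind described above.
# The starting point and the annual growth rate are assumptions, not figures
# from the report itself.

EXABYTE_IN_PB = 1000.0          # 1 exabyte = 1,000 petabytes

traffic_pb_per_month = 2000.0   # assumed starting traffic, in petabytes per month
annual_growth = 0.50            # assumed 50% year-over-year growth

for year in range(2008, 2019):
    print(f"{year}: {traffic_pb_per_month / EXABYTE_IN_PB:6.1f} EB/month")
    traffic_pb_per_month *= 1 + annual_growth
```

At a steady 50 percent annual growth rate, traffic grows more than fifty-fold over a decade, which is the sense in which rich media can add up to an “exaflood.”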
But somehow, The Economist was convinced that my research represents some “gloomy prophecy,” that I am “doom-mongering” about an Internet “overload” that could “crash” the Internet. Where does The Economist find any evidence for these silly charges?
In a series of reports, articles (here and here), and presentations around the globe — and in a long, detailed, nuanced, very pleasant interview with The Economist, in which I thought the reporter grasped the key points — I have consistently said the exaflood is an opportunity, an embarrassment of riches.
Continue reading →