Ars reports on what seems to be a genuine case of piracy choking off a popular gaming title:
Sports Interactive had made Eastside Hockey Manager 2007 available only via digital distribution in an attempt to give the game a wider reach in Europe and North America. Unfortunately for Sports Interactive, the end result was a hacked version of the game that was quickly distributed via BitTorrent.
“The orders came in a drizzle, rather than a flood,” wrote Jacobson. “We scratched our heads trying to work out what had gone wrong. And then someone pointed out that the game was being pirated, and was available as a torrent from lots of different pirating sites. Then sat there and watched as the claimed amount of downloads on those sites went up and up, as sales stayed static.”
The end result was a popular game that had “more licenses than any other hockey game in history,” according to Jacobson, but was apparently so widely distributed over peer-to-peer networks that the company was not able to make back the development or licensing costs. Although Jacobson left open the possibility that SI may resurrect Eastside Hockey Manager in the future, he said that all development on the game has been halted and the programmers and others that worked on the title have been reassigned to other projects within the company.
In some industries, such as music, I’m sympathetic to the argument that we’d get along just fine without copyright. But as I’ve said before, I think there are other categories of content that would be significantly impoverished without copyright protections. Video games, which are subject to soaring development costs, appear to be in the latter category.
OK, last post on the Herman paper. I’m especially pleased that he took the time to respond to my “regulatory capture” argument because, to my knowledge, he’s the first person to respond to the substance of my argument (most of the criticism focused on my appearance and my nefarious plot to impersonate the inventor of the web):
Timothy B. Lee insists that BSPs will have more sway than any other group in hearings before the FCC and will therefore “turn the regulatory process to their advantage.” He draws from a vivid historical example of the Interstate Commerce Commission (“ICC”), founded in 1887. “After President Grover Cleveland appointed Thomas M. Cooley, a railroad ally, as its first chairman, the commission quickly fell under the control of the railroads, gradually transforming the American transportation industry into a cartel.” Yet this historic analogy, and its applicability to the network neutrality problem, is highly problematic. Even the ICC, the most cliché example of regulatory capture, was not necessarily a bad policy decision when compared with the alternative of allowing market abuses to continue unabated. One study concludes “that the legislation did not provide railroads with a cartel manager but was instead a compromise among many contending interests.” In contrast with Lee’s very simplistic story of capture by a single interest group, “a multiple-interest-group perspective is frequently necessary to understand the inception of regulation.”
Herman doesn’t go into any details, so it’s hard to know which “market abuses” he’s referring to specifically, but this doesn’t square with my reading on the issue. I looked at four books on the subject that ranged across the political spectrum, and what I found striking was that even the ICC’s defenders were remarkably lukewarm about it. The most pro-regulatory historians contended that the railroad industry needed to be regulated, but that the ICC was essentially toothless for the first 15 years of its existence. At the other extreme is Gabriel Kolko, whose 1965 book Railroads and Regulation, 1877-1916 argues that the pre-ICC railroad industry was fiercely competitive, and that the ICC operated from the beginning as a way to prop up the cartelistic “pools” that had repeatedly collapsed before the commission was created. Probably the most balanced analyst, Theodore E. Keeler, wrote in a Brookings Institution monograph that, in its early years, the ICC “had about it the quality of a government cartel.”
I’m going to wrap up my series on Bill Herman’s paper by considering his responses to the counter-arguments against new regulations. I think he has the better of the argument on two of them (“network congestion” and “network diversity”), so I’ll leave those alone, but I disagree with his responses to the other two. Here’s the first, which he dubs the “better wait and see” argument:
Several opponents of network neutrality believe that the best approach is to wait and see. They are genuinely scared of broadband discrimination, but they would rather regulate after the situation has evolved further. The alleged disadvantage is that regulating now removes the chance to create better regulation later, and it accrues the unforeseen consequences described below. Felten provides a particularly visible and eloquent example of this argument. He agrees that neutrality is generally desirable as an engineering principle, but he wishes the threat of regulation could indefinitely continue to deter discrimination.
Unfortunately, the threat of regulation cannot indefinitely postpone the need for actual regulation. In the U.S. political system, most policy topics at most times will be of interest to a small number of policymakers, such as those on a relevant Congressional subcommittee or regulatory commission. This leads to periods of extended policy stability. Yet, as Baumgartner and Jones explain, this “stability is punctuated with periods of volatile change” in a given policy domain. One major source of change, they argue, is “an appeal by the disfavored side in a policy subsystem, or those excluded entirely from the arrangement, to broader political processes–Congress, the president, political parties, and public opinion.”
A key variable in the process is attention. Human attention serves as a bottleneck on policy action, and institutional constraints further tighten the bottleneck. Specialized venues such as the FCC will be able to follow most of the issues under their supervision with adequate attention, but most of the time the “broader political processes” pay no attention to those issues.
As I’ll explain below the fold, I think this dramatically underestimates the success that the pro-neutrality side has had in raising the profile of this issue.
I’ve been pretty critical of Bill Herman’s paper on network neutrality regulation, so I wanted to highlight a section of the paper that I thought was extremely sensible, and I hope that Herman’s allies in the pro-regulatory camp take his recommendation seriously:
I would offer just two minor improvements by way of clarification. First, the bill is reasonably clear but could be more explicit so that the prohibition on broadband discrimination applies only to last-mile BSPs and not to intermediate transmission facilities, where the market is highly competitive and, due to packet-switching, very unlikely to lead to bottlenecks. It may be the case that, for some services, content or application discrimination is necessary; but senders and receivers should be able to choose freely among intermediate service providers or choose not to use such services. Second, the bill should add an additional clarification for establishments such as schools, libraries, government buildings, and Internet cafes that provide Internet service via computer terminals that are owned by the establishment. In the bill’s current exemption permitting BSPs to offer “consumer protection services” such as anti-spam and content filtering software, BSPs are required to offer such services with the proviso that end-users may opt out. In the case of establishments offering patrons access to the Internet on establishment-provided computers, the owner of the computer–not the user–should be able to choose whether or not such software is optional.
If we do wind up with new regulations, I think it’s very important that they be limited to those segments of the Internet where there is at least a plausible case that the lack of competition endangers the end-to-end principle. That’s clearly not the case for the Internet backbone, for high-end Internet service targeted toward businesses, or among the thousands of businesses, such as hotels and coffee shops, that now offer Internet access.
The Miami Herald is predicting that newly elected Florida governor Charlie Crist will be the latest elected official to come out against DREs (direct-recording electronic voting machines):
Gov. Charlie Crist will recommend on Thursday that Florida’s problem-plagued touchscreen voting machines should go the way of the butterfly ballot–the trash heap–and his proposed budget will recommend replacing them all with optical scan machines, which produce a paper trail, at a cost of up to $35 million.
U.S. Rep. Robert Wexler, a Boca Raton Democrat, will join the governor in Palm Beach County to announce that his proposed budget will include the money to pay for replacing the machines in 15 counties, said Josh Rogin, Wexler’s deputy chief of staff.
“It’s something we’ve been working on for a long time,” Rogin said of Wexler’s six-year battle to require counties to have a paper trail.
Rogin said the governor’s office will recommend replacing the touchscreen machines with optical-scan machines, even though counties have spent millions acquiring the equipment.
The governor’s office would not confirm or deny the reports. The secretary of state’s office said the cost to replace the machines would be between $30 million and $35 million.
It’s remarkable how quickly the conventional wisdom on this issue has shifted. When researchers raised alarms less than four years ago, they were regarded by many as paranoid eccentrics. Even in the wake of alleged e-voting problems in the 2004 elections, it was still challenging to get elected officials to pay attention to the issue. Now, as more and more politicians begin to publicly question the security of e-voting, it’s beginning to look like it’s only a matter of time before states start scrapping their DREs en masse.
I wanted to quickly follow up on my earlier post regarding Peter Huber’s excellent essay about how Net neutrality will lead to a bureaucratic nightmare at the FCC and a lawyers’ bonanza once the lawsuits start flying in court. I realized that many of the people engaged in the current NN debate might not have followed the Telecom Act legal wars of 1996 to 2004, which frame the way both Huber and I think about these issues and explain why we are so cynical about regulation.
Let’s start with the bureaucracy that can be spawned by seemingly simple words. For example, the Telecom Act of 1996 contained some extremely ambiguous language regarding how the FCC should determine the “cost” of various network elements (wires, switches, etc.) that incumbent telecom operators were required to share with their competitors. Now, how much legal wrangling could you expect over what the term “cost” meant?
Well, in the years following passage of the Telecom Act, entire forests fell because of the thousands of pages of regulatory and judicial interpretations that were handed down trying to figure out what that word meant. In fact, let’s take a quick tally of the paperwork burden the FCC managed to churn out in just three major “competition” rules it issued in an attempt to implement the Telecom Act and define the “cost” of unbundled network elements (“UNEs”):
* Local Competition Order (1996): 737 pages, 3,283 footnotes
* UNE Remand Order (1999): 262 pages, 1,040 footnotes
* UNE Triennial Review (2003): 576 pages, 2,447 footnotes
That’s 1,575 pages and 6,770 footnotes’ worth of regulation in just three orders. This obviously does not count the dozens of other rules and clarifications the FCC issued to implement other parts of the Telecom Act. Nor does it include the hundreds of additional rules issued by state public utility commissions (PUCs), which actually received expanded authority under some of these FCC regulatory orders.
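For anyone who wants to check the tally, here is the arithmetic on the counts quoted above; the figures are simply as reported, not an independent count of the orders themselves.

```python
# Summing the page and footnote counts quoted above for the three FCC orders.
orders = {
    "Local Competition Order (1996)": (737, 3283),
    "UNE Remand Order (1999)": (262, 1040),
    "UNE Triennial Review (2003)": (576, 2447),
}
total_pages = sum(pages for pages, _ in orders.values())
total_footnotes = sum(notes for _, notes in orders.values())
print(total_pages, total_footnotes)  # 1575 6770
```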
Again, this was all implemented following the passage of a bill (the Telecom Act) that was supposed to be deregulatory in character! But wait, it gets worse.
The always entertaining Peter Huber has a piece in Forbes this week entitled “The Inegalitarian Web” explaining why Net neutrality regulation “is great news for all the telecom lawyers (like me) who get paid far too much to make sense out of idiotic new laws like this one.”
Huber notes that NN advocates are trying to make the case that just “a simple two-word law is all we really need–an equal rights amendment for bits” to achieve Internet nirvana. But, in reality, he argues, “It will be a 2 million-word law by the time Congress, the Federal Communications Commission and the courts are done with it. Grand principles always end up as spaghetti in this industry, because they aim to regulate networks that are far more complicated than anything you have ever seen heaped up beside an amusing little glass of chianti.”
After explaining how NN would not block Google, Amazon, Microsoft, or other major intermediaries from cutting bit-management deals with Akamai or other Net traffic managers, Huber goes on to explain how difficult it will be to figure out where to draw the lines after that:
“So what will [Net neutrality regs] block? Now, at last, we’re getting close to where the lawyers will frolic. What the neutralizers are after is what they call “last mile” and “end user” neutrality. But that only raises two further questions: How long is a mile, and where does it end?
The proposed law would block any Akamai-like technology embedded in the very last switch, the last stretch of wire that links the Net to digital midgets like you and me. That would be any technology that–for a fee–caches content or provides priority routing to speed throughput. But the ban on the fee would apply only if two legal conditions are met. First, the hardware or software that gives preference to some bits over others would have to be situated close to us midgets. Second, the fee would be banned only if it was going to be charged to someone quite far away. Exactly how close and how far, no one knows. Give us five or ten years at the FCC and in the courts and we lawyers will find out for you. Do you follow the arcane distinctions here?”
Of course, the NN proponents tell us not to worry about any of this. Just trust the friendly folks down at the FCC to use their collective wisdom to define “network discrimination” and come up with a perfectly efficient set of regulations to govern the high-speed networks of the future. It’s all so simple! (Right.)
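As a technical aside, the “priority routing” Huber describes is, at bottom, a queueing decision: a box somewhere in the last mile forwards some packets ahead of others for a fee. The sketch below is a purely hypothetical illustration of that mechanic (a strict-priority scheduler), not any carrier’s or vendor’s actual implementation, and it says nothing about where the legal lines would fall.

```python
# Hypothetical illustration of "priority routing": a strict-priority scheduler
# that always forwards paid-priority packets before best-effort packets.
# This is a toy model, not any real carrier's or vendor's implementation.

from collections import deque

class PriorityScheduler:
    def __init__(self):
        self.priority = deque()     # traffic from customers who paid for priority
        self.best_effort = deque()  # everyone else

    def enqueue(self, packet, paid_priority=False):
        (self.priority if paid_priority else self.best_effort).append(packet)

    def dequeue(self):
        # Paid-priority traffic always goes out first; best-effort waits.
        if self.priority:
            return self.priority.popleft()
        if self.best_effort:
            return self.best_effort.popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("video packet from a non-paying site")
sched.enqueue("video packet from a paying site", paid_priority=True)
print(sched.dequeue())  # the paying site's packet is forwarded first
```

Whether that box sits “close to us midgets” and who gets billed for the preference is exactly the line-drawing exercise Huber expects to keep the lawyers busy.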
In Part 1 of this series, I argued that the Democratic Party seems to be gradually abandoning whatever claim it once had to being the party of the First Amendment. Regrettably, examples of Democrats selling out the First Amendment are becoming more prevalent and the few champions of freedom of speech and expression left in the party are getting more difficult to find.
For example, in my previous essay, I documented how Democratic politicians were leading the charge to reinstitute the so-called Fairness Doctrine. In today’s entry I will discuss how Democrats are now working hand-in-hand with Republicans to orchestrate what would constitute the most significant expansion of content regulation in decades–the regulation of “excessive violence” on television.
The New Yorker has a dispatch from Jeffrey Toobin updating us on the Google Book Search case. It’s a good primer if you haven’t been following this issue, and it also fills in some details if you have. Interesting tidbits include the fact that witness depositions haven’t started yet, and that the parties won’t be able to make motions for summary judgment for another year. More interesting is the fact that both Google and the plaintiffs (authors and publishers) are sure this will settle out of court.
“The suits that have been filed are a business negotiation that happens to be going on in the courts,” [Google’s] Marissa Mayer told me. “We think of it as a business negotiation that has a large legal-system component to it.” According to Pat Schroeder, the former congresswoman, who is the president of the Association of American Publishers, “This is basically a business deal. Let’s find a way to work this out. It can be done. Google can license these rights, go to the rights holder of these books, and make a deal.”
Lawrence Lessig points out that while a settlement would be good for both parties, it could create a practical precedent that anyone who wants to start a book-scanning project has to license the books, much like the precedent set by the MP3.com case, which was ultimately settled out of court.
Another interesting bit about the technology itself is how Google plans to rely on linking from the wider web to give the information in books the context its search algorithms need to produce good results:
“Web sites are part of a network, and that’s a significant part of how we rank sites in our search—how much other sites refer to the others.” But, he added, “Books are not part of a network. There is a huge research challenge, to understand the relationship between books. … We just started, and we need to make these books networked, and we need people to help us do that,” [Google’s Dan] Clancy said.
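Clancy is describing the same link-analysis intuition behind web ranking: a document matters more when other documents point to it. As a purely illustrative sketch (not Google’s actual ranking system), here’s a simplified PageRank computation over a small, hypothetical citation graph among books; the book names and citation data are made up for illustration.

```python
# Illustrative only: a simplified PageRank over a hypothetical citation graph.
# This shows the general link-analysis idea Clancy alludes to, not Google's
# actual ranking system.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each document to the list of documents it cites."""
    docs = set(links) | {d for targets in links.values() for d in targets}
    n = len(docs)
    rank = {d: 1.0 / n for d in docs}
    for _ in range(iterations):
        new_rank = {d: (1 - damping) / n for d in docs}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for dst in targets:
                    new_rank[dst] += share
            else:
                # A document that cites nothing spreads its weight evenly.
                for d in docs:
                    new_rank[d] += damping * rank[src] / n
        rank = new_rank
    return rank

# Hypothetical data: which books cite which other books.
citations = {
    "book_a": ["book_b", "book_c"],
    "book_b": ["book_c"],
    "book_c": [],
    "book_d": ["book_a", "book_c"],
}

for title, score in sorted(pagerank(citations).items(), key=lambda kv: -kv[1]):
    print(f"{title}: {score:.3f}")
```

The hard part, as Clancy says, isn’t the ranking math; it’s constructing the network of references among books in the first place.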