ACT will host an event on open government tomorrow morning that will feature TLF’s own Jerry Brito and Andrew Plemmons Pratt of the Center for American Progress. I’ll moderate and tee-up the discussion.
We’ll focus on how governments can move from merely posting information online (e-government) to a more participatory process (we-government). We’ll discuss core concepts of “we-government”—including the social media technologies that enable access, accountability and participation—and how Congress can create processes for a more transparent government:
- Define what being “open” means for the executive, legislative and judicial branches
- Review and update procedures and rules within government to better deliver information to citizens
- Create mechanisms for accepting and integrating increased constituent correspondence and comments on rulemakings
The Details: Tuesday, Apr 14 from 9:30 to 10:45am. It’s at the newly opened Capitol Visitors Center, Congressional Meeting Room South (the new main entrance to the U.S. Capitol, located on the East Front at First Street and East Capitol Street, NE). Coffee and morning snacks will be served. Please let me know if you’d like to attend!
This morning, Cato put out a TechKnowledge of mine called “The Promise that Keeps on Breaking.” It deals with the policy issues surrounding President Obama’s as-yet-unfulfilled promise to post bills sent to him by Congress online for five days before he signs them.
A Cato@Liberty post last week went through the President’s progress so far on the five-day promise.
I wrote a piece about PACER last week, which Katherine Mangu-Ward at Reason was kind enough to link to from Hit and Run. In the comments to her post, a reader asked a reasonable question about the fees you pay to access PACER: “Are you buying the data or paying the court’s bandwidth costs?”
Now I’m usually pretty sympathetic to the idea that the people who benefit from government services should pay for those services through user fees. But there are two reasons this doesn’t apply in a case like this.
First, there’s the math. As it happens, I’m working on a project that will involve hosting large amounts of content, so I’ve been researching hosting costs. One of the most popular managed hosting services is Amazon’s EC2/S3. You can see the pricing for that system here. It provides a good point of comparison for PACER’s fees.
To make the math easy, let’s assume you’ve got a lawyer who downloads 20 documents per week, each of which is 10 pages long and 1 MB in size. Over the course of a year, this lawyer will download around 1000 documents, and he’ll be charged 1000 * 10 * $0.08 = $800 for those documents. (This is actually an underestimate, because the lawyer also has to pay for search results.) And because each document is about 1 MB in size, the total quantity of data transferred from PACER will be around 1 GB.
Now, if you click over to Amazon’s S3 pricing page, you’ll see that the going rate for a GB of data transfer in the private market is… 17 cents. In fairness, Amazon also charges for CPU time on the EC2 cluster, so if the courts actually built their system on EC2/S3, the marginal cost of a GB of data delivery might be more like 50 cents. But charging $800 is three orders of magnitude too much.
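The arithmetic above is easy to check with a quick back-of-the-envelope script. The per-page fee and the Amazon rates below are the 2009 figures cited in this post, not current prices:

```python
# Back-of-the-envelope comparison of PACER fees vs. commercial hosting
# costs, using the figures from the post.

DOCS_PER_YEAR = 1000        # ~20 documents/week over a year
PAGES_PER_DOC = 10
MB_PER_DOC = 1
PACER_FEE_PER_PAGE = 0.08   # dollars per page

# What PACER charges our hypothetical lawyer annually.
pacer_cost = DOCS_PER_YEAR * PAGES_PER_DOC * PACER_FEE_PER_PAGE

# Total data actually transferred: ~1 GB.
data_gb = DOCS_PER_YEAR * MB_PER_DOC / 1000

S3_TRANSFER_PER_GB = 0.17   # Amazon S3 outbound transfer rate (2009)
EC2_ADJUSTED_PER_GB = 0.50  # generous allowance including EC2 CPU time

print(f"PACER charges:            ${pacer_cost:.2f}")
print(f"S3 bandwidth alone:       ${data_gb * S3_TRANSFER_PER_GB:.2f}")
print(f"With EC2 compute:         ${data_gb * EC2_ADJUSTED_PER_GB:.2f}")
print(f"Markup over hosting cost: {pacer_cost / (data_gb * EC2_ADJUSTED_PER_GB):,.0f}x")
```

Even against the generous 50-cents-per-GB figure, PACER’s $800 is a roughly 1,600-fold markup — the “three orders of magnitude” referred to below.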
Continue reading →
Doug Feaver, a former Washington Post reporter and editor, has published a very interesting editorial today entitled “Listening to the Dot-Commenters.” In the piece, Feaver discusses his personal change of heart about “the anonymous, unmoderated, often appallingly inaccurate, sometimes profane, frequently off point and occasionally racist reader comments that washingtonpost.com allows to be published at the end of articles and blogs.” When he worked at the Post, he fought to keep anonymous and unmoderated comments off the WP.com site entirely because it was too difficult to pre-screen them all and “the bigger problem with The Post’s comment policy, many in the newsroom have told me, is that the comments are anonymous. Anonymity is what gives cover to racists, sexists and others to say inappropriate things without having to say who they are.”
But Feaver now believes those anonymous, unmoderated comments have value because:
I believe that it is useful to be reminded bluntly that the dark forces are out there and that it is too easy to forget that truth by imposing rules that obscure it. As Oscar Wilde wrote in a different context, “Man is least in himself when he talks in his own person. Give him a mask, and he will tell you the truth.” Too many of us like to think that we have made great progress in human relations and that little remains to be done. Unmoderated comments provide an antidote to such ridiculous conclusions. It’s not like the rest of us don’t know those words and hear them occasionally, depending on where we choose to tread, but most of us don’t want to have to confront them.
It seems a bit depressing that the best argument in favor of allowing unmoderated, anonymous comments is that it allows us to see the dark underbelly of mankind, but the good news, Feaver points out, is that:
But I am heartened by the fact that such comments do not go unchallenged by readers. In fact, comment strings are often self-correcting and provide informative exchanges. If somebody says something ridiculous, somebody else will challenge it. And there is wit.
He goes on to provide some good examples. And he also notes how unmoderated comments let readers provide their heartfelt views on the substance of sensitive issues and let journalists and editorialists know how they feel about what is being reported or how it is being reported. “We journalists need to pay attention to what our readers say, even if we don’t like it,” he argues. “There are things to learn.”
Continue reading →
I’m taking a course here at Princeton on IT Policy, taught by my advisor, Ed Felten. It’s been an interesting experience. I think it’s safe to say that I’ve spent more time thinking about the topics than the median member of the class, and the course has been an opportunity to re-acquaint myself with how these issues look to smart people who aren’t immersed in them every day.
The course has a blog, where each class participant is asked to contribute one post per week. I’ve been impressed by the quality of a number of posts. Here is a good post by “Jen C.” about the Authors Guild’s implausible claim that text-to-speech software on the Kindle 2 infringes copyright by creating a derivative work. And here is a fascinating post by Sajid Mehmood about Jonathan Zittrain’s The Future of the Internet, which our own Adam Thierer reviewed here. Sajid points out that Zittrain’s prophesy is more likely to come true if it’s helped along by bad government regulations, an argument that I find persuasive (and not just because he quotes my DMCA paper).
One of the most interesting debates has been over the Google Book Search settlement. A couple of weeks ago, Sajid posted a tentative defense of the settlement, arguing that whatever its flaws, the Google Book Search settlement is a private agreement and the courts would be overstepping their authority to reject it. I responded with a pair of posts making the case that, thanks to the creative use of the class action mechanism, the settlement would have effects far beyond those that could be achieved by an ordinary private contract, and that the results of the settlement would be anticompetitive. Sajid responded with the reasonable point that the settlement will not be the only—or even necessarily the most important—barrier to entry in the book search engine market, and that it’s better to have one firm able to build a book search engine than zero.
We’ll all be blogging throughout the month of April, so I encourage you to check it out.
It seems Microsoft is facing much the same problem Pepsi faced in the 70s, when it created the Pepsi challenge (a blind taste test between Coke and Pepsi):
A stark sign of the challenge Yusuf Mehdi faces as a point man for Microsoft in the company’s battle with Google comes from the company’s own research into the habits of consumers online.
During regular “blind taste tests,” in which Microsoft asks randomly-selected consumers to score the quality of results from various Internet search engines, the quality of Microsoft’s search results have so improved that people can’t tell the difference between Microsoft and Google search results, says Mr. Mehdi, senior vice president of Microsoft’s online audience business group. But when Microsoft slaps the Google brand name on the results from Microsoft’s own search engine during another portion of its tests, users invariably score them highest.
“Just by putting the name up, people think it’s more relevant,” he says.
… Microsoft still faces the problem of the strong association in consumers’ minds between Google and Internet search. In theory, it’s far easier for a consumer to switch Internet search engines than it is for them to switch other forms of software. But Mr. Mehdi–a veteran of the Web browser wars of the late 90s in which Microsoft managed to overtake the pioneer in the category, Netscape Communications–says in reality it’s very hard to convince consumers to change their search behavior.
So, Microsoft faces an uphill battle. Happily for the Internet marketplace, it seems they’re embracing the challenge cheerily by attempting to kill two birds with one stone: launching an innovative new semantic search engine capable of answering users’ questions more directly while also creating a fresh new brand for what Microsoft acknowledges is a “confusing jumble of brand names for its search efforts.” I, for one, am looking forward to Microsoft’s forthcoming search engine, dubbed “Kumo.”
But I think there’s a bigger lesson here: Google’s most valuable asset is its brand. Continue reading →
Yesterday was the 40th anniversary of the issuance of the first RFC, or “request for comments,” an important milestone in the development of the Internet. This piece by Stephen Crocker is an enjoyable look back.
The title of the piece is “How the Internet Got its Rules,” which strikes me as poorly chosen. (Titles are often chosen by the publisher, not the writer.) Given the open, collaborative process used then, and still today, to govern much of the Internet’s functioning, “rules” is an inapt substitute for the word “protocols.”
Ever wonder about this? In researching COPPA, I noticed the following definition of “Internet”:
collectively the myriad of computer and telecommunications facilities, including equipment and operating software, which comprise the interconnected world-wide network of networks that employ the Transmission Control Protocol/Internet Protocol, or any predecessor or successor protocols to such protocol, to communicate information of all kinds by wire, radio, or other methods of transmission.
16 CFR § 312.2 (added in 1999). This definition comes from the COPPA law itself.
My quick and by no means exhaustive research (I searched for the term “Internet means” in the CFR and U.S. Code) suggests that this is one of two definitions used, with slight variations, in Federal law (in less than a dozen places total).
The earliest reference I can find to this definition is from the Internet Tax Freedom Act of 1998 (the sales tax moratorium), which differed only slightly: “comprise” instead of “constitute” and omitting the “or other methods of transmission” part. This definition appears again in the child pornography rules issued in 2005 (28 CFR § 75.1).
The other definition I see appears in the bankruptcy code (15 USCS § 163) and in the 2005 Internet gambling ban (31 CFR § 132.2 and 12 CFR § 233.2): “the international computer network of both Federal and non-Federal interoperable packet switched data networks.”
So which definition is better? Do both suck? Should we care? “Discuss amongst yourselves!”
But no kvetching about the use of the word “myriad.” Someone already beat you to the punch—and got smacked down: Continue reading →
On the problems with the newspaper industry, Michael Kinsley writes in the Washington Post:
You may love the morning ritual of the paper and coffee, as I do, but do you seriously think that this deserves a subsidy? Sorry, but people who have grown up around computers find reading the news on paper just as annoying as you find reading it on a screen. (All that ink on your hands and clothes.) If your concern is grander – that if we don’t save traditional newspapers we will lose information vital to democracy – you are saying that people should get this information whether or not they want it. That’s an unattractive argument: shoving information down people’s throats in the name of democracy.
I rarely say it, but the whole thing is worth reading.
Anonymity, Reader Comments & Section 230
by Adam Thierer on April 9, 2009