Today, PFF has released my latest book: Parental Controls and Online Child Protection: A Survey of Tools and Methods. The entire publication is online and can be downloaded at http://www.pff.org/parentalcontrols. (Note: I will be making constant updates to the book in coming months and will post them to that site.)
As the title implies, the report provides a broad survey of everything on the market today that can help parents better manage media content, whether it be broadcast television, cable or satellite TV, music devices, mobile phones, video game consoles, the Internet, or social networking websites. I put this report together to show policymakers, the press and the public that many constructive options exist that can help parents control media in their homes and in the lives of their children.
While it can be a formidable challenge to be a parent in an “always-on,” interactive, multimedia world, luckily, there has never been a time in our nation’s history when parents have had more tools and methods at their disposal to help them determine and enforce what is acceptable in their homes and in the lives of their children. And that conclusion is equally applicable to all major media platforms. In the past, the OFF button was the only technical control at a parent’s disposal. Today, by contrast, parents (like me!) have myriad tools and methods to restrict or tailor media content to their own household tastes and values. Those restrictive tools include: the V-Chip and TV ratings; cable and satellite set-top box screening tools; DVD blocking controls; cell phone blocking tools; video game console controls; Internet filtering and monitoring tools; instant messaging monitoring tools; operating system controls; web browser controls; search engine “safe search” tools; media time management devices; and so on. You will find an exhaustive discussion of all these tools and many others in my book.
Continue reading →
Google’s new blog has a post laying out their position on network neutrality. I’m probably missing something, but it strikes me as rather incoherent:
What kind of behavior is okay?
Prioritizing all applications of a certain general type, such as streaming video;
Managing their networks to, for example, block certain traffic based on IP address in order to prevent harmful denial of service (DOS) attacks, viruses or worms;
Employing certain upgrades, such as the use of local caching or private network backbone links;
Providing managed IP services and proprietary content (like IPTV); and
Charging consumers extra to receive higher speed or performance capacity broadband service.
On the other hand:
What isn’t okay?
Levying surcharges on content providers that are not their retail customers;
Prioritizing data packet delivery based on the ownership or affiliation (the who) of the content, or the source or destination (the what) of the content; or
Building a new “fast lane” online that consigns Internet content and applications to a relatively slow, bandwidth-starved portion of the broadband connection.
So if Verizon builds a 30 Mbps pipe to consumers’ homes, and allocates 25 Mbps to a proprietary IPTV service (“Providing managed IP services and proprietary content”) and 5 Mbps to public Internet traffic, is that OK? What if they then consign all video traffic (“all applications of a certain general type”) in the public Internet to the lowest priority, rendering it effectively unusable? And can they then syndicate content from third parties through their IPTV service?
If so, I don’t understand what network neutrality is supposed to accomplish. If not, how am I mis-reading Google’s proposal?
This series of reports on the remarkable growth of the telecom and ecommerce sectors in Brazil since the phone system was privatized makes for upbeat reading.
http://www.ibls.com/internet_law_news_portal_view.aspx?s=articles&id=3901B9B2-66A2-47EF-8A4F-0E598052BF1B
http://www.crito.uci.edu/publications/pdf/gec/brazil.pdf
http://www.internetworldstats.com/sa/br.htm
http://www.midwestbusiness.com/news/viewnews.asp?newsletterID=11893
Apparently, using elaborate licensing terms to extend the rights granted under copyright and patent law is not a new idea, nor is the practice limited to the software industry. From a record manufactured before 1909:
This record which is registered on our books in accordance with the number hereon, is licensed by us for sale and use only when sold to the public at a price not less than one dollar each. No license is granted to use this record when sold at a less price. This record is leased solely for the purpose of producing sound directly from the record and for no other purpose; all other rights under the licensor’s patents under which this record is made are expressly reserved to the licensor. Any attempt at copying or counterfeiting this record will be construed as a violation of these conditions. Any sale or use of this record in violation of any of these conditions will be considered as an infringement of our United States patents, Nos. 524543, dated February 19, 1895, and 548623, dated October 29, 1895, issued to EMILE BERLINER, and No. 739,318, dated September 22, 1903, and No. 778,976, dated January 3, 1905, and of our other U.S. patents covering this record, and all parties so selling or using the record, or any copy thereof, contrary to the terms of this license, will be treated as infringers of said patents, and will render themselves liable for suit.
I don’t know enough about copyright history to be sure, but my guess is that they talk so much about patent law because “mechanical reproductions” of music were not covered by copyright law until the 1909 Copyright Act. So record companies apparently attempted to use patent law plus some creative contract terms to create the contractual equivalent of copyright.
I have the bad feeling that I’m going to find myself disagreeing with Larry Lessig a lot more in the next few years.
Lessig did a post today announcing that he’s going to be re-orienting his research away from copyright issues:
From a public policy perspective, the question of extending existing copyright terms is, as Milton Friedman put it, a “no brainer.” As the Gowers Commission concluded in Britain, a government should never extend an existing copyright term. No public regarding justification could justify the extraordinary deadweight loss that such extensions impose.
Yet governments continue to push ahead with this idiot idea — both Britain and Japan for example are considering extending existing terms. Why?
The answer is a kind of corruption of the political process. Or better, a “corruption” of the political process. I don’t mean corruption in the simple sense of bribery. I mean “corruption” in the sense that the system is so queered by the influence of money that it can’t even get an issue as simple and clear as term extension right. Politicians are starved for the resources concentrated interests can provide. In the US, listening to money is the only way to secure reelection. And so an economy of influence bends public policy away from sense, always to dollars.
Now, I wholeheartedly agree with his assessment that lobbyists often corrupt the political process. And certainly copyright law—an issue on which I share almost all of his views—is a prime example of that. He’s quite right that there’s no plausible policy argument for retroactive copyright extension, yet Congress did it because of the lobbying might of the copyright lobby.
Continue reading →
From within the libertarian camp, one of the stronger anti-copyright arguments is the point that it is hard to prove empirically that copyright in fact fosters creativity, especially as compared to some of the alternatives to copyright. How does one go about showing that in the absence of copyright, there would be fewer created works or fewer quality created works or a lesser range of types of created works? To show this conclusively, one would need to know what would have happened in the absence of a market. For the same reason, though, it is hard to show that copyright (or related laws) does any harm; one would need to know what would have happened in the absence of copyright.
Continue reading →
Google’s public policy shop today officially joined the blogosphere, joining Cisco (February 4, 2005), Global Crossing (November 7, 2005), and Verizon Communications (October 2, 2006), each of which already has a corporate policy blog. The maiden post, by Andrew McLaughlin, Google’s director of public policy and government affairs, promises “public policy advocacy in a Googley way.” It’s one in which users will “be part of the effort” to help “refine and improve” the company’s policy positions. The blog already has 12 posts, done during the company’s internal test. The most recent – which I suspect provided the occasion to officially launch the blog – is a short summary of the official Google position on network neutrality.
Continue reading →
Betcha.com recently began offering a U.S.-based, P2P, honor-based betting service. Its FAQ claims that Betcha.com avoids the reach of domestic state and federal anti-gambling laws because, “Unlike any other betting venue on the planet, Betcha bettors always retain the right to withdraw their bets . . . . Therefore, they are not ‘risking’ anything. No ‘risk’ means no ‘gamble.’” Will Betcha.com’s hack of anti-Internet-gaming laws work?
Continue reading →
Tom Lee suggests that I’m over-stating my case with regard to the innovativeness of free software:
But I don’t think this lack of originality is due to any inherent flaw in open-source contributors or the organizational model they employ. I think it’s simply a question of capital — open source projects typically haven’t got any. The vast majority of applications benefit from network effects that arise when their userbase becomes large enough: suddenly it’s easier to find someone to play against online, or the documentation is better, or you can exchange files in the same format that your friend uses. It’s relatively easy for open-source projects to achieve the necessary level of market interest when dealing with highly technical users and applications, as Tim’s examples demonstrate — there are accepted techniques (e.g. the RFC process, making frequent commits to the project) and media outlets (e.g. listservs, usenet) that can confer legitimacy and generate interest without an investment.
Continue reading →
Here’s the other specific criticism of peer production you’ll find in Carr’s critique of peer production:
But for all its breadth and popularity, Wikipedia is a deeply flawed product. Individual articles are often poorly written and badly organized, and the encyclopedia as a whole is unbalanced, skewed toward popular culture and fads. It’s hardly elitist to point out that something’s wrong with an encyclopedia when its entry on the Flintstones is twice as long as its entry on Homer.
Carr doesn’t even have the basic facts right here. To start with, the Flintstones entry, at some 5672 words, is actually only about 50 percent longer than the Homer entry, with around 3822 words. But more to the point, the entry on Homer includes links to entries on the Homeric Question (1577 words), Ancient accounts of Homer (1183 words), Homeric scholarship (4799 words), Homeric Greek (582 words), and The Historicity of the Iliad (1720 words). If my math is right, that’s 13,683 words, more than double the number of words in the Flintstones article. (The Flintstones article doesn’t appear to be divided up into sub-sections as the Homer article is, although there are entries on Flintstones-related topics, such as the characters in the show and the actors who played them. But on the other hand, there are also lengthy entries on The Iliad, The Odyssey, The geography of the Odyssey, and The Trojan War.)
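For readers who want to check my math, here’s a quick sketch of the tally, using the word counts quoted above (the counts themselves are just the figures I cited, not anything authoritative):

```python
# Word counts quoted in this post for the Homer entry and its linked sub-entries
homer_related = {
    "Homer": 3822,
    "Homeric Question": 1577,
    "Ancient accounts of Homer": 1183,
    "Homeric scholarship": 4799,
    "Homeric Greek": 582,
    "The Historicity of the Iliad": 1720,
}
flintstones = 5672  # word count of the Flintstones entry, as quoted above

total = sum(homer_related.values())
print(total)                     # 13683
print(total > 2 * flintstones)   # True: more than double the Flintstones entry
```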
Continue reading →